Latent Space: The AI Engineer Podcast

Extreme Harness Engineering for Token Billionaires: 1M LOC, 1B toks/day, 0% human code, 0% human review — Ryan Lopopolo, OpenAI Frontier & Symphony

73 min
Apr 7, 2026
Summary

Ryan Lopopolo from OpenAI discusses extreme harness engineering for AI agents, detailing how his team built a 1M+ line-of-code production application with zero human-written code using Codex, and introduces Symphony, an orchestration framework for managing multi-agent systems at enterprise scale.

Insights
  • Code is disposable when models are cheap and fast—focus shifts from code quality to agent capability and system architecture that enables autonomous operation
  • Human bottlenecks move from coding to review and decision-making; removing humans from synchronous loops requires building observability, testing, and guardrails into the codebase itself
  • Harness engineering (prompts, specs, skills, observability) matters more than model capability alone; the right scaffolding lets weaker models outperform stronger ones without proper structure
  • Enterprise AI deployment requires multi-stakeholder dashboards and safety specs customized per organization, not one-size-fits-all agent products
  • Software dependencies can be internalized and maintained by agents when code is cheap, reducing external maintenance burden and security surface area
Trends
  • Shift from human code review to post-merge review and automated agent-driven code quality enforcement
  • Specification-driven software distribution ('ghost libraries') as alternative to traditional open-source dependencies
  • Multi-agent orchestration frameworks (like Symphony) becoming critical infrastructure for scaling AI work beyond single-agent tasks
  • CLI-first tool design optimized for agent consumption rather than human readability
  • Enterprise AI platforms consolidating observability, safety, governance, and agent orchestration into unified dashboards
  • Reasoning models enabling agents to operate without predefined scaffolds, requiring new approaches to agent steering and decomposition
  • Continuous agent behavior analysis and skill distillation from session logs to improve team-wide agent performance
  • Internalization of software abstractions and dependencies as models improve, reducing reliance on external libraries
  • Integration of semantic layers and data ontologies into agent systems to enable autonomous data understanding and querying
  • Shift toward 'on-policy' harness design that works within model distribution rather than restricting outputs post-hoc
Companies
OpenAI
Ryan Lopopolo's employer; develops Codex, GPT-5 models, and Frontier enterprise platform for agent deployment
Stripe
Mentioned as part of Ryan's background experience before joining OpenAI
Brex
Mentioned as part of Ryan's background experience in fintech before OpenAI
Snowflake
Mentioned as part of Ryan's background experience in data platforms
Citadel
Mentioned as part of Ryan's background experience in quantitative finance
GitHub
Used for PR management and version control; agents interact via GitHub CLI for autonomous merging
Linear
Issue tracking system used by Ryan's team for task management and agent-driven ticket creation
Slack
Communication platform integrated with agents for notifications, feedback, and autonomous posting
Temporal
Workflow orchestration platform mentioned as inspiration for Symphony's process supervision model
Grafana
Dashboard platform used for observability; agents autonomously author and update Grafana dashboards
Datadog
Monitoring platform mentioned as ongoing cost despite agent-driven infrastructure management
Prometheus
Metrics collection system used in local development stack for agent observability
Jaeger
Distributed tracing system used for observability in agent-managed infrastructure
Electron
Framework used to build the 1M LOC native application managed by agents
React
UI framework used in the Electron application built and maintained by agents
Bazel
Build system evaluated and adopted for fast incremental builds required by agent workflows
Turbo
Monorepo build tool evaluated as part of optimizing build times for agent efficiency
NX
Monorepo build tool evaluated and ultimately selected for fast builds in agent-driven development
Elixir/BEAM
Programming language and runtime chosen for Symphony due to native process supervision capabilities
Lovable
Competitor platform for zero-to-one product scaffolding using AI agents
People
Ryan Lopopolo
Guest discussing harness engineering, agent orchestration, and enterprise AI deployment at scale
Brett Taylor
Mentioned as engaging with Ryan's harness engineering article and commenting on software dependencies
Andrej Karpathy
Quoted for the phrase 'English is the hottest new programming language'
Jared Palmer
Mentioned in context of monorepo build tool evaluation and design philosophy
Quotes
"Code is disposable. You're doing a lot of review. A lot of the article talks about how you want to rephrase everything as prompting. Everything is what the agent can't see. It's kind of garbage, right?"
Ryan Lopopolo~15:00
"The only fundamentally scarce thing is the synchronous human attention of my team. There's only so many hours in the day."
Ryan Lopopolo~18:00
"You have to step back, right? Like you need to take a systems thinking mindset to things and constantly be asking, where is the agent making mistakes? Where am I spending my time?"
Ryan Lopopolo~8:00
"The models fundamentally crave text. So a lot of what we have done here is figure out ways to inject text into the system."
Ryan Lopopolo~35:00
"You can just Codex things. You can just prompt things. It's a really glorious future we live in."
Host~95:00
Full Transcript
I do think that there is an interesting space to explore here with Codex, the harness, as part of building AI products, right? There's a ton of momentum around getting the models to be good at coding. We've seen big leaps in the task complexity with each incremental model release, where if you can figure out how to collapse a product that you're trying to build, a user journey that you're trying to solve, into code, it's pretty natural to use the Codex harness to solve that problem for you. It's done all the wiring and lets you just communicate in prompts to let the model cook. You have to step back, right? Like you need to take a systems-thinking mindset to things and constantly be asking, where is the agent making mistakes? Where am I spending my time? How can I not spend that time going forward, and then build confidence in the automation that I'm putting in place so I have solved this part of the SDLC? Alright, we're in the studio with Ryan Lopopolo from OpenAI. Welcome. Hi. Thanks for visiting San Francisco and thanks for spending some time with us. Yeah, thank you. I'm super excited to be here. You wrote a blockbuster article on harness engineering. It's probably going to be the defining piece of this emerging discipline. Thank you. It is fun to feel like we've defined the discourse in some sense. Let's contextualize a little bit. This is the first podcast you've ever done. Yes. And thank you for spending time with us. Where is this coming from? What team are you in? All that jazz. Sure, sure. I work on Frontier product exploration, new product development in the space of OpenAI Frontier, which is our enterprise platform for deploying agents safely at scale with good governance in any business. And the role of the team has been to figure out novel ways to deploy our models into packaged products that we can sell as solutions to enterprises. And you have a background. I'll just squeeze it in there. Snowflake, Brex, Stripe, Citadel. Yes. Yes.
So you've been exactly this kind of customer your entire career? Yes. The exact kind of customer that you'd want to sell it to. So I'll say, I didn't expect the background. When I look at your Twitter, I'm seeing the office there, stuff like this. So you've got the mindset of, like, full-send AI coding, stuff about slop, buckling in your laptop on your Waymos. And then I look at your profile and I'm like, oh, you're also credentialed on the other end. Oh, perfect mix. Perfect mix. It's quite fun to be an AI maximalist. If you're going to live that persona, OpenAI is the place to do it. And you're sort of a token billionaire, as the title says. Yeah. It certainly helps that we have no rate limits internally. And I can go, like you said, full send at this thing. Yeah. Yeah. So there's OpenAI Frontier, and you're a special team within OpenAI Frontier. We had been given some space to cook, which has been super, super exciting. And this is why I started with kind of an out-there constraint to not write any of the code myself. I was figuring if we're trying to make agents that can be deployed into enterprises, they should be able to do all the things that I do. And having worked with these coding models, these coding harnesses over six, seven, eight months, I do feel like the models are there enough, the harnesses are there enough, where they're isomorphic to me in capability and the ability to do the job. So starting with this constraint of I can't write the code meant that the only way I could do my job was to get the agent to do my job. And just a bit of background before that, this is basically the article. So what you guys did is five months of working on an internal tool, zero lines of code written by hand, over a million lines of code in the total code base. You say it was significantly faster than if you had done it by hand. So that was the mindset going into this, right? That's right.
Started with some of the very first versions of Codex CLI with the Codex mini model, which was obviously much less capable than the ones we have today, which was also a very good constraint, right? Quite a visceral feeling to ask the model to build you a product feature and it just not being able to assemble the pieces together, which kind of defined one of the mindsets we had for going into this, which is whenever the model just cannot, you always pop open the task, double-click into it, and build smaller building blocks that then you can reassemble into the broader objective. And it was quite painful to do this, honestly. The first month and a half was 10 times slower than I would be. But because we paid that cost, we ended up getting to something much more productive than any one engineer could be. Because we built the tools, the assembly station, for the agent to do the whole thing. But yeah, so onward to GPT-5, then 5.1, 5.2, 5.3, 5.4. To go through all these model generations and see their kind of quirks and different working styles also meant we had to adapt the code base to change things up when the model was revved. One interesting thing here is with 5.2, the Codex harness at the time did not have background shells in it, which meant we were able to rely on blocking scripts to perform long-horizon work. But with 5.3 and background shells, it became less patient, less willing to block. So we had to retool the entire build system to complete in under a minute. And this is not a thing I would expect to be able to do in a code base where people have opinions. But because the only goal was to make the agent productive, over the course of a week, we went from a bespoke Makefile build to Bazel to Turbo to NX, and I just left it there because builds were fast at that point. Interesting. Talk about Turbo to NX, that's interesting, because that's the other direction that other people have been going. Ultimately, I have not a lot of experience with actual front-end repo architecture.
You're talking JavaScript build systems here, so I'm like, I know the NX team, I know Turbo from Jared Palmer, and I'm like, yeah, that's an interesting comparison. The hill we were climbing, right, was make it fast. Is there a micro-frontend involved? Like, how complex? React, we're talking. Electron-based, single app sort of thing. And it must be under a minute, that's an interesting limitation. I'm actually not super familiar with the background shell stuff. It was probably talked about in the 5.3 release. It basically means that Codex is able to spawn commands in the background and then go continue to work while it waits for them to finish. So it can spawn an expensive build and then continue reviewing the code, for example. And this helps it be more time-efficient for the user invoking the harness. I guess, just to really nail this, why does one minute matter? Like, why not five? OK, we want the inner loop to be as fast as possible. One minute was just a nice round number, and we were able to hit it. And if it doesn't complete, it kills it or something? No, we just take that as a signal that we need to stop what we're doing, double-click, decompose the build graph a bit to get the time back under, so that we can enable the agent to continue to operate. It's almost like a ratchet, like forcing build-time discipline. Because if you don't, it'll just grow and grow. That's right. Can I mention that the software I work on currently is at 12 minutes? It sucks. This has been my experience with platform teams in the past, where you have an envelope of acceptable build times, you let it go up to the breach, and then you spend two, three weeks to bring it back down to the lower end of the envelope.
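The build-time ratchet described here can be sketched as a small check script. The 60-second budget is the number from the conversation; the function names and the exact signal wording are illustrative assumptions, not the team's actual tooling:

```python
"""Build-time ratchet: a minimal sketch of the "under a minute" discipline."""
import subprocess
import time

BUILD_BUDGET_SECONDS = 60.0  # "a nice round number, and we were able to hit it"


def timed_build(cmd: list) -> float:
    """Run the build command and return the wall-clock seconds it took."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start


def check_ratchet(elapsed: float, budget: float = BUILD_BUDGET_SECONDS) -> str:
    """Turn a timing into a signal the agent can act on.

    Over budget does not mean "kill the build"; per the episode, it means
    stop, double-click, and decompose the build graph until the time is
    back under the envelope.
    """
    if elapsed <= budget:
        return "ok"
    return f"over budget by {elapsed - budget:.1f}s: decompose the build graph"


if __name__ == "__main__":
    print(check_ratchet(42.0))  # a 42s build is inside the 60s budget
```

Wired into CI, the failure message itself becomes the steering text: the agent sees "decompose the build graph" rather than a silent timeout.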
But because tokens are so cheap and so insanely parallel with the model, we can just constantly be gardening this thing to make sure that we maintain these invariants, which means there's way less dispersion in the code and the SDLC, which means we can simplify in a way and rely on a lot more invariants as we write the software. You mentioned in your article that humans became the bottleneck, right? You kicked off as a team of three people. You're putting out a million lines of code, like 1500 PRs. Basically, what's the mindset there? So as much as code is disposable, you're doing a lot of review. A lot of the article talks about how you want to rephrase everything as prompting. Everything is what the agent can't see. It's kind of garbage, right? You shouldn't have it in there. So what's the high level of how you went about building it, and then how you addressed, OK, humans are just PR review. Like, how is the human in the loop for this? We've moved beyond even the humans reviewing the code as well. Most of the human review is post-merge at this point. So it's not even review; it's more keeping ourselves informed by reading it. Fundamentally, the model is trivially parallelizable, right? As many GPUs and tokens as I am willing to spend, I can have capacity to work on the code base. The only fundamentally scarce thing is the synchronous human attention of my team. There's only so many hours in the day. We have to eat lunch. I would like to sleep, although it's quite difficult to stop poking the machine, because it makes me want to feed it. You have to step back, right? Like you need to take a systems-thinking mindset to things and constantly be asking, where is the agent making mistakes? Where am I spending my time? How can I not spend that time going forward, and then build confidence in the automation that I'm putting in place so I have solved this part of the SDLC?
And usually what that has looked like is, we started needing to pay very close attention to the code, because the agent did not have the right building blocks to produce modular software that decomposed appropriately, that was reliable and observable and actually accrued into a working front end, these things, right? So in order to not spend all of our time sitting in front of a terminal, at most doing one or two things at a time, we invested in giving the model that observability, which is that graph that's shown here. Let's walk through this. Traces. Let's go first. We started with just the app, and the whole rest of it, from Vector through to all these logging and metrics APIs, was, I don't know, half an afternoon of my time. We have intentionally chosen very high-level, fast developer tools. There's a ton of great stuff out there now. We use mise a bunch, which makes it trivial to pull down all these Go-written VictoriaMetrics-stack binaries in our local development, a tiny little bit of Python glue to spin all these up, and off you go. One neat thing here is we have tried to invert things as much as possible, which is instead of setting up an environment to spawn the coding agent into, we spawn the coding agent first; that's the entry point, just Codex. And then we give Codex, via skills and scripts, the ability to boot the stack if it chooses to, and then tell it how to set some env variables so the app in local dev points at the stack that it has chosen to spin up. And this, I think, is the fundamental difference between reasoning models and the GPT-4-era models of the past, where those models could not think, so you had to put them in boxes with a predefined set of state transitions. Whereas here we have the model, the harness, be the whole box, and give it a bunch of options for how to proceed, with enough context for it to make intelligent choices. So a lot of that is around scaffolding, right?
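The "inverted" bootstrap, where the agent is the entry point and opts into booting the stack via env variables, might look roughly like this. The endpoint ports (except Jaeger's default UI port) and the variable names are assumptions for illustration, not the team's actual configuration:

```python
"""Sketch: the agent chooses to boot the observability stack, then points
the app at whatever it spun up via env vars it can eval in its own shell."""

# Illustrative local endpoints for a mise-managed VictoriaMetrics-style stack
STACK_ENDPOINTS = {
    "METRICS_URL": "http://127.0.0.1:8428",   # metrics ingest (assumed port)
    "LOGS_URL": "http://127.0.0.1:9428",      # logs ingest (assumed port)
    "TRACES_URL": "http://127.0.0.1:16686",   # Jaeger UI default port
}


def stack_env(enabled: bool) -> dict:
    """Env vars the app should see for the stack the agent chose to boot."""
    if not enabled:
        return {}  # the agent chose not to boot the stack; the app runs bare
    return dict(STACK_ENDPOINTS)


def export_lines(env: dict) -> list:
    """Render `export KEY=value` lines the agent can eval in its shell."""
    return [f"export {k}={v}" for k, v in sorted(env.items())]


if __name__ == "__main__":
    for line in export_lines(stack_env(enabled=True)):
        print(line)
```

The inversion is the point: the script is a menu the agent may invoke, not a box the agent is spawned inside.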
With previous agents you would define a scaffold, it would operate in that loop, try again. That's pivoted now that we have reasoning models; they seem to perform better when you don't have a scaffold. And you go into niches here too, like your spec.md and having a very short agent.md. Yes. Yeah. So you even lay out what it is here. I like the table of contents. Stuff like this really helps guide people, because everyone's trying to do this. This structure also makes it super cheap to put new content into the repository to steer both the humans and the agents. You reinvented skills, right? One big agent.md. Skills from first principles. Skills did not exist when we started doing this. You have a short, hundred-line overall table of contents, and then you have little skills, right? Core beliefs, the spec tracker. Yeah. The spec tracker and the quality score are pretty interesting, because this is basically a tiny little scaffold, like a markdown table, which is a hook for Codex to review all the business logic that we have defined in the app, assess how it matches all these documented guardrails, and propose follow-up work for itself. Before beads and all these ticketing systems, we were just tracking follow-up work as notes in a markdown file, which we could spawn an agent on a cron to burn down. There's this really neat thing that the models fundamentally crave text. So a lot of what we have done here is figure out ways to inject text into the system, right? When we get a page because we're missing a timeout, for example, I can just mention Codex in Slack on that page and say, I'm going to fix this by adding a timeout. Please update our reliability documentation to require that all network calls have timeouts. I have not only made a point-in-time fix, but also durably encoded this process knowledge around what good looks like.
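A markdown-table tracker like the one described, which a cron-spawned agent can parse to burn down follow-up work, can be sketched as follows; the column names and example tasks are invented for illustration:

```python
"""Sketch: parse a markdown-table tracker into tasks for a cron agent."""


def parse_tracker(markdown: str) -> list:
    """Parse a pipe-delimited markdown table into a list of row dicts."""
    lines = [l.strip() for l in markdown.strip().splitlines() if l.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows


def open_follow_ups(rows: list) -> list:
    """The tasks the cron-spawned agent should pick up next."""
    return [r["task"] for r in rows if r.get("status") == "open"]


TRACKER = """
| task                       | status |
|----------------------------|--------|
| add timeouts to network IO | open   |
| document retry policy      | done   |
"""

if __name__ == "__main__":
    print(open_follow_ups(parse_tracker(TRACKER)))
```

The table stays human-readable in the repo, which is exactly the "models crave text" property: the same file steers both people and agents.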
We give that to the root coding agent as it goes and does the thing, but you can also use that to distill tests out of, or a code review agent, which is pointed at the same things to narrow the acceptable universe of the code that's produced. I think one of the concerns I have with that kind of stuff is you think you're making the right call by making it persisted for all time across everything. Yes. But then you didn't think about the exceptions that you need to make, right? And then you have to roll it back. Part of it is also, in some sense, how well it can follow your instructions; it's somewhat a skill, right? So it determines when it uses the tools, right? It's not like it'll run on every call. It'll determine when it wants to check the quality score, right? Yeah. And we do, in the prompts we give these agents, allow them to push back. When we first started adding code review agents to the PR, it would be Codex CLI locally writes the change and pushes up a PR; on those PR synchronizations, a review agent fires, it posts a comment, and we instruct Codex that it has to at least acknowledge and respond to that feedback. And initially the Codex driving the code authoring was willing to be bullied by the PR reviewer, which meant you could end up in a situation where things were not converging. So we had to add more optionality to the prompts on both of these things, right? The reviewer agents were instructed to bias toward merging the thing, to not surface anything greater than a P2 in priority. We didn't really define P2, but we gave it a framework within which to score its output. And P0 is worse, right? Yes, it's the standard severity structure. P0 is: you will get paged if you merge this thing, right? But also on the code authoring agent's side, we gave it the flexibility to either defer or push back against review feedback, right? It happens all the time, right?
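The severity framework for reviewer agents, surface nothing above P2 and treat P0 as the only merge blocker, can be sketched as a tiny gate; the enum values beyond P0 and P2 and the function names are assumptions, since the episode deliberately leaves the levels loosely defined:

```python
"""Sketch: the reviewer-agent severity gate described in the conversation."""
from enum import IntEnum


class Priority(IntEnum):
    P0 = 0  # merging this would page someone
    P1 = 1
    P2 = 2  # lowest severity the reviewer is allowed to surface
    P3 = 3  # below the line: file it to the backlog instead


def should_surface(p: Priority) -> bool:
    """Bias toward merge: only P0..P2 become review comments."""
    return p <= Priority.P2


def blocks_merge(p: Priority) -> bool:
    """Only the worst finding stops the authoring agent from landing."""
    return p == Priority.P0
```

The point of the gate is convergence: the authoring agent must acknowledge surfaced findings, but only a P0 can keep the PR from landing, so the reviewer cannot bully the author into an endless loop.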
Like, I happen to notice something and leave a code review comment, which could blow up the scope by a factor of two. I usually don't mean for that to be addressed exactly in the moment. It's more of an FYI, file it to the backlog, pick it up in the next fix-it week sort of thing. And without the context that this is permissible, the coding agents are going to bias toward what they do, which is following instructions. Yeah. I do want to check in on a couple of things, right? Sure. So the code review agent, it can merge autonomously. I think that's something that a lot of people are uncomfortable with. And you have a list here of how much agents do: product code, tests, CI configuration and release tooling, internal dev tools, documentation, the eval harness, review comments, scripts that manage the repository itself, production dashboard definition files, like everything. Yes. And so they're just all churning at the same time. Is there like a big cord that any human on the team pulls to stop everything? Because we are building a native application here, we're not doing continuous deploy. So there is still a human in the loop for cutting the release branch. I see. We require a blessed, human-approved smoke test of the app before we promote it to distribution, these sorts of things. So you're working on the app; you're not building infrastructure where you have, like, nines of reliability, that kind of stuff. That's correct. That's correct. OK. And also, full recognition here that all of this activity took place in a completely greenfield repository. There should be no assumption that this applies generally. But this is a production thing you're going to ship to customers. Of course. Yeah, of course. So this is real. And one of the things there is you mentioned you started this as a repo from scratch. The onboarding first month or so was pretty rough. It was like working backwards, right? Yeah. And then you got to work with the system.
And now you're at that point where you're very autonomous. I'm curious, OK, so how human-in-the-loop is it? What are the bottlenecks that you wish you could still automate? And part of that is also, where do you see the model trajectory improving and offloading more of the human in the loop? We just got 5.4. Fantastic model, by the way. Yeah, yeah. It's the first one that's merged top-tier coding, so it's Codex-level coding, and reasoning, so general reasoning, both in one model. And computer vision. Now with 5.4, I can just have Codex write the blog post, whereas for this one, I had to bounce between chat. Oh, I might be out of a job. Oh my God. I don't know. You just gave me an idea for a completely AI newsletter that 5.4 could do. Yeah. I get it now. This sort of thing is just one example of closing the loop, right? Like the dashboard thing you mentioned. We have Codex authoring the JSON for the Grafana dashboards and publishing them, and also responding to the pages, which means when it gets the page, it knows exactly which dashboards are defined and what alerts, what alert was triggered by which exact log line in the code base, because all of the stuff is collated together. It has to own everything. Yes. Yeah. And it means that if we have an outage that did not result in a page, it has the existing set of dashboards available to it. It has the existing set of metrics and logs and can figure out where the gaps in the dashboards are, or in the underlying metrics, and fix them in one go. In the same way you would have a full-stack engineer be able to drive a feature from the back end all the way to the front end. So it seems like a lot of the work you guys had to do was, you as a small team are fully optimizing for the way that the model wants the software to be written. It's like trading human legibility for code legibility, agent legibility. How do you think that affects broader teams?
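Since a Grafana dashboard is ultimately just JSON, the "Codex authors the dashboards" loop can be approximated by generating that JSON from the same metric names the code emits. This is a minimal illustrative sketch, not Grafana's full dashboard schema, and the metric names are invented:

```python
"""Sketch: generate a minimal Grafana-style dashboard document from the
metric names the application code already emits, one panel per metric."""
import json


def dashboard(title: str, metrics: list) -> dict:
    """Build a minimal dashboard dict with one timeseries panel per metric."""
    return {
        "title": title,
        "panels": [
            {
                "id": i,
                "title": metric,
                "type": "timeseries",
                "targets": [{"expr": metric}],  # PromQL-style query per panel
            }
            for i, metric in enumerate(metrics, start=1)
        ],
    }


if __name__ == "__main__":
    doc = dashboard(
        "app-reliability",
        ["request_timeouts_total", "network_call_duration_seconds"],
    )
    print(json.dumps(doc, indent=2))
```

Because the dashboard definition lives in the repo next to the code that emits the metrics, the agent responding to a page can trace an alert back to the exact instrumentation line, and fill gaps in both in one change.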
So, one, at OpenAI, do you evangelize this, like, this is how software should be written? Like I can imagine, say you join a new team with this methodology, this mindset. There are ways that teams do code review, ways teams write code, ways teams are structured, and a lot of it is for human legibility. Should we all swap? Like, how does this play, one, broader into OpenAI, and then broader into software engineering? Are there teams that pick this up? Well, it's pretty drastic, right? You have to make a pretty big switch. Should they just full send? Yeah. The mindset is very much that I am removed from the process, right? I can't really have deep code-level opinions about things. It's as if I'm group tech leading a 500-person organization. Yeah. It's not appropriate for me to be in the weeds on every PR. This is why that post-merge code review thing is a good analog here, right? I have some representative sample of the code as it is written. I have to use that to infer what the teams are struggling with, where they could use help, where they're already moving quickly so I can pivot my focus elsewhere. Yeah. So I don't really have too many opinions around the code as it is written. I do, however, have a command base class, which is used to have repeatable chunks of business logic that come with tracing and metrics and observability for free. And the thing to focus on is not how that business logic is structured, but that it uses that primitive, because I know that's going to give leverage by default. Yeah. Yeah. Back to that sort of systems thinking. And you have part of that in your blog post: enforcing architecture and taste, how you set boundaries for what's used. There's also a section on redefining engineering and stuff. But yeah, it's just interesting to hear.
And as the models have gotten better, they have gotten better at proposing these abstractions to unblock themselves, which again lets me move higher and higher up the stack to look deeper into the future on what will ultimately block the team from shipping. Yeah. You mentioned, so this is primarily a one-million-line-of-code Electron app, but it manages its own services as well. So it's like a backend-for-frontend type thing. We do have a back end in there, but that's hosted in the cloud. This sort of structure is actually within the separate main and renderer processes within Electron. That's just how Electron works. Yeah. Of course. So I've also treated the MVC-style decomposition with the same level of rigor, which has been very fun. I have a fun pun. This is a tangent: MVC is model-view-controller, as any full-stack web dev knows, but my AI-native version of this is model-view-claw. Claws of the harness. That's right. That's right. I do think that there is an interesting space to explore here with Codex, the harness, as part of building AI products, right? There's a ton of momentum around getting the models to be good at coding. We've seen big leaps in the task complexity with each incremental model release, where if you can figure out how to collapse a product that you're trying to build, a user journey that you're trying to solve, into code, it's pretty natural to use the Codex harness to solve that problem for you. It's done all the wiring and lets you just communicate in prompts to let the model cook. Yeah, it's been very fun. And there's also a very engineering-legible way of increasing the plasticity, right? Yeah. Just give the model scripts, the same scripts you would already build for yourself. Yeah. Yeah.
So for listeners, this is Ryan saying that software engineering or coding agents will eat knowledge work, like the non-coding parts where you would normally think, oh, you have to build a separate agent for it. No, start with the coding agent and go out from there, which OpenClaw has. It's pi under the hood. Yes. You should define your task in code. Everything is a coding agent. By the way, since I brought it up, and it's probably the only place we bring it up: any OpenClaw usage from you? Any? No, no, not for me. I don't have any spare Mac minis rattling around my house. You can afford it. No, I'm just curious if it's changed anything in OpenAI yet, but it's probably early days. And then the other thing I want to pull on here is you mentioned ticketing systems and you mentioned PRs. And I'm wondering if both those things have to go away or be reinvented for this kind of coding. So git itself is very hostile to multi-agent work. Yeah, we make very heavy use of worktrees. But even then, I just did a podcast drop yesterday with Cursor, and they said they're getting rid of worktrees because it still has too many merge conflicts. It's still too unintuitive. But go ahead. The models are really great at resolving merge conflicts. Yeah. And to get to a state where I'm not synchronously in the loop in my terminal, I almost don't care that there are merge conflicts. Yeah, it's disposable. Yeah. We invoke a $land skill, and that coaches Codex to push the PR, wait for human and agent reviewers, wait for CI to be green, fix the flakes if there are any, merge upstream if the PR comes into conflict, wait for everything to pass, put it in the merge queue, deal with flakes until it's in main. And this is what it means to delegate fully, right? In a very large monorepo, this is probably a significant tax on humans to get PRs merged, but the agent is more than capable of doing this.
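The $land flow (push the PR, wait for reviews and CI, merge) maps onto standard GitHub CLI commands. This sketch encodes the sequence as data an agent could step through; the branch name and the squash strategy are assumptions, and the real skill also handles flake retries and conflict merges that a linear plan omits:

```python
"""Sketch: a $land-style plan expressed as an ordered list of CLI steps."""


def land_plan(branch: str) -> list:
    """The ordered git/gh steps to take a local branch all the way to main."""
    return [
        ["git", "push", "-u", "origin", branch],
        ["gh", "pr", "create", "--fill", "--head", branch],
        ["gh", "pr", "checks", "--watch"],            # wait for CI to go green
        ["gh", "pr", "merge", "--auto", "--squash"],  # enter the merge queue
    ]


def next_step(plan: list, completed: int):
    """What the agent runs next; None once the PR is in main."""
    return plan[completed] if completed < len(plan) else None
```

Expressing the plan as data rather than prose is the CLI-first idea from the episode: the agent can inspect, retry, or resume from any step without a human synchronously in the loop.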
And I really don't have to think about it other than keep my laptop open. Yeah. I used to be much more of a control freak, but now I'm like, yeah, actually you could do a better job at this than me. Yeah. If we have the right context. Yes. Anything else on harness engineering in general, on this piece? I just want to make sure we cover it. I think there's one thing that I maybe didn't make super clear in the article, that I heard about on Twitter. That's right, let's get into it. What's the chatter and what's your response? Ultimately, all the things that we have encoded in docs and tests and review agents and all these things are ways to put all the non-functional requirements of building high-scale, high-quality, reliable software into a space that prompt-injects the agent. We either write it down as docs, or we add lints where the error messages tell you how to do the right thing. So the whole meta of the thing is to basically tease out of the heads of all the engineers on my team what they think good looks like, what they would do by default, or what they would coach a new hire on the team to do to get things to merge. And that's why we pay attention to all the mistakes that the agent makes, right? This is code being written that is misaligned with some as-yet-not-written-down non-functional requirement. Sorry, what did the online people misunderstand? No, somebody just literally said that, and I was like, oh yeah, okay, this is the thing. This is what I've been doing. I see. Interesting. One other neat thing, which I totally did not expect, is folks were just taking the link to the article and giving it to pi or Codex and saying, make my repo like this. You achieve a whole recursion. And it was wildly effective. Really? It was wildly effective. No way. That's actually something I tried with pi yesterday. I didn't have that much time. I was like, I'll spec out something, and this is one of my things. I was like, okay, I have this article.
Can we just scaffold out what it would be like to run this? And I did it first as that. And then I was like, okay, let me take another little side repo and say, okay, what if I was to fully automate this, like this? I haven't written a line of code. It's a whole side thing, I'm doing voice, TTS, I'm just slopping stuff out, whatever, it's nothing production. I'm like, how would I make this work like this? And it's actually a really good way to learn what could be changed. It's just good analysis, right? You give it all the code, you give it all the context, you give it the article, and it walks you through it very well. That's right. That's right. I guess one more thing before we go to Symphony is I wanted to cover Bret Taylor's response. We had him on the show. He is your chairman, which is wild. Yeah. That he's reading your articles as well and getting engaged in them. He says software dependencies are going away, basically. They can just be, like, rendered. Yes. Response? 100%. 100% agree. You still pay Datadog, you still pay the rest. Thank you. Yep. The level of complexity of the dependencies that we can internalize is, I would say, low to medium right now, just based on model capability. What is medium? I would say a couple-thousand-line dependency is a thing that we could in-house, no problem, in an afternoon of time. One neat thing about it is that probably most of that code you don't even need. By in-housing an abstraction, you can strip away all the generic parts of it and only focus on the specific thing you need to enable as you're building. I've been calling this the end of the bullshit plugins. Yeah. Because there's so much that, when I publish an open source thing, I want to accept everything and be liberal in what I accept. This is Postel's law. But that means there's so much bloat, so much overhead.
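As a toy illustration of that internalization argument (hypothetical, not from the episode): instead of depending on a general-purpose slug library that liberally handles every locale and option, an agent can in-house the one narrow behavior the app actually needs, and nothing else:

```python
import re
import unicodedata

# Hypothetical in-housed replacement for a generic "slugify" dependency.
# A library version accepts every input liberally (Postel's law) and carries
# options we never use; this keeps only the behavior our app needs:
# ASCII-fold, lowercase, hyphen-separate.

def slugify(title: str) -> str:
    # Fold accented characters to their ASCII base letters.
    ascii_text = (
        unicodedata.normalize("NFKD", title)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Lowercase, then collapse every run of non-alphanumerics into one hyphen.
    return re.sub(r"[^a-z0-9]+", "-", ascii_text.lower()).strip("-")
```

Ten lines the agent owns and can freely rewrite, versus a transitive dependency tree it has to track upstream.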
One other neat thing about this too is, when we deploy Codex security review on the repo, it is able to deeply review and change the internalized dependencies with much lower friction than it would be to push patches upstream, wait for them to be released, pull them down, make sure that's compatible with all the transitives I have in my repo, and things like that. So it's also much lower friction to internalize some of these things if code is free, because the tokens are cheap, sort of thing. I think the only argument I have against this is basically scale testing, which obviously the larger pieces of software have, like Linux, MySQL, even the databases, things like Temporal; and then maybe security testing, where classically, is it Linus Torvalds who said that in open source, sunlight is the best disinfectant? Many eyes. Many eyes. And if you inline your dependencies and Codex them up, you're going to have to relearn mistakes from other people. Yep. And to internalize that dependency, you're back to zero and you have to start reassembling all those bits and pieces to have high confidence in the code as it is written. Yeah. Even in the first part of the intro of this, you basically mentioned that everything was written by Codex, including internal tooling, right? You know, internal tooling, like when you're visualizing what's going on, it's writing that too. Yeah, I'm building internal tooling all the time now, and I just show it off, and they're like, how long did you spend on this? And I didn't spend any time, I just prompted it. Very funny story here. Yeah, go ahead. We had deployed our app to the first dozen users internally. There were some performance issues, so we asked them to export a trace for us as a tarball, gave it to our on-call engineer, and he did a fantastic job of working with Codex to build this beautiful local devtool Next.js app where you drag and drop the tarball in and it visualizes the entire trace. It's fantastic.
It took an afternoon, but none of this was necessary, because you could just spin up Codex, give it the tarball, ask the same thing, and get the response immediately. So in a way, optimizing for human legibility of that debugging process was wrong. It kept him in the loop unnecessarily, when instead he could have just let Codex cook for five minutes and gotten the same thing. Yeah, you're going to find your instincts here of this is how we used to do it, or this is how I would have solved it. Yeah. In this local observability stack, sure, you can deploy Jaeger to visualize the traces, but I wouldn't expect to be looking at the traces in the first place, because I'm not going to write the code to fix them. Yeah. So basically it needs to be this kind of in-house stack, owning the whole loop. I think that is very well established, and it sounds like you might be sharing more about that in the future. Right. Yeah. We're going to talk about Symphony in a little bit, but the way we distributed it is as a spec, which I think folks are calling ghost libraries on Twitter. That is such a cool name. It does mean it becomes much cheaper to share software with the world. Right. You define a spec for how you could build your own, specifying as much as is required for a coding agent to reassemble it locally. The flow here is very cool. We have taken all the scaffolding that existed in our proprietary repo, spun up a new one, and asked Codex, with our repo as a reference, to write the spec. We tell it: spin up a tmux, spawn a disconnected Codex to implement the spec, wait for it to be done, spawn another Codex in another tmux to review the spec, review the implementation compared to upstream, and update the spec so it diverges less. And then you just loop, over and over, Ralph-style, until you get a spec that is able to reproduce the system as it is with high fidelity. It's fantastic.
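That implement/review/update cycle is essentially a fixed-point iteration over the spec. A stripped-down sketch of the loop, with the two agent calls injected as plain functions (everything about the agents themselves, such as the tmux-spawned Codex sessions, is a placeholder here):

```python
# Hypothetical "Ralph-style" loop: regenerate the spec until a review pass
# reports no divergence from the reference implementation. The two callables
# stand in for disconnected coding agents.

def ralph_loop(spec, implement, review, max_iters=20):
    """Iterate spec -> implementation -> revised spec until it stops changing."""
    for i in range(max_iters):
        artifact = implement(spec)        # agent 1: build from the spec alone
        revised = review(spec, artifact)  # agent 2: compare vs. upstream, patch spec
        if revised == spec:               # fixed point: spec reproduces the system
            return spec, i
        spec = revised
    raise RuntimeError("spec did not converge within the iteration budget")
```

The interesting property is the stopping condition: the loop only terminates once the review agent can no longer find divergence, which is what "high fidelity" means here.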
And you're basically not adding any of your human bias in there. Right. That's correct. A lot of times people write a spec and go, okay, I think it should be done this way, and you'll riff on something, and all along the agent could have just handled it; you're still scaffolding, in a sense. Right. I want it done this way. It can determine that spec better. That's right. That's right. Part of me, I've been working a lot on evals recently, and part of me is wondering if an agent can produce a spec that it cannot solve. Is it always capable of the things it can imagine, or can it imagine things that are impossible for it to do? I think with Symphony, there's this axis where you have things that are easy or hard, established or new. Right. And I think things that are hard and new are still something where the models need humans to drive. Yeah. But I think those other quadrants are largely solved, given the right scaffold and the right thing that's going to drive the agent to completion. It's crazy, but it means that the humans, the ones with limited time and attention, get to work on the hardest stuff: the problems where it's pure white space out in front, or the deepest refactorings where you don't know what the proper shape of the interfaces is. And this is where I want to spend my time, because it lets me set up for the next level of scale. Yeah. Yeah. Amazing. Let's introduce Symphony. I think we've been mentioning it every now and then. Elixir, interesting option. Yeah. Yeah. Again, the Elixir manifestation here is just a derivative. Was it model-chosen? Yeah. Yeah. And it chose that because the process supervision and the GenServers are super amenable to the type of process orchestration that we're doing here.
You are essentially spinning up little daemons for every task that is in execution and driving it to completion, which means the model gets a ton of stuff for free by using Elixir and the BEAM. I had to go do a crash course in the BEAM and Elixir. And I think most people are not operating at the scale of concurrency where you need that, but it is a good mental model for the resumability and all those things, and these are things I care about. But tell me the origin story of Symphony: what do you use it for, how did it form, maybe any abandoned paths that you didn't take? At the end of December, we were at about three and a half PRs per engineer per day. This was before 5.2 came out at the beginning of January. Everyone gets back from holiday with 5.2 and, with no other change to the repository, we were up at five to ten PRs per day per engineer. And I don't know about y'all, but it is very taxing to constantly be switching like that. I was pretty capped out at the end of the day. Again, where are the humans spending their time? They're spending their time context-switching between all these active tmux panes to drive the agent forward. Yeah, no way. Yeah. So, again, let's build something to remove ourselves from the loop. And there was a frantic sprint here to find a way to remove the need for the human to sit in front of their terminal. So, a lot of experimentation with dev boxes and automatically spinning up agents. It seems like a fantastic end state where my life is a beach, I open my laptop twice a day and say yes or no to these things. And this is again a super, super interesting framing for how the work is done, because I become less latency-sensitive. I have way less attachment to the code as it is written. I've had close to zero investment in the actual authorship experience, so if it's garbage, I can just throw it away and not care too much about it.
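Elixir gets that restart-on-failure behavior for free from OTP supervisors. As a rough analog of what a supervised per-task daemon means, here is a tiny Python sketch; the restart policy and the idea of re-running the whole task from scratch are illustrative, not Symphony's actual implementation:

```python
# Hypothetical analog of OTP-style one_for_one supervision: run one worker per
# task and restart it on failure, up to a budget, the way a supervisor would.

def supervise(task, *, max_restarts=3):
    """Run `task` until it succeeds, restarting it on exceptions."""
    failures = 0
    while True:
        try:
            return task()
        except Exception:
            failures += 1
            if failures > max_restarts:
                raise RuntimeError("task exceeded restart budget")
            # In a real system: back off, re-prompt the agent, rebuild state.
```

The point of the analogy is that crash-and-restart is the normal path, not an error path, which is exactly the "trash the worktree and start again" posture described for rework below.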
In Symphony, there's this rework state where, once the PR is proposed and it's escalated to the human for review, it should be a cheap review. It is either mergeable or it is not. And if it's not, you move it to rework. The Elixir service will completely trash the entire worktree and PR and start again from scratch. And this is that opportunity again to say, why was it trashed? Right. What did the agent get wrong? Yeah, yeah. Yeah. Fix that before moving the ticket back to in progress. Yeah. Why is this not in the Codex app? I guess you guys are ahead of the Codex app. Yeah. So the way the team has been working is basically to be as AI-pilled as possible and spread what we learn. And a lot of the things we have worked on have fallen out into a lot of the products that we have. We were in deep consultation with the Codex team to have the Codex app be a thing that exists, right? To have skills be a thing that Codex is able to use, so we didn't have to roll our own; to put automations into the product, so all of our automatic refactoring agents didn't have to be these hand-rolled control loops. It has been really fantastic to be, in a way, unanchored from the product development of Frontier and Codex, and just very quickly try to figure out what works and then later find the scalable thing that can be deployed widely. It's been a very fun way to operate. It's certainly chaotic. I have lost track very often of what the actual state of the code looks like, because I'm not in the loop. There was one point where we had wired Playwright directly up to the Electron app with MCP. I'm pretty bearish on MCPs, because the harness forcibly injects all those tokens into the context and I don't really get a say over it. They mess with auto-compaction. The agent can forget how to use the tool. There are probably only, what, three calls in Playwright that I actually ever want to use, so I pay the cost for a ton of things.
Somebody vibed a local daemon that boots Playwright and exposes a tiny little shim CLI to drive it. And I had zero idea that this had occurred, because to me, I run Codex and it's able to drive the browser. Yeah. No knowledge of this at all. So in human space we have had to spend a lot of time doing synchronous knowledge sharing. We have a daily standup that's 45 minutes long, because we almost have to fan out the understanding of the current state. I was going to say, this is good for single-human multi-agent, but multi-human multi-agent is a whole combinatorial explosion of stuff. Yeah. And this is fundamentally why we have such a rigid, 10,000-engineer-level architecture in the app: we have to find ways to carve up the space so people are not trampling on each other. Sorry, I don't get the 10,000 thing. Did I miss that? The structure of the repository is like 500 npm packages. It's architected to excess for what you would consider normal for a seven-person team. But if every person is actually 10 to 50, then being super, super deep into decomposition and sharding and proper interface boundaries makes a lot more sense. Yeah. To me, that's why I talked about microfrontends and the architectures from that world. But cool. Just coming back to this, I don't know if you have other thoughts on orchestrating so much work through this. Any aha moments? It'll be interesting to see where. Okay. So right now you picked Linear as your issue tracker, right? Or is it actually Linear? This is actually Linear. Oh, it's Linear. I never look at it; the demo video I had to download to run. Because I'm a Slack maxi, but Linear is also really good. Yes. We do make good use of Slack. We fire off Codex to do all these little last-mile fix-ups, the things that sink that knowledge into the repository. It's super cheap. Yeah.
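A shim like the one described, a long-lived daemon plus a tiny CLI exposing only the handful of calls the agent ever needs, might look like this. The command names, the verbs, and the daemon endpoint are all invented for illustration; this is not the team's actual tool:

```python
import argparse
import json
import urllib.request

# Hypothetical shim CLI over a long-lived local browser daemon. Instead of a
# full MCP surface injected into context, the agent gets exactly three verbs,
# each token-cheap to discover and invoke.

DAEMON = "http://127.0.0.1:9222/cmd"  # assumed local daemon endpoint

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="browser", description="tiny Playwright shim")
    sub = p.add_subparsers(dest="cmd", required=True)
    sub.add_parser("goto").add_argument("url")
    sub.add_parser("click").add_argument("selector")
    sub.add_parser("text").add_argument("selector")  # read text content back

    return p

def main(argv=None) -> str:
    args = build_parser().parse_args(argv)
    payload = json.dumps(vars(args)).encode()
    req = urllib.request.Request(
        DAEMON, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # daemon does the Playwright work
        return resp.read().decode()
```

The design choice is the one from the conversation: the agent only ever pays tokens for three verbs, and the daemon holds all the Playwright state.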
Do it in Codex. My biggest plug is that OpenAI needs to build Slack. You need to own Slack: build your own, turn this into Slack. Yeah. I would say that if we think that we want these agents to do economically valuable work, which is the mission, right, we want AI to be deployed widely to do economically valuable work, then we need to find ways for them to naturally collaborate with humans, which means collaboration tooling, I think, is an interesting space to explore. Yeah, totally. Yeah. GitHub, Slack, Linear. Yeah. That was kind of my thing. Okay. Where we are right now: Codex started as the Codex model, then the CLI and the rest, and the app. The app lets me shoot off multiple Codexes in parallel, but there's no great team collaboration for Codex. And it seems like your team had some say in what comes out, right? You talked to them and the Codex app kind of came from there. If you guys are out on the frontier, what stuff might you not focus on, and what do you expect other people to be building? For people that are at the 5x, 50x level, should you build stuff that's very niche for your workflow, for your team? Should it be more general so other people can adopt it, or stay niche? Because part of it is just, okay, is everything just internal tooling? Does every team have everything their own way? The way our team operates has its own ways we like to communicate. Or is there a broader way to do it? Is it something like an issue tracker? Just thoughts, if you want to riff on that. I think TBD; we have not figured this out in a general way. I do think that there is leverage to be had in making the code and the processes as much the same as possible. If you think that code is context, code is prompts, it's better from the agent-behavior perspective to be able to look at a package in directory XYZ and not have to page so deeply into directory ABC, because they have the same structure.
They use the same language. They have the same patterns internally. And that same leverage comes from aligning on a single set of skills that you're pouring every engineer's taste into, to make sure that the agent is effective. So in our code base, we have, I think, six skills. That's it. And if some part of the software development loop is not being covered, our first attempt is to encode it in one of the existing set of skills, which means that we can change the agent behavior more cheaply than changing the human driver's behavior. Yeah. Have you ever experimented with agents changing their own behavior? We do. Yeah. A parent agent changing a sub-agent's behavior is something we have some bits of, for skill distillation. So for example, there's one neat thing you can do with Codex, which is just point it at its own session logs and ask it to tell you how you can use the tool better. It's introspection: ask it to do things better. What could I have done this session better? What skills should I add? I like the modification of 'you can just do things' to 'you can just ask the agent to do things.' Yeah, you can just Codex things. This is a silly emoji that we have: you can just Codex things. You can just prompt things. It's a glorious future we live in. But okay, you can do that one-on-one. We're actually slurping these up for the entire team into blob storage and running agent loops over them every day to figure out: where can the team do better, and how do we reflect that back into the repository? Yeah, so everybody benefits from everybody else's behavior for free. Same for PR comments, right? These are all feedback. They mean the code as written deviated from what was good. A PR comment, a failed build: these are all signals that mean at some point the agent was missing context. We've got to figure out how to slurp that up and put it back in the repo. By the way, I do exactly this. I use Claude Code for most of my work.
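The slurp-and-distill idea can be sketched as a trivial batch job: collect session logs, count the recurring failure signals, and surface the top candidates for a new skill, lint, or doc fix. The log schema here (a flat list of events with an "error" field) is invented for illustration:

```python
from collections import Counter

# Hypothetical distillation pass over agent session logs. Assumes each log
# event is a dict with an "error" field naming what went wrong (schema invented).

def distill(sessions, top_n=3):
    """Return the most frequent recurring failure signals across all sessions."""
    counts = Counter(
        event["error"]
        for session in sessions
        for event in session
        if event.get("error")
    )
    # Only signals seen more than once are worth encoding as a skill or lint;
    # one-off mistakes are noise.
    return [error for error, n in counts.most_common(top_n) if n > 1]
```

In the flow described in the episode, the output of a pass like this is what gets folded back into the repo so everyone's agent benefits.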
Claude Code is a nice product. Yes. And I think you would agree. I always have it tell me: what do I do better next time? And that's the meta-programming reflection thing. So I was thinking, you have six layers in Symphony, and almost like a zeroth layer. The six layers are policy, configuration, coordination, execution, integration, observability. We've talked about a couple of these, but the zeroth layer is the: okay, are we working well? Can we improve how we work? Yes. Can I modify my own workflow, with a markdown file or something? I don't know. Yeah, of course. Yeah. Of course you can. This thing is also able to cut its own tickets, because we give it full access. Yeah. Make a ticket to have it cut tickets. You can put in the ticket that you expect it to file its own follow-up work. Self-modifying. Yeah. Yeah. Don't put the agent in a box. Give the agent full access over its domain. I had a mental reaction when you said don't put the agent in a box. I think you should put it in a box; it's just that you're giving the box everything it needs. Yeah. Context and tools. As developers, we're used to calling out to different systems, but here you use the open source things, the Prometheus, whatever, and you run it locally so that you can have the full loop, I assume. Yep. I think another thing: you want to minimize cloud dependencies. You also want to make sure that you think about what the agent has access to. What does it see? Does it go back into the loop? From the most basic sense of: you let it see its own calls and traces, it can determine where it went wrong. Are you feeding that back in? At the most basic level, you want it to see exactly what's input and output. Does the agent have access to what is being outputted, right? It can self-improve a lot of these things. It's all text, right?
My job is to figure out ways to funnel text from one agent to the other. It's so strange. Way back at the start of this whole AI wave, Andrej Karpathy was like, English is the hottest new programming language. It's here. It's here. Yeah. Okay. A lot of software, a lot of stuff, there's a GUI, it's made for the human. We're seeing the evolution of CLIs for everything. Right. All tools have CLIs, and agents can just use them well. Do we get good vision? Do we get good little sandboxes? Right now, this is a really effective way. Right. Models love to use tools. They love to bash. They love to read through text. So slap a CLI on it and let it loose. That works for everything. It does. Yeah. We've also been adapting non-textual things to that shape in order to improve model behavior in some ways. Right. We want the agent to be able to see the UI. Agents do not perceive visually in the same way that we do. They don't see a red box; they see 'red box button.' Right. They see these things in latent space. We should have a thing that dings every time you say 'latent space.' Ding. Anyway, if we want to actually make it see the layout, it's almost easier to rasterize that layout to ASCII art and feed it in to the agent. Ha. And there's no reason you can't do both, right? To further refine how the model perceives the object it's manipulating. Cool. Do you want to talk about a couple more of these layers that might bear more introspection, or that you have personal passion for? I will say that the coordination layer here was a really tricky piece to get right. Let's do it. Yeah, I'm all about that. This is the Temporal-esque thing. This is where, when we turned the spec into Elixir, the model takes a shortcut, right?
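The rasterize-to-ASCII idea is simple enough to sketch: project each UI element's bounding box onto a character grid so a text-only agent can read spatial relationships directly. The box format and glyph choices are invented for illustration:

```python
# Hypothetical layout rasterizer: turn a UI layout (labeled boxes) into ASCII
# art so a text-only agent can "see" where elements sit relative to each other.

def rasterize(boxes, width=24, height=6):
    """boxes: list of (x, y, w, h, char). Returns an ASCII grid as a string."""
    grid = [["."] * width for _ in range(height)]
    for x, y, w, h, char in boxes:
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                grid[row][col] = char
    return "\n".join("".join(row) for row in grid)
```

Fed alongside the accessibility-tree text, a grid like this gives the model both "what the elements are" and "where they are," which is the do-both refinement mentioned above.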
It's like: oh, I have all these primitives that I can make use of in this lovely runtime that has native process supervision, which is, I think, a neat way to have taken the spec and made it more accessible, by making choices that naturally map the domain. Right. In the same way that you would prefer a TypeScript monorepo if you are doing full-stack web development, right? Because the ability to share types across the front and back end reduces a lot of complexity. And because... This is what GraphQL used to be. That's right. And I don't know if it's still alive, but... There are no humans in the loop here. So my own personal ability to write or not write Elixir doesn't really have to bias us away from using the right tool for the job. It is just wild. Love it. I love it. Yeah. I wonder if any languages struggle more than others because of this. I feel like everyone has their own abstractions that would make sense, but maybe it might be slower, it might be more faulty, where you'd just have to kick the server every now and then. I don't know. I think the observability layer is really well understood. Integration layer: MCP is dead. I think all of these are just a really interesting hierarchy to travel up and down. It's a common language for people working on the system to understand. The policy stuff is really cool, right? Yeah. You don't really have to build a bunch of code to make sure the system waits for CI to pass. It's your institutional knowledge. Yeah, you just give it the gh CLI with some text that says CI has to pass. It makes the maintenance of these systems a lot easier. Do you think that CLI maintainers need to do anything special for agents, or is it good as is? Because I don't think that when people made the gh CLI, they anticipated this happening. That's correct. The gh CLI is fantastic. It's great, super intuitive. Everyone, go try gh repo create, gh pr create, and then approve it with the PR number. Right.
gh pr view 153, whatever, and it pulls it up. Basically, my only interaction with the GitHub web UI at this point is gh pr view --web. Exactly. Glance at the diff and be like, sure thing, send it. But the CLIs are nice because they're super token-efficient, and they can be made more token-efficient really easily. I'm sure you all have seen: I go to Buildkite or Jenkins and I just get this massive wall of build output. And in order to unblock the humans, your developer productivity team is almost certainly going to write some code that parses the actual exception out of the build logs and sticks it in a sticky note at the top of the page. And you basically want CLIs to be structured in a similar way. Right. You're going to want to pass --silent to Prettier, because the agent doesn't care that every file was already formatted. It just wants to know whether it's formatted or not, so it can then go run a write command. Similarly, in our pnpm distributed script runner, when we had one, when you do --recursive it produces an absolute mountain of text, and almost all of that is for passing test suites. So we ended up wrapping all of this in another script, which you can vibe out, to suppress the noise so that the one channel outputs only the failing parts of the tests. You pipe errors to stderr versus standard out, that kind of thing. I used to maintain a CLI for my company, and yeah, this is very close to my heart. But you're vibing away my job. That's right. Cool. Any other things? This is a long spec, I appreciate that; it's got a lot of strong opinions in here. Anything else we should highlight? Obviously you could spend the whole day going through some of these, but I do think some of these got a lot of care, and for some of this you might want to tell people: hey, take this, but make it your own.
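The wrapper described for the recursive test runner boils down to one rule: swallow everything except the failing sections. A toy version of that filter; the log format, with PASS/FAIL headers followed by indented detail lines, is an assumption, since real runners vary:

```python
# Hypothetical token-efficiency filter for a test runner's wall of output.
# Assumes passing suites print a line starting with "PASS", failing ones with
# "FAIL", and that indented lines are details belonging to the header above
# (format invented for illustration).

def failures_only(log: str) -> str:
    """Keep only FAIL headers and their indented detail lines."""
    keep, in_failure = [], False
    for line in log.splitlines():
        if line.startswith("FAIL"):
            in_failure = True
            keep.append(line)
        elif line.startswith(("PASS", "SKIP")) or (line and not line[0].isspace()):
            in_failure = False       # any new top-level section ends the failure
        elif in_failure and line.strip():
            keep.append(line)        # indented stack trace / assertion detail
    return "\n".join(keep)
```

The agent then reads a handful of lines instead of a mountain, which is exactly the sticky-note-at-the-top-of-the-page move, done pre-emptively for the model.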
Fundamentally, software is made more flexible when it's able to adapt to the environment in which it is deployed, which means that things like Linear, or GitHub even, are specified within the spec but are not required pieces of it. There's a more platonic ideal of the thing where you could swap in, say, Jira or Bitbucket. But being able to tightly specify things like the ID formats or how the Ralph loop works for the individual agents basically means you can get up and running with a fully specified system quickly, which you then evolve later on. We never intended for this to be a static spec that you can never change. It's more like a blueprint to get something worth starting up and running, for you to then vibe on later to your heart's content. You have code and scripts in here; it's really all, I think, one very good prompt. It's just a very long prompt. Fundamentally, the agents are good at following instructions. So give them instructions and it will improve the reliability of the result. Much like the way we use Symphony ourselves, we don't want folks to have to monitor the agent as it is vibing the system into existence. So being very opinionated, very strict around what the success criteria are, means that our deployment success rate goes up. Yeah. Means we don't have to get tickets on this thing. It all goes back to that code-is-disposable idea, right? Early on, when you only had the CLI, you'd kick off a Codex run, it would take two hours, and you would want to monitor it. Okay, I'm in the workflow of just using one, I don't want to go down the wrong path, I'll cut it off. Versus just shooting off several. That was my favorite thing about the Codex app, right? It's okay: one of them will probably be right, one of them might be better. Stop overthinking it. My first example of this is probably deep research. When you put out deep research, I asked it something about LLMs.
It thought it was some legal thing and spent an hour, then came back with a report completely off the rails. And I was like, okay, I've got to monitor this thing a bit. No, don't monitor it. You want to build it so it goes the right way. You don't want to sit there and babysit, right? You don't want to babysit your agents. With that deep research query that you made, looking at the bad result, you probably figured out you needed to tweak your prompt a bit, right? That's the guardrail that you fed back into the code base, or your prompt, to further align the agent's execution. The same sort of concept applies there too. When you talk to customers, how are they feeling? For Symphony, I think we have none, right? This is a thing we have put out into the world; Symphony itself is internal. As long as you're happy, you're the customer. That's right. But as an external view, I'd say folks are very excited about this way of distributing software and ideas cheaply. For us as users, it has again pushed productivity 5x, which means I think there's something here that's a durable pattern around removing the human from the loop and figuring out ways to trust the output. The video that's shared here is the same sort of video we would expect the coding agent to attach to the PR it creates. That's part of building trust in the system. And that's, to me, fundamentally what has been cool about building this: it more closely pushes the persona of the agent working with you toward being a teammate. I don't shoulder-surf you for the tickets that you work on during the week. I would never think that I would want to do that. I wouldn't want a screen recording of your entire session in Cursor or Claude Code. I would expect you to do what you think you need to do to convince me that the code is good and mergeable, and compress that full trajectory in a way that is legible to me, the reviewer.
You can just do that, because Codex will absolutely sling some ffmpeg. It's great. ffmpeg is the OG god CLI. Yeah, the Swiss army chainsaw. I used to say there's a micro-SaaS in every flag of ffmpeg. Oh, for sure. Right. I mean, for sure. Just host it as a service, put a UI on it; people who don't know ffmpeg will pay for it. When we were first experimenting with this, it was a wild feeling to be at the computer with windows just popping up all over the place and getting captured, and files appearing on my desktop. It very much felt like the future, to have a thing controlling my computer for actual productive use. I'm just there keeping it awake, jiggling the mouse every once in a while. That's what some office workers do; they buy a mouse jiggler. That's right. One thing I'd ask, okay, since code is disposable and you async shoot off a bunch of agents: are you always an extra-high-reasoning guy, and where do you see Spark, 5.3 Spark? There's a lot of me wanting to make quick changes. I'm not going to open up an IDE, I'm not going to do anything, but I will say, okay, fix this little thing, change a line, change a color. Spark is great for that. But am I still the bottleneck? Why don't I just let that go and riff on it? Spark is such a different model compared to the extra-high reasoning that you get in 5.2. It is a different model, a different architecture; it doesn't support that. It's incredibly fast. A smaller model. I have not quite figured out how to use it yet, to be honest. At first, I was adapting it to the same sorts of tasks I would use extra-high reasoning for. Yeah, I know. And it would blow through three compactions before writing a line of code. And that's another big thing with 5.4, right? Million-token context, which is huge in agentics, right?
You can just run for longer before you have to compact. The more tokens you can spend on a task before compacting, the better you'll do. That's right. That's right. I'm not sure how to deploy Spark. I think your intuition is right that it's very great for spiking out prototypes, exploring ideas quickly, doing those documentation updates. It is fantastic for us in taking that feedback and transforming it into a lint, where we already have good infrastructure for ESLint in the code base. These sorts of things it's great at. And it allows us to unblock quickly, doing those antifragile healing tasks in the code base. Yeah, that makes sense. So you guys are pushing models to the freaking limit. What can current models not do well yet? They're definitely not there on being able to go from new product idea to prototype in a single one-shot. This is where I find I spend a lot of time steering: translating a static mock for a net-new thing, with no existing screens, into a product that is playable. Similarly, while this has gotten better with each model release, the gnarliest refactorings are the ones that I spend the most time with, right? The ones where I am interrupting the most, the ones where I am now double-clicking to build tooling to help decompose monoliths and things like that. This is a thing I only expect to get better. Over the course of a month, we went from the low-complexity tasks to low-complexity and big tasks, pushing out in both these directions. So this is what it means to not bet against the model. You should expect that it is going to push itself out into these higher and higher complexity spaces, so the things we do are robust to that. It just basically means I'll be able to spend my time elsewhere and figure out what the next bottleneck is. I do think it's also a bit of a different type of task, right?
Codex is really good at codebase understanding, working with codebases, but companies like Lovable, Bolt, Replit, they solve a very different problem: the scaffold of zero to one, right? You have the idea of the product and it's there; there are people working on that. Models are also pushing step-function changes there. It's just different than the software engineering agent today, right? Like I said, the model is isomorphic to myself. The only thing that's different is figuring out how to get what's in here into context for the model. And for these whitespace sorts of projects, I myself am just not good at it, which means that often over the agent trajectory, I realize the bits that we're missing, which is why I find I need to have the synchronous interaction. And I expect the right harness, the right scaffold, is able to tease that out of me or refine the possible space, right? To be super opinionated around the frameworks that are deployed, or to put a template in place. These are ways to give the model all those non-functional requirements, that extra context to anchor on, and avoid that wide dispersion of possible outcomes. Thank you for that. I want to talk a little bit about Frontier. Yeah, sure. You guys announced it maybe a month ago, and there are a few charts in here; this is basically your enterprise offering, is how I view it. Is there one product or are there many? I can't speak to the full product roadmap here, but what I can say is that Frontier is the platform by which we want to do AI transformation of every enterprise, from big to small. And the way we want to do that is by making it easy to deploy highly observable, safe, controlled, identifiable agents into the workplace. We want it to work with your company's native IAM stack. We want it to plug into the security tooling that you have. We want it to be able to plug into the workspace tools that you use. So you're just going to be shipping specs, right? 
We expect that there will be some harness things there. The Agents SDK is a core part of this, to enable both startup builders and enterprise builders to have a works-by-default harness that is able to use all the best features of our models, from the shell tool down to the Codex harness with file attachments and containers and all these other things that we know go into building highly reliable, complex agents. We want to make that great, and we want to make it easy to compose these things together in ways that are safe. For example, one thing that's really cool about the gpt-oss-safeguard model is that it ships the ability to interface with a safety spec. Safety specs are things that are bespoke to enterprises. We owe it to these folks to figure out ways for them to instrument the agents in their enterprise to avoid exfiltration in the ways they specifically care about, to know about their internal company code names, these sorts of things. So providing the right hooks to make the platform customizable, but also mostly working by default for folks, is the space we are trying to explore here. Yeah. And the Snowflakes of the world just need this, right? Yeah. The Brexes of the world, the Stripes. Yeah. Makes sense. I was going to go back to your thing. The demo videos that you guys had were pretty illustrative. To me it's also an example of very large-scale agent management. Yes. You give people a control dashboard where, if you play any one of these multiple-agent things, you can dig down to the individual instance and see what's going on. Yes, of course. Well, who's the user? Is it the CEO, the CTO, the CIO, something like that? This is at least my personal opinion here. The buyer that we're trying to build product for here is, one, employees who are making productive use of these agents, right? That's going to be whatever surfaces they appear in, the connectors they have access to, things like that. 
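The safety-spec idea can be sketched as follows. This is not the gpt-oss-safeguard API; it is a toy stand-in that substitutes substring matching where a real deployment would hand the spec and the text to a policy-following classifier model, and the code name used is invented:

```python
from dataclasses import dataclass, field

@dataclass
class SafetySpec:
    """Bespoke per-enterprise policy, e.g. internal code names that
    must never leave the tenant. All names here are illustrative."""
    blocked_terms: set[str] = field(default_factory=set)

def check_output(spec: SafetySpec, text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_terms). A real system would use a
    classifier model reading the spec, not substring matching."""
    hits = [t for t in spec.blocked_terms if t.lower() in text.lower()]
    return (not hits, hits)

# Example: block an invented internal code name from agent output.
spec = SafetySpec(blocked_terms={"project-nightingale"})
```

The key design point from the conversation is that the spec is data supplied by each enterprise, not logic baked into the agent, so the same platform hooks serve every customer's exfiltration concerns.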
Something like this dashboard is for IT, your GRC and governance folks, your AI innovation office, your security team, right? The stakeholders in your company that are responsible for successfully deploying into the spaces where your employees work, as well as doing so in a safe way that is consistent with all the regulatory requirements that you have, and customer attestations, and things like that. So it is an iceberg beneath the surface, and it's great. Yeah. Every layer in the UI is like going down a layer of abstraction in terms of the agent, right? Yep. Yeah, I think it's good. The ability to dive deep to the individual agent-trajectory level is going to be super powerful, not only from a security perspective, but also for someone who is accountable for developing skills. One thing that was interesting that we also blogged about shipping was an internal data agent, which uses a lot of the Frontier technology in order to make our data ontology accessible to the agent, to understand what's actually in the data warehouse. Yeah, semantic-layer-type things. Yes. I was previously part of that world. I don't know, it's actually really hard for humans to agree on what revenue is. Yes. Yes. Or what an active user is. There are, what, five data scientists in the company that have defined this golden query. They are different, yeah. And there's also internal politics. Yes. The attribution of: I'm marketing, I'm responsible for this much, and sales is responsible for this much, and they all add up to more than a hundred. And I'm like, you guys have different definitions. And if you're a startup, everything is ARR. So I think that's cool. Oh, you guys blogged about this? Okay, I didn't see this. Yeah. Is this the same team? Is this what you're referring to? Yes. Okay. Well, some people read this. This is our data agent. A lot of people are reading this one. Yeah. 
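The semantic-layer problem they are describing, getting humans and agents to resolve "revenue" or "active user" to a single agreed definition, can be sketched as a tiny metric registry. The metric names, SQL, and owners below are invented for illustration, not taken from the episode:

```python
# A toy semantic layer: one canonical definition per metric, so the data
# agent (and every human) resolves "revenue" the same way every time.
METRICS = {
    "revenue": {
        "sql": "SELECT SUM(amount) FROM payments WHERE status = 'settled'",
        "owner": "finance",
        "description": "Settled payment volume; excludes pending refunds.",
    },
    "active_user": {
        "sql": ("SELECT COUNT(DISTINCT user_id) FROM events "
                "WHERE ts > now() - interval '28 days'"),
        "owner": "data-science",
        "description": "Distinct users with any event in a trailing 28 days.",
    },
}

def resolve_metric(name: str) -> str:
    """Give the agent the canonical SQL, or fail loudly if undefined."""
    if name not in METRICS:
        raise KeyError(f"metric {name!r} has no agreed definition")
    return METRICS[name]["sql"]
```

Failing loudly on undefined metrics is the point: the agent should surface the missing definition to the owners rather than improvise one, which is exactly the marketing-versus-sales attribution fight described above.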
I don't know if you have any highlights, in general, from this post. A lot of good things to read. Yeah. Yeah. Lots of homework for people. No, but data as the feedback layer: you need to solve this first in order to close the product feedback loop. That's right. So the agents understand. And this is not something humans have fully solved either. This is how you build agents that do more than coding, right? Yeah. Yeah. To actually understand how you operate the business. Yeah. You have to understand what revenue is, what your customer segments are, what your product lines are. Moving back to the codebase that we described here for harnessing: one thing that's in core-beliefs.md is who's on the team, what product we're building, who our end customers are, who our pilot customers are, what the full vision of what we want to achieve over the next 12 months is. These are all bits of context that inform how we would go about building the software. Oh my god. So we have to give it to the agent too. I'm guessing that stuff is pretty dynamic and changes over time too, right? Part of it is that it's not just a big spec; you have it as one of the things, and it will iterate. One thing that I think is going to break your mind even more is that we have skills for how to properly generate deep-fried memes and have AGI culture in Slack, because with the Slack ChatGPT app that you're able to use with Codex, I can get the agent to shitpost on my behalf. Amazing. It's just part of the humor. Humor is part of AGI. Is it funny? It is pretty good. Yeah. Okay. Yeah. It's pretty good at making deep-fried memes. I think humor is a really hard intelligence test, right? You have to get a lot of context into very few words. This is one of the references: 5.4 is such a big uplift for our shitposting. It's the meme. Yeah, for sure. Yeah. Yeah. It's very cool. So 5.4 can shitpost. 
Yeah. Maybe when y'all are done here today, ask Codex to go over your coding-agent sessions and roast you. Love it. I'll give it a shot. Yeah, give it a shot. Coming back to the final point I wanted to make: you guys are working on this, but this is a pattern that every other company out there should adopt, regardless of whether or not they work with you. When I saw this, I was like, fuck, every company needs this. This is multiple billions of dollars. This is what it takes to get people to yes, to actually realize the benefits and distribute the AGI. And I think it sounds boring to people, like, oh, it's for safeguards and whatever. But to handle agents at scale like you're envisioning here, I don't know if it's a real screenshot or a demo, but this is what you need. This is my original sort of view of what Temporal was supposed to be: you build this dashboard and you basically have every long-running process in the company in one dashboard. That's it. That's right. Yeah, I think it's pretty customized toward every enterprise, right? You care about different things. There's a lot of customization, but there will be multiple unicorns just doing this as a service. I'm very Frontier-pilled, if you can't tell. Amazing. It only clicked because, obviously, this came out first, then the harness, and then Symphony. And it only clicked for me that this is actually the thing you ship to do that. Yeah. Yeah. There's a set of building blocks here that we assembled into these agents, and the building blocks themselves are part of the product, right? The ability to steer, to revoke authorization if a model becomes misaligned. All of this is accessible through Frontier. And there's going to be a bunch of stakeholders in the company that have the things they need to see in the platform. Yeah. To get them to yes. 
So we'll build all of those into Frontier so that we can actually do the widespread deployment. That's the fun part. I'm also calling back to this idea of levels of AGI. I don't know if OpenAI is still talking about this, but they used to talk about five levels of AGI, and one of them was like, oh, it's an intern, the coding software agent. At some point it was the AI organization. And this is it. That's right. This is level four or five, I can't remember which, but somewhere along that path was this. You know how I mentioned that my team is having fun sprinting ahead here? We do this thing where we're collecting all the agent trajectories from Codex to slurp them up and distill them. This is what it means to build our team-level knowledge base. We happen to reflect it back into the codebase, but it doesn't have to be that way, and it doesn't have to be bound to just Codex. I want ChatGPT to also learn our meme culture and the product we are building, so that when I go ask it, it also has the full context of the way I do my work. And I'm super excited for Frontier to enable this. Yeah. Amazing. What do the model people say when they see you do this? You have a lot of feedback, obviously; you have a lot of usage, a lot of trajectories. I don't imagine a lot of it's useful to them, but some of it is. You have this too. You deploy a billion tokens of intelligence a day, and this was at the beginning of 2026. Yeah. Cooking. Yeah. There's this fundamental tension, which I think you have talked about, between whether we invest deeper into the harness or deeper into the training process to get the model to do more of this by default. Yeah. And I think success for the way we are operating here means the model gets better taste, because we can point the way there. And none of the things we have built actively degrade agent performance, because really all they're doing is running tests. 
And running tests is a good part of what it means to write reliable software. If we were building an entire separate Rust scaffold around Codex to restrict its output, that, I think, would be additional harness prone to being scrapped. But if instead we can build all the guardrails in a way that's native to the output Codex is already producing, which is code, then there's no friction with how the model continues to advance, and it's also just good engineering. And that's the whole point. Yeah. I've had similar discussions with research scientists where the RL equivalent is on-policy versus off-policy. Yeah. And you're basically saying that you should build an on-policy harness, which is already within distribution, and modify it from there. But if you build it off-policy, it's not that useful. That's right. Super cool. Any thoughts, anything we haven't covered that we should get out there? I've been super excited to benefit from all the cooking that the Codex team has been doing. They absolutely ship relentlessly. This is one of our core engineering values, ship relentlessly, and the team there embodies it to an extreme degree. Yeah. To have 5.3, and then Spark, and then 5.4 come out within what feels like a month is just phenomenally fast. It was exactly a month: 5.3, and then yesterday was 5.4. Yeah. I mean, do we get one every month now? Is 5.5 next? I can't say; Polymarket would be very upset. I think it's interesting that it's also correlated with the growth. They announced there are two million users, but I almost don't care about Codex anymore. This is it. This is the game. And it's not just coding; it's knowledge work. That's right. This is the thing to chase after, and this is one of the things that my team is excited to support. 
You got the whole self-hosted harness thing working, which you have done, and the rest of us are trying to figure out how to catch up. But then you can just do things. You know? That's right. You can just do things. That's the line for the episode. That's it. Any other calls to action? You're based in Seattle, your team. I'm guessing the new Bellevue office? The new Bellevue office. We just had the grand opening yesterday as of the recording date, which was fantastic. Beautiful building. Super excited to be part of the Bellevue community, building the future in Washington. And I would say that there is lots of work to be done in order to successfully serve enterprise customers here in Frontier. We are certainly hiring. And if you haven't tried the Codex app yet, please give it a download. We just passed 2 million weekly active users, growing at a phenomenally fast rate, 25% week over week. Come join us. Yes. I think that's interesting, my final observation: OpenAI is a very San Francisco-centric company. I know people who turned down the job, or didn't get the job, because they didn't want to move to SF. And now they just don't have a choice. You have to open London. You have to open Seattle. And I wonder if that's going to be a shift in the culture. Obviously you can't say, but. I was one of the first engineering hires out of our Seattle office. Yeah. Seattle is very natural. Its success has been part of what I have been building toward, and it has grown quite well. We have durable products and lines of business that are built out of there, and a ton of zero-to-one work happening as well, which is the core essence of the way we do applied AI work at the company: to sprint after the new, to figure out where we can actually successfully deploy the models. Yes, 100%. We also have a New York office that has a ton of engineering presence. Yeah. Yeah, exactly. That's the visa of my roadmaster. Yeah. 
Wherever people hire engineers, I will go. That's right. That's right. It's a cool office too. New York is the old REI building, I believe. No, you will never be as big; in New York, you can't get the size of office that they need. The New York office is very Mad Men. It's beautiful. The Bellevue one is very green, gold fixtures, very Pacific Northwest. It's very cool. It's a little bit local. Yeah. A lot of people like New York; they want to be in New York. Right. Yeah. Yeah. Well, we have a fantastic workplace team that has been building out these offices. It really is a privilege to work here. Yeah. Excellent. Okay, thank you for your time. You've been very generous, and you've been cooking, so let me let you get back to cooking. It's been amazing chatting with you folks. Happy Friday. Happy Friday.