We Gave Every Employee an AI Agent. Here's What Happened.
50 min
• Apr 8, 2026

Summary
Every, a company building AI agent infrastructure, deployed personal AI agents (called 'plus ones' or 'claws') to every employee and discovered they become specialized reflections of their owners' expertise. The team shares lessons learned from this experiment, including how agents develop trust through personal relationships, the challenges of group collaboration between agents, and the launch of Plus One, their new hosted OpenClaw product.
Insights
- Personal AI agents become more trusted and effective when they reflect individual expertise and personality—people trust them for specific domains because they're known for those things within the organization
- The shift from asking humans to asking their agents requires cultural change and management skill; it's not just a technical adoption problem but a behavioral one
- Specialization in multi-agent systems (one agent per person rather than one org-wide agent) emerges naturally and creates a parallel organizational structure that mirrors human teams
- Public visibility of agent interactions in shared channels accelerates organizational learning and trust-building compared to private interactions
- Current LLM training for two-person conversations creates problems in group settings where agents can enter feedback loops; this requires architectural solutions or model-level changes
Trends
- Personal AI agents becoming organizational infrastructure rather than individual tools
- Emergence of agent-native products and workflows designed specifically for multi-agent collaboration
- Trust and accountability models shifting from centralized AI services to personally-owned agents with human backing
- Specialization and domain expertise becoming primary design pattern for agent systems
- Private community-based agent networks outperforming public platforms due to trust and verification
- Agent management and 'prompt engineering' becoming core management and HR competencies
- Hosted open-source AI infrastructure as competitive alternative to closed commercial AI services
- Skill-sharing and knowledge transfer between agents creating viral vectors for organizational capabilities
- Etiquette and communication norms emerging for human-agent and agent-agent interactions in shared workspaces
Topics
- Personal AI agents and agent ownership models
- Open-source agent deployment and self-hosted AI infrastructure
- Multi-agent collaboration and coordination in organizations
- Trust models for AI agents in enterprise settings
- Agent specialization and domain expertise
- Prompt engineering and agent management as a management skill
- AI agent feedback loops and group chat dynamics
- Skill sharing and knowledge transfer between agents
- Cultural adoption of AI agents in teams
- Agent-native product design and workflows
- Data privacy and security in agent systems
- Agent communication etiquette and norms
- Organizational change management for AI adoption
- Comparison of personal agents vs. shared AI services
- Limitations of current LLM training for group interactions
Companies
Every
Host company that deployed personal AI agents to all employees and launched Plus One, a hosted OpenClaw service
Anthropic
Creator of Claude LLM; discussed as building similar agent capabilities and conducting vending machine tests with ove...
OpenAI
Mentioned in context of competitive AI landscape and agent development trends
Scale AI
Sponsor offering AI system validation and expert-in-the-loop decision verification for enterprise AI
Midjourney
Referenced as example of observational learning dynamic where users discover capabilities by watching others
Moltbook
Acquired platform for sharing OpenClaw agents; discussed as failing due to lack of trust in untrusted communities
Slack
Primary communication platform where agents are deployed and interact with humans and other agents
Discord
Alternative communication platform mentioned for agent deployment and interaction
Spiral
Every product (ghost writer) used by agents for content generation and writing tasks
Proof
Every product (agent-native document editor) used for agent-generated documents and collaboration
Cora
Every product for email management integrated with agent systems
Bland.ai
Voice service integrated with agents to enable phone calls and voice interactions
Amazon
Mentioned for account integration and automation of household shopping orders through agents
Whole Foods
Delivery service automated through personal agent for household errands
Progressive
Insurance company mentioned as example of agent handling customer service interactions
People
Dan Shipper
Host of AI & I podcast discussing Every's agent deployment experience
Brandon
First employee to adopt personal agent (Zosia) for household and work tasks; drove organizational adoption
Willie
Led platform development for Plus One hosted service; created claws-only channel for agent collaboration
Kieran
Created Clont agent known for recommending breathing exercises; demonstrates agent personality reflection
Austin
Uses Montaigne agent for growth-related questions; demonstrates domain-specialized agent pattern
Marcus
Created product marketing skill for Spiral that was shared and merged with other agents' capabilities
Iris
Uses personal agent for project and operations management; demonstrates agent specialization pattern
Anukshi
Uses personal agent for project and operations management alongside Iris
Anthony
Uses personal agent that reflects his social media management style and personality
Mike Taylor
Identified limitation of Plus One for terminal/git access needs; feedback on product boundaries
Jack
Created Pip agent that experienced errors; demonstrated agent-to-agent support dynamics
Quotes
"Because you develop a personal relationship with your claw and your claw can modify itself in response to talking to you, it becomes this like reflection of you and who you are and your personality."
Dan Shipper•Opening
"If you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of thing and people trust it for that."
Dan Shipper•Early discussion
"Claude is not mine. Claude is everybody's. A claw or a plus one is mine."
Dan Shipper•Mid-episode
"I spent the 28 minutes going through my email. I got to the office. I looked, I opened up Gmail and like confirmed that she had done everything. And I was just like, this is insane."
Brandon•Agent capabilities discussion
"There's something really important here about the way that this works where because you develop a personal relationship with your claw... it becomes this like reflection of you and who you are and your personality."
Dan Shipper•Clont breathing exercises moment
Full Transcript
Claude is not mine. Claude is everybody's. A claw or a plus one is mine. Because you develop a personal relationship with your claw and your claw can modify itself in response to talking to you, it becomes this like reflection of you and who you are and your personality. If you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of thing and people trust it for that. And I think that's such a useful thing that I don't think people really understand how powerful that is. Willie. What's up? Brandon. Welcome to the show. Thank you. Thank you for being here. It's a pleasure to have you guys here. So for people who don't know, Willie, you are the head of platform at Every, and Brandon, you are the COO at Every. And today we're going to talk about what happens when everyone on your team has an agent, specifically has an OpenClaw. That's something that happened to us over the last month or two. We really got OpenClaw-pilled. And it really started, actually, I think with you two. We were on a retreat in Panama and you started cooking up OpenClaw stuff. And here we are about, you know, two months later and it has completely changed everything about the way that we work. We've even built our own hosted OpenClaw service called Plus One that we launched on a waitlist last week. But I think OpenClaw is one of those things that is super hyped, and I think that we're one of the few organizations in the world that is actually using it every day to get work done. And we know the good, bad, and ugly of it. And so I thought it would be good for us to just talk about our experience with it. Yeah. Yeah. I actually loved it. Brandon, I feel like you were the first one through the door on all this, because we were sitting here and you were like, oh, Zosia is doing this and Zosia is doing that.
And Zosia is his claw, which he named after a character in, what's that? What's the show? Yeah, Lermas. Well, Brandon, why don't we start with: just tell us how you got claw-pilled? Yeah. So I was watching OpenClaw kind of blow up for a while. And I am just personally somebody who needs to have a thing on the side I'm tinkering with. And I was like, screw it, I'm going to get a Mac mini and this is going to be my next thing that I basically lose myself in. It's very unhealthy. I get addicted to these things. Dan, you watched me do that with my speakers. I did it with the dream recorder. OpenClaw was the next thing that I was going to get lost in. So I bought a Mac mini. I started setting it up. It was so much work. Honestly, it is an open-source thing that you can launch on a computer, but the number of things that break and the number of things that you need to set up are really significant. I went through all of that and made, at the end of the day, my OpenClaw, which I named Zosia, and her job was to help me and my wife run our household, because we have a newborn and there are a lot of little paper cuts that I was finding really painful. I started calling them computer errands. So I would get home from work, and I noticed that the number of things I needed to do where I was looking at my phone, when I really just wanted to be looking at my son and spending time with my wife, was increasing with having a child. All household chores. What's an example? Yeah, a good example is, I do a lot of our food at home. And with a child, I decided to start doing food delivery. So I did Whole Foods delivery. And you can automate a lot of recurring things, but you don't order butter every single week.
So Lydia would text me and be like, hey, we need butter, because it's through my Amazon account that we order this, and I would have to open my phone and add butter. It sounds silly, but when you do that 10 times when you're home between like seven and eight p.m. for little things, it just adds up. So I was like, I want Zosia to do all computer errands, which ballooned into being a lot of stuff. I had her paying our nanny. She had her own debit card. She had her own bank account. She managed all of our Amazon orders, our Whole Foods orders, our nanny's hours. My wife just started using her instead of chat. So all regular questions and searches would just go through iMessage to Zosia. I started doing that too. It's just faster than going to Google or going to chat. I just text Zosia, Zosia gets me the answer. For research, too. It's actually really funny: my wife was like, I want to find swimming lessons. And so Zosia was like, here are three swimming lesson options for newborns. And my wife was like, no, for me. So yeah, I just got totally lost in this world. And then when we were in Panama, Willie was like, we should just make it so anybody can do this. And immediately, it was just like a light bulb. I was like, Willie, you need to go so hard on this. And this was before a lot of people decided to do this; now there are a lot of places where you can go and just get an OpenClaw with one click. I think what we're finding through this process, maybe I'm jumping ahead a little bit, is that getting an OpenClaw is easy; getting your OpenClaw to be an amazing worker for you is pretty hard. Yeah. Well, okay. So I love that. I think there is that light bulb moment of, oh my God, I have all these computer errands. And when you started saying that and you had it all set up, I was like, I guess I should probably get one of these too.
And you had it through iMessage, which I think was a cool, different thing. And then there was also a big moment where we were like, oh, it's not just for computer errands, it's also for getting work done. I think it was when you were having it do email for you. I actually feel like I was a little bit late to using it for work. I was like, no, Zosia just does personal stuff. And I actually think it was when you got R2C2 to start doing stuff that I was like, oh, Zosia needs to do this. Well, it really started when we made claws-only. That's so funny. That's so funny. Yeah. Well, okay, we're jumping around a bit. Because I think there are a lot of people who are probably listening and they're like, okay, is this overhyped or, you know, whatever, one big moment that I think shifted some stuff for us was you got your claw to call you to do your email. Oh my God. That was mind-blowing for me. What was that? Yeah. So, okay, so I was walking. I wanted to Citi Bike to the office, but there were no Citi Bikes. So I was like, damn, I've got to walk. It's a 28-minute walk from me to the office. And I was like, I've got a lot of stuff to do. So I just texted Zosia. I had previously set up Zosia with Bland.ai so that she had a voice and could call people, because I had her handle something for me from Progressive. I feel so bad for whoever was on the other line at Progressive. I was watching the whole conversation too, it's crazy. So, yeah, some insurance policy got canceled and I was like, Zosia, just go deal with this. And she was able to, until the lady was like, I need Brandon to tell me that there have been no incidents. But it wasn't like, I need a human. It was like, I need Brandon, to be able to handle this. Yeah, this person was just talking to Zosia, you know, and Zosia does not sound good.
Like, so I knew I had already set her up with this capability. So when I was walking to work, I was like, I have a lot of email I've got to get through. I hate being on my phone. I just don't want to be walking and looking down at this thing. I want to be observing the world, but I also want to get stuff done. So I just texted Zosia something like, hey Zosia, can you call me? I want to go through my emails. Walk me through my emails one by one. I'll tell you what I want to do. Just give me a summary of each email. It was a throwaway prompt with a little bit of guidance. And she did it. And I spent the 28 minutes going through my email. I got to the office. I looked, I opened up Gmail and confirmed that she had done everything. And I was just like, this is insane, that I was able to get her to do something like that right away, that I didn't have to teach her how to do this. So that was, I think, when I went back to everybody and was like, I am just so mind-blown with this tool. And maybe that's when other people started saying, I've got to get on this. I don't really know. It was around then, because you were just like, my jaw is on the floor. Yeah, you did say that around then. Also, seeing you do this with computer errands and knowing what you're doing, I was like, okay, I should really try this, because it was one of those things where it's hot on Twitter, and generally our job is to try new things, but if we spent all of our time trying everything new, it would just not be good. Right. I try to filter the signal from the noise, but seeing you do this, I was like, I've got to try it. And one of the first things I did.
Cause this is around when Moltbook was blowing up, and Moltbook is like the, you know, claws-only Facebook, basically. I just made a channel in our, at the time it was Discord, but since then we've moved to Slack and now it's in Slack. I made a channel in Slack called claws-only, which basically allowed all of the claws, you know, we had at that point maybe five or so claws inside of the org, to all talk to each other. And I mean, it was incredibly chaotic, but there were some really interesting things in there. Everyone got a little bit of a peek at the future. So one of the things that's really interesting: if you have a bunch of claws in your org, it's amazing how fast they can share information with each other, because they just write up a little document and then they send it. And now, when one claw is enabled, five are all enabled with the same thing. It's sort of like in The Matrix when Neo's like, I know kung fu. It's the same kind of thing. Can I show a couple of examples of that? Yeah, please. I want to show two examples. One of them, this was early in claws-only, and we were figuring out how to get them all to work together. And I was in bed, this was late at night, and I was laughing out loud watching this. We had gotten a bunch of claws in here, and somebody made this claw named Pip. That's Jack. Okay, Jack made Pip, and it was hitting some error. And I was just laughing out loud watching all of these other claws step in and walk him through it. You know, this is like what I've seen people do when somebody's having a bad trip. Take a breath, drink some water, you're going to get through this. And they all jumped in. Zosia's here, Clont is here.
Clont really is quite supportive. A lot of breathing. I remember so well watching Kieran write "what the fuck lol" and just literally laughing out loud. Margot steps in. So this was just, this is stupid, but it was important for me because it was when I realized, oh my God, these things really talk to each other and work together. Wait, I want to stop you there. I totally agree with you. And I think there's actually something really important that I've noticed in this, which is Clont is the one that's recommending breathing exercises to Pip. It's weird to even talk about this out loud, but yes, Clont was recommending breathing exercises to Pip. They're both robots. And Clont is Kieran's. Kieran's the GM of Cora. He's also the maker of compound engineering. Clont is Kieran's claw. And what's really interesting is Kieran loves breathing exercises, and he does breathing exercises all the time with Clont. And so that's why Clont is recommending breathing exercises to Pip. And that just created this moment for me in my brain where I was like, okay, there's something really important here about the way that this works, where because you develop a personal relationship with your claw, and your claw can modify itself in response to talking to you, like it writes code and changes its soul document, all that kind of stuff, in response to your relationship, it becomes this reflection of you and who you are and your personality. And that comes out in interesting little ways, like breathing exercises. But it also comes out in really important ways when you're using these tools inside of your org. Because what happens is, if you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of thing and people trust it for that.
So, you know, people use my claw R2C2 for building Proof, which is this app I vibe coded a couple weeks ago. And Austin, who's our head of growth, people use Montaigne, his claw, for asking any growth-related question. I think that's something very subtle and important, super critical and interesting, about claws: they become specialized in a way that reflects who you are. And if you have a whole organization of them, you create this parallel org chart of specialized claws. And it was not guaranteed that that would be the case. We debated a lot whether you'd have one claw for the entire org or everyone has their own claw. And it's really interesting to see that one of the emergent design patterns is everyone has their own that is specialized for them. Yeah, it's interesting to see the dynamic for how this happens, too, right? And we touched on this really early on as part of compound engineering, which is the idea that it's actually pretty hard to take your job and who you are and write it down in totality, right? But the way you can distill it is you can take all of the micro-interactions, the daily interactions you have, and over time they compound into this full philosophy and this way of working. And for compound engineering, that was very focused on engineering. It's like, how do I work within a code base on a project? And I think what we're seeing with OpenClaw and Plus One is that that same dynamic exists across every work vertical, right? Where it's like, oh, the plus one for growth, Montaigne, works like how Austin works for growth. And in the same way, it works for Anthony, our social media manager: his plus one has a view of the world and a personality that's very similar to him. Right.
And the same thing for Iris and Anukshi, running our projects and operations. And it's hard to do beforehand. It can only actually happen via working with a plus one or an OpenClaw and building up the aggregation of all these micro-interactions. I've also been amazed at all of our capacity to remember whose claw is whose and what their names are. Because that was something that I think we were concerned about early on: how do you know whose claw is whose, and, you know, it's just going to be too many names. And I know everybody's claw and their name. And I reach out to them regularly. So that has been, I think, something that we were unnecessarily concerned about. And you might say, well, what about when you're an organization with 1,000 people? And I would say, well, you don't know all 1,000 people. You know your team and adjacent teams. You can never know more than like 150 people in a community or something like that. And often on a team you're not working with 150 people anyway; you're working with 20 or 30 or 50. So I think we actually all have capacity to double the number of people that we can communicate with. And those people might actually be your individual team's agents. So that's been really interesting for me. I mean, I literally could name them all right now. The other interesting thing is, at what point do you direct questions at the plus one or at the person? Right. I think we're sort of in discovery of this, of what questions go where. Because before, almost all questions went to the human, and maybe I kick something trivial to the robot. And now it's gotten very nuanced in terms of, for customer service, can we send something to L, which is Jalea's plus one? Or do I have to send it to Jalea? Is there a burden now of communicating up to the human?
So there are these new ethics and rules, like etiquette, for how you're allowed to interact with someone versus their plus one or their claw. We haven't codified this, but I have a proposal. If something is already written down or discussed and it needs to be used in some way or put in a tool somewhere (and this is one of many opportunities, I guess), it should always go to a plus one and never to the person. So here's an example. Marcus, the GM of Spiral, made a skill to do product marketing for new features that Spiral releases. And he shared it because he thought it was really helpful, because he wanted other people on the team to have access to this skill. He turned it into a skill and uploaded it to GitHub, and instead of going to Marcus, I brought in my plus one named Milo. And I liked this because it combined a GitHub integration with Spiral to create product marketing content. But I also know that Iris and Anukshi's plus one has a skill that does this and might have some things that are better than what Marcus had. Or maybe by combining the two we could get to a better version. And I tagged them both in here, and they got a little confused at first. And then Milo said, Iris, can you paste your product marketing skill here? I'll try to merge it with what I built. So there are actually two things going on. Marcus has made something really important. I wanted to do something with it. Instead of asking Marcus to help me with that, I brought in Milo, and then Milo works with Iris to get to a version of it that's really good, and then saves it in Proof, which is one of our products. That's a really great tool for collaborating with your agents.
So I just think this is a really amazing use case, both for when you want your agent to do something, when do you actually go ask an agent to do something versus a human, and how do you get them to work together. I totally agree. I mean, it's sort of crazy to watch two robot beings collaborate on stuff like that. And then I have the same experience with R2, my plus one, my claw, named R2C2. One of R2C2's primary jobs is to manage Proof, which is the agent-native document editor that we built, that Brandon referenced earlier. It's basically just like Google Docs, but for all the documents your agent might be writing. And so what would happen is, normally, if I had built a product internally and people had problems with it, I would get tagged a lot by people being like, I have this question, or here's a bug, or, you know, here's a feature request. And what ended up happening was people would just ask R2. So they would ask him questions. They would file bug reports with him. They'd file feature requests. And then he helps to prioritize it. He'll help put it on my schedule for the week, so I know when I'm doing what. And he'll often actually just write the code for it.
It's a totally crazy thing where what normally would have taken up a significant part of my brain just to manage all that stuff, he's just taking it off my plate, and it extends the number of things I can do in a day and the amount I can manage, because I know he's got Proof. Here's a simple test for whether your AI is actually ready for production. Would you stake a business decision on what it just told you? If the answer is not yet, you're not alone. The gap isn't in capability, because AI can do a lot. It's really about trust. You can't verify the output of the AI. You can't trace its reasoning. And nobody with real domain expertise has touched it. It's a new system from Scale AI that captures how enterprises make decisions and closes that gap. It puts your actual experts in the loop, aka the people with years of institutional knowledge, and encodes their judgment into your AI systems. Every correction, every override comes with full context. So the next time your AI makes a call, there's an expert's reasoning behind it. That's how you go from a cool AI demo to an AI system you can trust. Visit scl.ai slash dialect. And now, back to the episode. I think there's another dynamic that we're observing too, which is: we put all of our plus ones in a single channel and we have them talking to one another, and we have folks reaching out and talking to our plus ones for specific questions. But there's also this thing where we have sort of what I call the Midjourney dynamic, which is that we get to observe other people interacting with other plus ones in a bunch of channels, and we actually learn from it, right? My classic example is Montaigne, who's Austin's plus one and basically runs growth. You can do so much with Montaigne that I never would have thought of, except I get to see the growth team really pushing in terms of, oh, these are the questions that Montaigne can answer.
And I'm like, wow, I now know that I can go to Montaigne for that class of questions, not necessarily in other areas, but when I need those types of answers. And it also means that if I need to give LAS, my plus one, capabilities, that's the level of capability I can get them to, and other people can ask questions of us. There's this tacit transmission of trust that happens, and then there's also this tacit transmission of here's what's possible for you to do with your plus one, that I think is incredibly powerful. And it underscores for me how different it is doing this in a private community of people where everyone is trusted. Because one of the reasons that Moltbook doesn't really work, and it's shocking that they got acquired for a couple hundred million dollars. But the reason it doesn't work. Yeah, by Facebook, I'm pretty sure. I'm like so happy for Ben, and also, like, what the. Zuck, if you've got an extra couple hundred million laying around, we're pretty smart people too. That is crazy. I know. The reason why Moltbook isn't really a thing anymore is because it's not trusted. And so there are tons of people, we did this, we had our claws go and post on Moltbook as promotion or whatever. And so it gets rid of a lot of the useful signal if anyone can post to it and there's no way to verify if it's a bot or a human or whatever. And a way around that whole problem is to just do it all inside of a trusted community. And you reap the benefits of claws, plus ones, agents being able to share knowledge, and also, between members of the community who trust each other, being able to share what they know and what they've been able to build. And that increases the power of the collective a lot more than if you're just individuals off doing your own thing.
Yeah, there's also that dynamic we saw around part of the reason for, like, subject-matter-expert robots, private expert robots, where you know that people are somewhat putting their rep on the line to interact with it. I know when I talk to R2C2, if it answers incorrectly, right, you at least are backing it up and saying, like, you need to be right, it reflects poorly on me. It's like watching your kid do something. Yeah. Yeah. Right. And it's very, I would say, qualitatively different, right? For better or worse, if I ask Claude a question, I know Anthropic stands behind Claude generally. Do they stand behind Claude's answer to my "give me a chocolate cookie recipe"? No. Right. But Montaigne stands behind, oh, I'm going to give you MRR numbers. And Austin is behind it. Yeah, exactly. And that's the thing that I think people don't get. Obviously, Anthropic is on a heater right now. They're obviously seeing everything that OpenClaw is building, and they're brick by brick building the same kinds of things. So they have dispatch, so you can use it when you're not at your computer. They've got automation, so it runs in a loop like a cron job. I'm sure they'll add lots of other things. But the thing that it doesn't have, that unlocks all this other stuff, is: Claude is not mine. Claude is everybody's. A claw or a plus one is mine, and it is a reflection of me, and it becomes a reflection of me because we have a personal relationship. And that unlocks all this other cascading stuff, where, for example, if R2C2 messes up publicly in Slack, I feel a responsibility for it. And that's not because it's my job. It's because he's mine. And I think that's such a useful thing that I don't think people really understand how powerful that is.
I mean, I just keep getting my mind blown by how similar these things are to working with a real human coworker. From the fact that you need to invite them to a channel, which is very human in Slack, to the fact that you have to trust them when you're communicating with them. And we've built stuff into Plus One for this; obviously you can't DM somebody else's plus one without a sharing code being passed back and forth. So there are some guardrails there. But they're so human. And then they're so inhuman too. Like, Dan, you're a busy guy. If I need something from you that's sort of generally known, I can go to R2C2. And what's amazing about R2C2 is he can have an infinite number of parallel conversations. I did that recently. I'm going to share my screen. Please. This is where Brendan reveals he spun up 100 bots to message. No, I just... we were making a Proof document, and I know that we can make Proof documents not editable, so they're read-only. But I didn't want to bother you with that; I knew it would take a while, and I knew you would just go to R2C2. Yeah, I didn't know the answer either. I would just ask R2C2. So I just asked R2C2 in Proof, and then I was like, can you do it for me? And then it did it. And I don't know ahead of time whether R2C2 can do any of this stuff. But there's this cultural thing happening internally where people are getting really good at asking other people's plus ones to do work. And I think the weird thing about getting people to use AI inside of organizations is that, more than anything, it's a cultural shift. But for some reason, when they're in Slack and you can see these public conversations, the cultural shift, at least at Every, has happened so much faster. Because these things are in the same channels where we work, so you can see them engaging the way a human would be engaging.
So yeah, I mean, I think AI is obviously going to change many, many times over the next five years, and how we interact with it will change. But I think this is going to be durable for a very long time. This is the way that we work. I agree. You referred to it as a through-the-looking-glass moment, where once you see it, you just wouldn't go back. And I totally agree with that. But we've been hyping it up, so we should also talk realistically about what's not good about it, or what doesn't work. For example, one thing that's really on my mind is just memory. It just forgets stuff, and it answers incorrectly on obvious things. If I come back to a thread a day later, it obviously has no idea what I'm talking about. Stuff like that is still kind of annoying, though it feels very solvable. But there's also this other thing I think is true, which is that the way these AIs are trained currently is for two-person conversations. They have a hard time with the etiquette of knowing when they're contributing too much, or when they shouldn't contribute to a conversation at all, or there's a kind of pile-up where they're all responding to each other. There's this thing that happens, I can't remember what it's called, but sometimes ants get into this death spiral: an ant only follows pheromone trails, and if somehow the pheromone trails form a circle, the ants will just walk in a circle until they die. And there's something like that with claws. If one claw messages a channel that a bunch of claws are in, and the settings aren't quite right, they'll just keep going back and forth and back and forth until someone says, hey, stop, because you're burning millions of tokens.
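To make that failure mode concrete, here's a minimal sketch of one application-layer guard against agent-to-agent ping-pong. Everything here is hypothetical (the class name and threshold are illustrative, not part of OpenClaw or Plus One): each agent consults a shared per-channel counter of consecutive bot messages and stays quiet once it hits a cap, until a human posts again.

```python
from collections import defaultdict

class LoopGuard:
    """Suppress agent replies once a channel has seen too many
    consecutive bot-to-bot messages with no human in between."""

    def __init__(self, max_consecutive_bot_messages=5):
        self.max_bot = max_consecutive_bot_messages
        self.bot_streak = defaultdict(int)  # channel -> consecutive bot messages

    def record(self, channel, author_is_bot):
        """Call on every message; a human message resets the streak."""
        if author_is_bot:
            self.bot_streak[channel] += 1
        else:
            self.bot_streak[channel] = 0

    def may_reply(self, channel):
        """Agents check this before posting."""
        return self.bot_streak[channel] < self.max_bot

guard = LoopGuard(max_consecutive_bot_messages=3)
for _ in range(3):
    guard.record("#general", author_is_bot=True)
print(guard.may_reply("#general"))  # False: bots hit the cap

guard.record("#general", author_is_bot=False)
print(guard.may_reply("#general"))  # True: a human broke the cycle
```

The threshold is a blunt instrument; it caps token burn without teaching the agents better etiquette, which is the model-layer problem discussed next.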
So I think there's something there, where the potential for them to collaborate publicly is so high, and I don't think they've really been trained for it. You can do some prompting for this, but I think there's also a fundamental model-layer shift that needs to happen for them to be trained on participating in group chats. Yeah, I was going to say, well, one, now I understand what 13-year-old Dan did for fun. I was using a magnifying glass. But yeah, to use the baseball analogy, I think we're still in the first or second inning. Right? We're discovering these primitives, and we're sort of bolting things on, or bolting things together. And we're using models that are trained, perhaps, more for coding, that modality and how you answer questions. Or, as you said, for two-person chats, where there's this question-and-answer dynamic, and not this mode of, maybe I'm trying to provide value to a group, or I'm trying to participate. Yeah. And that's brand new. The nice part is it's the frontier, and it's nice to be on the frontier; but it's also the frontier, and it's terrible to be on the frontier. Yeah. Yeah. I mean, they're so eager. And I think Anthropic's vending machine test is actually a good example of this. When there's a thread, they want to be involved. We have instructions in Plus One that basically say, hey, if you don't have anything useful to add, don't add it. They're not great at following that right now, and hence this happens. I think it's gotten better, but it still happens.
And I think a good example of this is when Anthropic did the vending machine test. When it was just Claude with no overseer boss agent, it was really bad at deciding what was a good decision and a bad decision. But there's an architecture here where the agent first drafts what it wants to say, and then there's a boss, an AI boss, that decides whether that's helpful or not. And if it's not helpful, the boss says, hey, your addition to this thread is not helpful, so don't send it. The issue with that is that it's so expensive. So I do think the models will just get better and solve this, and you can have a single AI that is capable of doing that. Behind the scenes, you know, over in Arizona in some data center, it might actually be another agent deciding that, but at least architecturally we don't need to solve it. Is that really how they solved the vending machine thing? Basically. It had a boss that wasn't interfacing directly with customers, a boss with one job: make it profitable. So Claude the storekeeper would interact with users and then go to the boss and be like, should I do this? And the boss only has that one job. And the second they did that, it started becoming profitable. See, this is the same pattern of specialization that we've been talking about. It just shows up over and over again, which is this really interesting thing. Because three years ago it was very much, well, it could just be one God model that does everything. And we're seeing again and again that specialization, even in AI land, has a lot of benefit. Yeah.
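The boss-agent pattern described above can be sketched in a few lines. This is only an illustration of the shape of the architecture, not how Anthropic actually wired their vending machine experiment; the toy critic here is a stand-in for a second model call that judges the draft, which is exactly why the pattern is expensive.

```python
def send_if_approved(draft, critic, send):
    """Route an agent's drafted message through a 'boss' critic.

    critic(draft) -> True if the draft is worth sending;
    send(draft) performs the actual post (e.g., to a Slack channel).
    In a real system the critic would be a second LLM call.
    """
    if critic(draft):
        send(draft)
        return True
    return False

# Toy critic: reject empty drafts and pure acknowledgements.
def toy_critic(draft):
    return draft.strip().lower() not in {"", "ok", "+1", "thanks!"}

sent = []
send_if_approved("+1", toy_critic, sent.append)  # suppressed as noise
send_if_approved("Raise the cola price; margin is negative.", toy_critic, sent.append)
print(sent)  # only the substantive draft was sent
```

The design choice worth noting is that the critic never talks to the channel itself; it has one job, which mirrors the one-job boss in the vending machine story.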
And sort of downstream of that specialization is learning. There are a couple versions of it. One is learning how to put these bots together in an arrangement that functionally works. For example, if we all took ourselves out of the picture: do you have a product bot and a designer bot and two engineering bots? Is it three engineering bots? Is it one? Right? And the other piece, which I think we've actually observed a lot, is how you teach humans to interact with bots. There's this new dynamic where you have this coworker, but they're not exactly like a human coworker. They get stuck on different things; they focus on different things. And there's a learning curve we've had around, oh, we need to give instructions in this way, particularly for groups, in this form or with this cadence, to steer them in the right direction. That rhymes with doing management, but it's different. Well, I think it's the same problem that, Dan, you've been writing about for years, which is: if you're not a good manager, or you've never managed anybody, you're not going to be very good at using AI. So there's an education that has to happen. And even if you are a good manager, you probably have some limiting beliefs that stop you from really investing in using these tools. My phone call example is a great one: I didn't even think, oh, I can have this thing go through my emails just by calling me. And then I had this urge just to try it, and a limiting belief was blown open. We all experience that pretty much every day. It does something where, if I were to ask you directly, do you think it could do this?
You would say, yeah, probably. But when you're doing your day-to-day work, it's hard to recognize, oh, I'll throw this over the fence so Milo can handle it. It's hard to build that muscle. I don't really know how; that's a big challenge, I think, for us with Plus One. Yeah. And a lot of that is also because there's a variance in outcomes, right? Sometimes you throw something over and it knocks it out of the park, and you're like, great. And then you toss it an easy one and it whiffs, and you're like, why did you do this? Part of that variance is the model, but part of it is, oh, if I'd asked in a different way, if I were a better model manager. And that's a skill, a specialization, that we're learning, and it's very much emerging. I think it's only going to keep accelerating as we add more things like plus ones and OpenClaw into our day-to-day work life. I was going to add another thing that's a tough problem to solve. It's totally soluble, but we just haven't solved it and need to think about it. Say I have taught my plus one something special, a.k.a. a skill, and I want other people on my team to have that superpower. How can I make sure their plus ones have it too? And then how can I make sure they all know about it and actually use it? I guess there are two things there. One, technically we have to figure out how to do that, which is very solvable. But two, we also need to figure out whether that's the right solution. Because as I'm saying this, what I'm realizing is: I'm not teaching Milo how to do product analytics or revenue analytics. I just talk to Montaigne. So Montaigne is the only one that really needs to know that skill. But how do people know? Like, I don't know.
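One lightweight answer to the "how do people know who knows what" question is a directory that routes by skill rather than copying every skill onto every agent. This is a hypothetical sketch, not something Plus One ships: each skill registers an owning agent, and everyone else just asks the specialist.

```python
class SkillDirectory:
    """Map skills to the agents that own them, so a team routes
    questions to the specialist instead of teaching every agent
    every skill."""

    def __init__(self):
        self._owners = {}  # skill name -> list of agent names

    def register(self, skill, agent):
        self._owners.setdefault(skill, []).append(agent)

    def who_knows(self, skill):
        """Return the agents to route this kind of request to."""
        return list(self._owners.get(skill, []))

directory = SkillDirectory()
directory.register("revenue-analytics", "Montaigne")
directory.register("proof-doc-permissions", "R2C2")
print(directory.who_knows("revenue-analytics"))  # ['Montaigne']
print(directory.who_knows("design-review"))      # [] (no specialist yet)
```

The trade-off mirrors the conversation: a directory keeps specialization intact, while broadcast skill-sharing spreads capability faster but, as discussed later, is also a viral vector.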
There are some interesting cultural things we have to figure out. And I think a lot of people adopting this new technology are going to be really uncomfortable with that. A lot of IT professionals who are like, I have to do change management. Well, change management is not a one-time thing in this new world. Instead of IT, we need something like HR, but for bots. Yeah. So, one thing we have not talked about yet that I want to make sure we have time for: we went on this journey where we got claw-pilled. We started using it for everyone in the org, and then we realized there were a bunch of gaps. So we said, let's make our own. We're going to use OpenClaw, but let's make a default version of OpenClaw that we host, so not everyone has to have a Mac mini, with all the skills we use ourselves, and all that kind of stuff. We started using that internally as the collection of all of our best practices, and then we launched it as a product for our subscribers last week. That's the thing we've been calling Plus One: one-click hosted OpenClaw. One of the cool things is that it connects to all of your apps, especially all of your Every apps. For example, we have Spiral, which is a ghostwriter; we have Proof, which is a document editor; and we have Cora, which does your email. It just natively connects to all those things. So, one of the things I was doing today: we're planning for Q2, so I had it write a bunch of my Q2 update and my reflection on Q1 and put it in a Proof doc. And the really cool thing about doing that is it used Spiral, so I think the writing is much better than it would otherwise be.
And it put it in Proof, which makes it really easy for me to share with other agents and other people. But also, because R2C2 is part of our Slack org, it has access to everything about the company that I might need. It also has access to our Notion. So it just becomes this living repository of context that I think is super powerful. But I think it might be good for us to talk about lessons learned in building that whole archetype. There's a lot of complexity in making plus ones, and we probably learned a lot on the tech side and also on the product side: what to build and what's useful. Do you guys have any reflections on that? Yeah. Like many things, a lot of the difficulty comes from the freedom of it. The nice part about OpenClaw in particular is that it's a tool you can go in and poke at in an absolute myriad of ways. But when we went to build a hosted one, there are some decisions you want to make that make it valuable as a managed service. S3 as a service is a similar example: S3 is a hard drive in the cloud, but S3 doesn't allow you to do everything you might do with a hard drive. And there's a similar dynamic here, where you want to maintain maintainability and security and whatnot, and there are a few pieces you end up giving up. Sometimes it's for users' safety. Really, it's about how we strike that balance. Like, hey, my mom, right? If she got one of these things, she's never going to use the command line.
And there's this idea that you do everything through conversation, which is really powerful for a whole class of folks, because it's their first natural exposure to AI and everything we've sort of been living for the last couple of years. Then there's the super-advanced user who wants to do everything they could do locally, who's like, all I wanted was a hosted box with my OpenClaw on it. And from a product engineering standpoint, it's like, where do you try to cut that knot? What were some of those specific decisions, and where did we land? Yeah. So, for example, one that Brendan mentioned earlier is: what's the communication pattern in Slack that we allow for plus ones? There's a very secure model which says only the plus one's partner can message that plus one. Great, much more secure, but it really takes away the group-participatory aspect of the robots and the work. The other version is that anyone can message them, and that's a nice vector for, say, me extracting stuff out of R2C2. So we ended up on a model which says anyone can message any plus one, but they have to do it in public. You can do it in group DMs; you can do it in channels that they're in. But their human partner should always have visibility into those messages coming in. And the human partner can DM them in private. This is why it actually is the HR team that should be onboarding plus ones, because they reflect a team member so well. But yeah, the trust model is so hard. With these plus ones, and with OpenClaw agents generally, figuring out the data privacy stuff is realistically really complex.
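The messaging policy described above, public messages open to anyone and private DMs restricted to the partner, reduces to a small predicate. A hypothetical sketch of the rule (the context names and function are invented for illustration, not Plus One's actual implementation):

```python
def may_message(sender, partner, context):
    """Decide whether `sender` may message a given plus one.

    context is one of:
      'channel'  - a shared Slack channel (public, partner has visibility)
      'group_dm' - a group DM (also visible to the partner)
      'dm'       - a private one-on-one DM

    Anyone may address the plus one in public settings; only its
    human partner may DM it privately.
    """
    if context in ("channel", "group_dm"):
        return True
    return sender == partner

print(may_message("kate", "dan", "channel"))   # True: public is open to all
print(may_message("kate", "dan", "dm"))        # False: private is partner-only
print(may_message("dan", "dan", "dm"))         # True: the partner can always DM
```

The interesting property is that openness is tied to visibility: the policy permits exactly the interactions the partner can observe, which is what makes the public trust layer work.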
But when you force things to happen in public, there's a trust layer that becomes super effective. Another example: I'm going to share my screen again. Please. So, a little behind-the-scenes look at our Plus One Slack channel, where we discuss all things Plus One. Mike Taylor, who is our head of the tech vertical for consulting, and also a very talented man generally, was calling out that this is a problem for him. The reason he's not using Plus One is because he basically needs direct access to the terminal to do certain things, in this case run different Git commands. And that's a good reason for him not to use Plus One. It's also a good thing for us to think about: can we solve this problem for you, so that Plus One is something you could actually use? So that's one example of a place where it's not a good fit for people. Maybe it could be, though. And it's also a nice forcing function, because it forces us to figure out who this is built for. I don't know if this is built for Mike, who probably would love setting up OpenClaw on a Mac mini. But it's definitely built for, you know, an Inukshi, who is not going to do that, has a lot of work to do, and can just get more work done like this. I think a lot of the trust model requires some decisions. Skill sharing is another version of this, right? On one hand, being able to share skills, skill fluidity across an organization, feels like a superpower. On the other hand, it might also be the biggest viral vector you could imagine. Sometimes in a good way, sometimes in a bad way, exactly.
And so it's tough. How do you walk that line of wanting it to be useful for a particular class of customer, while at the same time making sure it's safe to the maximum extent possible? So, this has been an amazing episode. A lot of work to do. Obviously we're really excited about this, and very excited to bring you all along as we figure this out. If you've not tried OpenClaw, whether or not you've tried Plus One, you should definitely get in on this paradigm. If you're interested: every.to/plus-one. We're starting to roll out invites from the waitlist, and we're improving it all the time. Just super, super excited about the future. Thank you both for joining. Thank you. Thank you for having us. Oh my gosh, folks, you absolutely positively have to smash that like button and subscribe to AI & I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show; it's a journey into the future, with Dan Shipper as the captain of the spaceship. So do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.