The AI Daily Brief: Artificial Intelligence News and Analysis

The Coolest Agents I've Built So Far

21 min
Mar 14, 2026
Summary

The host conducts a 16-agent tournament (Agent Madness) to determine the coolest AI agent he's built in 2025, showcasing a diverse portfolio of projects ranging from individual AI advisors (Holmes) to company-wide strategy agents (Mycroft). The episode reveals a significant industry shift toward agentic AI development and demonstrates practical applications of agents in enterprise consulting, research automation, and portfolio presentation.

Insights
  • There is a massive shift toward agentic AI development, with significantly more agents being built compared to 2024, driven by tools like Claude, OpenAI, and Perplexity Computer
  • Agents are evolving from one-time assessment tools to persistent, continuously-updated systems that improve over time with ongoing interactions
  • The most valuable agents combine multiple data sources (web searches, transcripts, interviews, external knowledge hubs) to provide contextual, personalized recommendations
  • Agent ecosystems work best when built with clear role specialization (individual advisors vs. company strategists) and connected through shared knowledge bases
  • The future of professional representation may shift from static portfolios to interactive agent representatives that can dynamically showcase work and capabilities
Trends
  • Shift from one-time consulting engagements to persistent, continuously-updated AI advisory systems
  • Multi-agent ecosystems with specialized roles (Holmes for individuals, Mycroft for companies, 221B as knowledge hub) becoming standard architecture
  • Integration of voice agents and Slack-based interfaces as primary interaction channels for enterprise AI tools
  • Automated research and data aggregation becoming core agent functionality for staying current on AI trends
  • Agent-based portfolio and job matching emerging as alternative to traditional resume-based hiring
  • Self-directed learning programs (AIDB New Year, Claw Camp) as scalable alternatives to traditional consulting
  • Maturity mapping and benchmarking tools replacing traditional quadrant-based AI evaluation frameworks
  • OpenClaw becoming dominant platform for building persistent, production-grade agents at scale
  • Knowledge hub architectures (221B model) enabling multiple agents to share updated intelligence
  • Increased focus on agent observability and monitoring (Mission Control Center) as agent portfolios scale
Topics
  • Agent architecture and design patterns
  • Multi-agent ecosystem development
  • AI adoption strategy and roadmapping
  • Enterprise AI consulting transformation
  • Agentic knowledge bases and research automation
  • AI tool recommendation systems
  • Slack-based AI agent interfaces
  • Voice agent deployment
  • AI skills training programs
  • Agent portfolio presentation and job matching
  • AI maturity mapping and benchmarking
  • OpenClaw agent development
  • Continuous learning and recommendation updates
  • Enterprise AI governance frameworks
  • AI use case tiering and categorization
Companies
OpenAI
ChatGPT mentioned as one of the major platforms driving the agentic shift in AI development
Anthropic
Claude and Claude Code are primary platforms used to build multiple agents in the tournament
Perplexity
Perplexity Computer used to build an AI research library aggregating studies and surveys
Super Intelligent
Host's company providing voice agent deployment and AI strategy consulting services
Lovable
Platform used to build the AI Daily Brief website with a terminal theme interface
People
NLW
Host of The AI Daily Brief podcast and creator of all 16 agents in the tournament
Quotes
"There has been a massive shift over the last three to four months. It is an agentic shift."
— NLW, Opening segment
"Rather than this type of assessment being a one-time thing, it can just be persistent and ongoing."
— NLW, Holmes agent discussion
"Mycroft is your digital chief AI officer."
— NLW, Mycroft agent discussion
"The most active power users are using multiple models for different purposes. In fact, the average user is using something like 3.5 different models."
— NLW, ModelMog discussion
"I think Mycroft might be the best way that I've found so far to scale how I and the teams around me help people figure out AI."
— NLW, Final tournament decision
Full Transcript
16 agents enter the arena, one leaves. Today we are doing a head-to-head competition to see what is the coolest thing that I have built with AI so far this year. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Welcome back to the AI Daily Brief. We have got a fun little operators bonus episode for you guys today. You might have heard me mention over the last couple of days, agent madness. The TLDR on this thing is that one, everyone is building way more agents than we were last year. There has been a massive shift over the last three to four months. It is an agentic shift. And between OpenClaw and Claude Code and Codex and Perplexity Computer and all these things, everyone is getting agentified. Two, a lot of the people who are going through that are in this community. Many of you have participated in the AIDB New Year's projects or Claw Camp or Enterprise Claw. Many of you are building and sharing things in the AI operators community that goes alongside AIDB. And three, since it is March, the season of March Madness, the big NCAA tournament, one of the coolest sporting events of the year, I thought we would hold our very own bracket to figure out what is the coolest agent that people in this community have built so far this year. Now, as I was planning this, I started thinking about just how many different things I had built this year. Not all of them have been fully formed. Not all of them have been all that useful. But they've all been, if nothing else, helpful in learning how to use some new tool or learning what isn't quite useful for me right now. So what we've done today is put 16 different things that I've built this year, all vibe coded or AI assisted builds, many agentic, up against each other in our own mini tournament as part of agent madness. I gave both Claude and ChatGPT a list of the projects to seed, and they actually came up with almost exactly the same seeding.
The brackets are not divided by theme. Instead, we have a diversity of different types of projects, so we can have some really strong head-to-heads. When it comes to who or what wins each of these matchups, I'm going to be ranking it based on a highly subjective concoction that includes technical complexity, usefulness in my daily life, things that I think have value beyond just me, and whatever X factor of I just particularly like the thing. We will keep track as we go through, and ultimately crown a coolest thing that I have built this year so far. You guys are getting a lot of behind the scenes on this one, so buckle up. Starting with bracket A, we have the one versus the eight seed, the Holmes agent versus the AIDB website. Let's talk about the AI Daily Brief website first. This is perhaps the least technically complex of anything that I've built. I maintain this guy with Lovable, and it's basically just meant to be the home for all things related to the podcast and the broader ecosystem. It's a place where I can always dump whatever the latest thing that I'm working on is, so despite having a sprawling mass of different URLs that I mention on the show, you can always just go back to AIDailyBrief.ai and be assured that you can find the thing there. I continue to be partial to this silly little terminal theme, even though it is completely distracting from an information discovery perspective. And the technical stack for this one is just Lovable. Overall, this ranks about as low as it gets on technical complexity. After all, it's just a website built with Lovable, although it ranks higher on functional utility. And of course, I have some affinity for it. Next up, though, we go to the Holmes agent. One of the things that you're going to hear about is a small ecosystem of agents that I'm working on, sort of an agentified next generation approach to the type of thing we do at Super Intelligent, which is helping people figure out their AI strategy.
At Super Intelligent, we deploy voice agents across your company and provide you recommendations around use cases and change management initiatives. And that has been an extremely useful product for lots and lots of companies. The fact that we can deploy voice agents across a much wider set of people than traditional consulting discovery processes means that we get a much better cross-section of the voices that actually represent your employees, and that I think is really useful. My guess, though, is that agents are going to take it to a whole new level, and that rather than this type of assessment being a one-time thing, it can just be persistent and ongoing. Rather than recommendations getting stale and needing to be updated at some regular frequency, they can just be continuously updated based on the new capabilities as they change. As I'm building these individual agents that are part of that system, they all have names related to Sherlock Holmes. And the first one we will talk about is, in fact, Holmes. What Holmes cares about in this ecosystem is not recommendations for the company as a whole, but recommendations for the individual. Holmes has a web interface where he can interview you about your work and the AI you're using and where you can ask him for specific recommendations around AI tools. And he also has a presence in Slack. You can talk to him via DMs or by calling him up in a thread. Based on the conversations that Holmes has on Slack or on the web, it builds a case file all about each individual. The case file includes identity and role, daily work, their AI profile of what tools they use and their comfort level, as well as a deeper profile that includes things like working style, decision-making, notable insights, strategic context, communication preferences. 
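The case file described above is essentially a structured, per-person profile that gets enriched with every conversation. As a rough illustration only — the field names and update method here are my assumptions based on the description, not the actual Holmes implementation — such a schema might look like:

```python
from dataclasses import dataclass, field

@dataclass
class CaseFile:
    """Hypothetical sketch of a per-person dossier like the one Holmes builds."""
    name: str
    role: str
    daily_work: list[str] = field(default_factory=list)
    tools_used: list[str] = field(default_factory=list)       # AI profile
    comfort_level: str = "unknown"                            # e.g. "beginner", "power user"
    working_style: str = ""                                   # deeper profile fields
    decision_making: str = ""
    notable_insights: list[str] = field(default_factory=list)
    strategic_context: str = ""
    communication_preferences: str = ""

    def update_from_conversation(self, notes: dict) -> None:
        """Fold notes from a new Slack or web conversation into the profile."""
        self.tools_used.extend(
            t for t in notes.get("tools", []) if t not in self.tools_used
        )
        self.notable_insights.extend(notes.get("insights", []))

# Example: the profile grows as conversations accumulate.
cf = CaseFile(name="NLW", role="Podcast host")
cf.update_from_conversation({"tools": ["Claude Code"], "insights": ["codes daily"]})
print(cf.tools_used)  # → ['Claude Code']
```

The key design point the episode describes is that this profile is persistent: recommendations are generated from the accumulated dossier rather than from a one-time assessment.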
From that, Holmes provides a set of recommendations. One that it made for me: build an AI inquiry triage assistant. "Since you're spending significant time responding to sponsorship and speaking inquiries, create a Claude Code app that categorizes and drafts responses to inbound emails. This leverages your existing coding comfort while solving an immediate time sink." It even provides a bit of an idea for how to start. From there, you can rate it whether it was a good recommendation or not that helpful, and whether you've tried it. Now, one cool thing is that once a week, Holmes is going to update its recommendations automatically by pulling from another agent and knowledge hub, 221B, that we'll talk about in a little bit. So that is Holmes. Holmes is live, it is in testing right now, and although I do have that fondness for AIDailyBrief.ai, obviously I'm going to give this one to Holmes. Next up, we have my first OpenClaw entry of the tournament. This is my OpenClaw coder, which I call WittyBuilder. Now, this was the first OpenClaw agent that I actually built, and what I was really excited about was the idea of being able to vibe code via Telegram from wherever I was, like the gym. I got it all wired up, built some things, but ultimately this did not really enter into my rotation. Part of that was of course that Claude Code released their remote control feature, but it also just ended up not being a really important part of my workflows. Now I have subsequently built another OpenClaw coding bot which is more useful because it takes signal from a researcher, which we'll talk about in a little bit, and writes it to a database in an automated way, so I'm kind of counting those two together. The OpenClaw Coder is up against the one thing that I've tried to build with Perplexity Computer. That is an AI research library. There is obviously a huge amount of research and surveys and studies all about AI adoption right now.
And because we use so much of it and capture so much information about it, I wanted to automate that aggregation and give people access to it. This was a one-shot build with Perplexity. It's something I've also built a version of through Claude Code, but Perplexity Computer did a great job with the one-shot. Now, one place where it fell down was in the generative search. It did not do a good job out of the box with actually allowing people to interact with it via natural language rather than just browse the library, but I think as an initial attempt, it has a lot of promise. I think probably if I had given it a little bit more time, the research library could jump over the OpenClaw coder, but especially with the introduction of the database writer bot, I'm going to give this one to OpenClaw coder. Next up, we have a project inspired by what is probably my favorite movie of all time, Good Will Hunting. You might remember the scene where genius Will Hunting is being called upon to do all of these interviews with consulting firms that he really doesn't want to go to. He, of course, wants to blow them off to hang out with his new girlfriend, Minnie Driver. So what does he do? He sends his best friend Chucky, played by Ben Affleck, to take the interview on his behalf. The interviewing firms, of course, do not know that they are not speaking with the genius that they have heard so much about, and are a little taken aback when Chucky, posing as Will, asks them to whip out their wallets and give him money on the spot. Retainer. So this agent is called Chucky. And where it came from is there is very clearly a new role that is going to be incredibly important to basically every company for AI builders and orchestrators. Simultaneously, there are going to be a lot of people saying that they have those skills. But how do you show that? A resume isn't going to do it, nor is any sort of traditional cover letter.
And even a portfolio approach kind of falls down because you can't just drop a screenshot and say good enough. So what Chucky is, is an agent that acts as an agent builder's representative. Instead of sending a potential partner or consulting client your portfolio, you send them your agent representative, Chucky, who can interact with them and tell them about what you've built. So for example, imagine that I was someone trying to decide if I was going to work with NLW. I could press "What are NLW's best examples of real work?", or one of the other guide questions on the side, and Chucky would be able to interactively pull up information about different applications that I built. For example, it pulls up Holmes. Now when you click into Holmes, you can see that Chucky is both sharing a bunch of information about Holmes, including things like screenshots, but also providing some context. These things all of course link to the public landing pages of those tools so you can get a more full experience. You can also look at the full ecosystem, which is a visualization that I'm still playing around with, to quickly get a sense of the full breadth of things that the person has built. And if you want to skip the conversation, you can still go to a portfolio view where you can go directly to any of the agents that the person is sharing. I'm still very much in the midst of experimenting with this one, but I'm super excited about the possibilities and think it could be a really cool way to do jobs matching in the future. Now, once again, back on the other end of the spectrum, we have AIDB New Year's. This came together almost as a little throwaway idea as part of my New Year's episode, where I knew that a ton of folks would be looking for a way to up their AI skills coming back into 2026, and so put together a self-directed 10-week program, and ultimately more than 7,000 people have participated.
Each week has a bunch of information, plus the ability to show off what you've built, leading to just thousands and thousands of submissions of people sharing their work and the cool things they've built throughout the project. AIDB New Year would go on to inspire another self-directed program that'll be in our list a little bit later, but definitely has a fond place in my heart as the kickoff of what I think is going to be a pretty consistent approach for me going forward, which is these sorts of self-directed programs. In terms of overall impact for people outside of me, obviously AIDB New Year is biggest, but because of the future orientation and what I think it could be in the future, I'm going to give this one to Chucky. Now in our final match in bracket A, we've got another part of the future superintelligence style agent ecosystem in 221B up against a fun little project called ModelMog. Let's look at 221B first. This is effectively the brain that powers both Holmes and Mycroft, who you'll meet later. It's an agentic knowledge base that automatically ingests transcripts from the show, does automated web searches around key platforms, interviews me on a weekly basis to try to understand what's trending in the conversation, and ultimately puts together dossiers of what matters in enterprise AI, which can be used by agents like Holmes to update their recommendations on what people should be using AI for. It's less flashy because it's not the thing talking with people, but it's the power center that makes the thing work. ModelMog is a fun little project that I honestly haven't really given enough time to and might come back to at some point. There are a bunch of platforms out there that are designed to determine which output of models you like for different purposes, but the idea of ModelMog is to see people's express preferences for models based on different use cases.
One of the things that's clear when we survey people as part of our monthly pulse surveys is that the most active power users are using multiple models for different purposes. In fact, the average user is using something like 3.5 different models. So ModelMog brings up a use case like generating a 15-second product demo video and asks you between two different options which one you would use. I put this together walking around the rocks of Jose Ignacio in Uruguay and honestly think it has even more opportunity to be something cool in the future. Ultimately though, while Mog is a fun little side project, 221B I think is going to be much more significant and a really important part of a larger agentic system. Moving over to bracket B, we start off with Claw Camp versus Mycroft. Claw Camp is of course another self-directed program, this time to learn how to use OpenClaw to build agents and teams, that was a direct descendant of AIDB New Year. The inside of the platform is a little bit different, although it has some similar features like projects, the ability to check in, and the ability to share the agents that you've built. Mycroft is another AI advisor agent that lives in large part in Slack. If Holmes is concerned just with what the individual is doing with AI, Mycroft is building an overall company strategy. You can talk to Mycroft via DMs or in threads, or you can also chat to him on the web. Like Holmes, Mycroft is going to do an initial intake discovery interview, and with all of those conversations, build out an AI roadmap and strategy for the entire company. That's going to include plans for use cases, updates and changes around data and systems integration, plans for governance, approaches to upskilling people, and goals around outcomes and ROI.
As it chats with you, it's building a dossier on the individual as well as on the company, and that roadmap is getting continuously improved over time as it learns more both about the company through conversations, but also gets access to outside information via 221B. In short, Mycroft is your digital chief AI officer. Again, I love Claw Camp, but there's no question here, this one is going to Mycroft. By the way, for those of you who are not familiar with the Sherlock Holmes universe, Mycroft is Sherlock's even smarter older brother who basically secretly runs the British government behind the scenes, which is obviously why Mycroft got to be the person who's thinking about the company as a whole. Next in Bracket B, we have two projects that relate to Open Claw. One is my Open Claw Chief of Staff, which is sort of a stand-in for actually six different agents. Basically, one category of agent that I built with OpenClaw are effectively just project managers for all the different things that I've got cooking at any time. We've got AIDB, we've got Superintelligent, we've got initiatives like these training programs, the Intel and Benchmarking Service, and lots of other things. Each of those has their own project manager, who is continuously interacting and checking in on a day-to-day basis, and then the chief of staff's job is to keep all of that information organized. For our head-to-head, the chief of staff is up against the Mission Control Center. Once I got up to 10 agents, I found that I was looking for an interface that was different than just the Telegram interface for keeping more persistently relevant information, as well as tracking problems like overdue heartbeats or cron jobs that weren't firing. This mission control center was one of the most technically difficult things I've had to build, and just took an endless amount of back and forth with my Claude build partner to get right, and is something that I found extremely useful. 
The chief of staff, meanwhile, hasn't really gotten off the ground as much for me, which I think in part is because I didn't empower it with my systems. I've got OpenClaw running off of a Mac Mini like so many others, and I haven't really wired it yet into places like Slack where it could be automatically getting context. So in effect, the project managers and the chief of staff are just kind of externalized to-do lists that are effectively competing with Notion for me, which is cool, but ultimately not all that exciting. The third head-to-head in bracket B is a forthcoming benchmarking tool against what is easily my most useful OpenClaw agent. On the OpenClaw side, we're talking about the Witty Radars researcher. Radars refers to Opportunity Radars, which is a use case database product that organizes use cases across different functions into three categories: Primetime, which means it's viable for just about every company; Emerging, which means you need some setup, but most companies can do it and get value out of it; and Frontier, which means that while it's valuable, you're really going to have to have the right setup for it. I think it's a useful way of organizing use cases, and the Witty Radars research bot's job is to be continuously hunting 24-7 for new inputs from out in the world, studies, research, surveys, etc., that would both generate new use cases as well as inform where use cases should be in that tiering of Primetime, Emerging, or Frontier. This sort of persistent research has been my most useful application of OpenClaw overall. Maturity maps, meanwhile, are the forthcoming benchmark that I'm most excited about. I think our old visualizations like the Gartner Magic Quadrant have never been less useful than they are today, and so I wanted to design something that was a better fit for the world that we're actually in.
Maturity maps organize AI and agent readiness across six different vectors, including use cases, systems, data integration, outcomes, people, and governance, and then use a simple visualization of on-track, behind, or ahead. On a quarterly basis, we update the maturity maps function by function to where we think organizations should be in each of those areas, and then benchmark subscribers can see where they fit relative to the on-track line and where we think organizations should be. Now, the maturity maps weren't agentic, but they were very much a co-production of me and AI, and I think that they have a ton of promise, but for now, I'm going to give the nod to the Witty 24-7 OpenClaw research agent. In the last matchup, we have the Agent Madness bracket platform itself versus the beta Superintelligent Compass platform, which is effectively a power tool for AI adoption folks. Compass, you can see, has integrated things like the Opportunity Radars and the maturity maps, as well as the ability to build roadmaps, upload company context, and do the sort of self-directed assessments of the type that we do at Super Intelligent. Now, Compass continues to evolve, but I think when it comes to the tournament of the coolest things that I've built this year, Compass definitely gets the nod. So now you have seen all 16 of these things that I've built this year, and I won't dwell too long on the matchups, but for the sake of completeness, in this mini-tournament, between Holmes and the OpenClaw Coder, it very obviously goes to Holmes. I mentioned OpenClaw Coder barely got the nod over the Perplexity researcher, and really has only gotten more useful recently because of my database writer, whereas Holmes I think has a lot of potential to be a very cool, valuable tool for lots of people going forward. Chucky vs. 221B is harder.
The 221B research hub is the sort of behind-the-scenes engine that a lot of agents are going to need to be useful, but I just think Chucky's approach to presenting someone's build history is something that I feel like could be a really valuable form factor going forward. Mycroft versus Mission Control. As technologically challenging as Mission Control was, Mycroft wasn't particularly easy either. And I think the idea of a digital chief AI officer just really has legs, so we're going to give this one to Mycroft. In the OpenClaw Witty researcher versus Compass semifinal, I'm going to give it to the Witty researcher, A, as the last agent standing among the OpenClaws, and B, because I think that the way that Compass is evolving, it's kind of just going to be the enterprise house for a lot of these different things. So the Witty researcher gets the nod. Now in the bracket A final, Holmes versus Chucky, this might be the hardest one for me in the entire bracket. I really do think that it would be massively useful for every individual to have a Holmes in their Slack, giving them continuously updated recommendations around the AI that they should be using. But I also kind of have a feeling that Mycroft, despite being initially focused on the company's AI strategy, might swoop in and take over what his little brother is doing. Whereas, like I've said a bunch of times now, I think Chucky might be a form factor for the future in terms of presenting the work that you've done. So Chucky gets the nod and makes it to the overall tournament final. Over in the bracket B final, this one is a little bit more of a rout. Mycroft is not just taking in information, it's doing something valuable with it in a customized and ongoing way, building out the strategy for your company over time, so Mycroft makes it to the final. Now in terms of who wins, what this comes down to is where things are now.
I have grand plans to introduce Chucky to all of you who are doing Claw Camp and to see if we can't do some jobs matchmaking. However, that hasn't happened yet, and I'm still working out a lot of kinks in the system. Mycroft, meanwhile, while it is in testing, is something that I am very excited to release soon and I think might be the best way that I've found so far to scale how I and the teams around me help people figure out AI. So Mycroft is the champion of this AgentMadness.ai mini tournament. Hopefully this is a fun way to give you a behind the scenes in all the things that I am building and thinking about. Remember, if you want to enter your agent into AgentMadness, you can find that at AgentMadness.ai. And if there was anything that you heard about that you thought was particularly interesting, like the Mycroft or Holmes bots, I'll put a link to sign up for more information for those on AIDailyBrief.ai as well. For now, that is going to do it for this bonus operators episode of the AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.