Authority Hacker Podcast – AI & Automation for Small biz & Marketers

Your Claude Code Tokens Are Disappearing

57 min
Apr 3, 2026
Summary

This episode covers the leak of Anthropic's Claude Code source code (512K lines), the implications for competitors, and practical strategies for managing token usage as AI companies introduce stricter rate limits. The hosts also discuss Cloudflare's new open-source CMS M-dash, OpenAI's ChatGPT ad platform launch, and emerging features like Kairos and the Mythos model.

Insights
  • The Claude Code leak eliminates Anthropic's competitive moat by exposing architecture, system prompts, and sub-agent logic, enabling competitors to replicate features within weeks rather than months
  • AI companies are shifting from subsidized B2C models to monetization strategies ahead of IPOs, with rate limiting and ad platforms replacing free tier generosity
  • Token efficiency optimization will become a critical business skill as pricing normalizes, similar to how fuel efficiency became important after 1970s oil crises
  • Serverless architectures (like Cloudflare's M-dash) represent a fundamental shift from always-on server models, enabling 60-70K monthly views at zero hosting cost
  • OpenAI's strategy appears to be splitting into professional (Codex, premium) and consumer (ChatGPT with ads) tiers to maximize revenue across different user segments
Trends
  • AI model commoditization accelerating as source code leaks and open-source alternatives reduce proprietary advantages
  • Rate limiting and usage restrictions becoming a standard monetization lever as companies transition from growth-at-loss to profitability
  • Token efficiency consulting emerging as a new service category as businesses optimize AI spending
  • Serverless and edge computing replacing traditional hosting for AI-powered applications
  • Multi-model strategy adoption where businesses use cheaper models (Haiku, Sonnet) for simple tasks and reserve expensive models (Opus, Mythos) for complex reasoning
  • AI-native CMS platforms challenging WordPress dominance with agent-optimized interfaces
  • Advertising becoming primary monetization for consumer AI products, with CTR and placement optimization driving revenue
  • Memory consolidation and background processing features becoming standard in AI assistants
  • Cross-platform data portability reducing switching friction between AI services
  • IPO preparation driving aggressive feature releases and pricing adjustments across AI companies
Companies
Anthropic
Claude Code source code leaked (512K lines), exposing architecture, system prompts, and unreleased features
OpenAI
Launched ChatGPT ads platform with $100M ARR from 600 advertisers; self-serve ads rolling out to US/Canada/Australia/NZ
Cloudflare
Launched M-dash, open-source WordPress alternative built on Astro with serverless architecture and AI-native features
Google
Gemini offering data import from ChatGPT/Claude to reduce switching friction; Gemini 3.5 Flash provides cost-effectiv...
Vercel
Cloudflare competitor built around the Next.js framework; M-dash is positioned as a direct alternative
Authority Hacker
Hosts' company; rebuilt website using Astro framework; offers AI accelerator course and Claude Code training
People
Mark Webster
Co-host discussing the Claude Code leak, token optimization, and AI platform developments
Gael Breton
Co-host providing insights on token efficiency, M-dash architecture, and AI monetization strategies
Boris
Claude Code creator who attributed the source code leak to human error rather than intentional release
Tarek
Anthropic employee who announced peak hour usage multiplier changes on Twitter/X
Joost de Valk
Yoast SEO creator already migrating his website to Cloudflare's M-dash platform
Quotes
"They just fucked it up. It happens, you know, it's just it's not great, but it happened."
Gael Breton (early in episode)
"Now it's kind of like open the floodgates to everyone kind of ripping off everything they do basically."
Gael Breton (discussing leak implications)
"We're all drunk on free tokens right now. And then that party is going to slowly go down."
Mark Webster (on token pricing normalization)
"It's the start of that. Yeah, I think so. It's like they've given a lot."
Gael Breton (on enshittification of AI pricing)
"I think they really captured the what's been missing with all this kind of like AI web design stuff."
Mark Webster (on M-dash CMS potential)
Full Transcript
If you're using Claude Code right now, two things happened this week that affect you directly. First, Anthropic leaked Claude Code's entire source code. That's half a million lines of code. All the unreleased features, system prompts, everything. We'll go through what was revealed and what that means for business owners and marketers. Second, usage limits are burning faster than ever. We've got some practical tips to stretch your tokens further. And Cloudflare launched M-dash, an open-source WordPress alternative built on Astro, the same framework we vibe coded our website on. I'm Mark Webster. I'm joined as always by my co-host and co-founder of Authority Hacker, Gael Breton. Yo, it was a big leak. I think I read the number 512,000 lines of code, 1,906 files. Basically, the entire source code of Claude Code was leaked. Like, how did this happen? Did they just vibe code it? And was that some kind of mistake? So that's what everyone was saying on X, but the guy who created Claude Code, Boris, said that it was actually a human error. They just fucked it up. It happens, you know, it's just it's not great, but it happened. And yeah, that's a big, big blow for Anthropic, who historically has been one of the only developers of these kinds of harnesses to keep it closed source. Like, almost everyone else is open source. So for example, Codex from OpenAI is open source. You've had access to the code for a long time. But Claude Code was trying to build a bit of a moat by keeping things secret, and now there are no more secrets. And I think it's very sad. They did have a decent moat. They had a very loyal following of users. Even when, you know, you could argue that Codex, OpenAI's model, was getting better for certain use cases, like coding, still a lot of people were like, nah, I think Claude's good enough for what we want to use it for right now. No point in switching. I like the features. I like how it works. And, you know, that was fine.
But now it's kind of like open the floodgates to everyone kind of ripping off everything they do, basically. And they may, I don't know, do you think they may even have to make Claude Code open source completely? Nah, I think they'll just keep going. The point is, now you can give this codebase to your AI agent and be like, hey, build the same feature in my AI agent. And it's going to take, you know, half an hour, and you'll get a workable version. You still need to refine it for your own company and so on. But the point is, it's that simple. You have the whole blueprint. You have every single system prompt and how they use it and how they use sub-agents, et cetera. That was hidden behind the scenes. The whole architecture has been leaked out, which means in the near future, probably all competitors will be able to rip off the best features of Claude Code and replicate them in their own way inside their tools, which means Claude has to rebuild their moat from scratch. And that's kind of a blow to a company that has probably been doing the best job in the industry at this. At the same time, if you consider the rate at which Anthropic is shipping features, which is pretty much every day there's something new in Claude Code, it will only take a few new features after this leak for them to start rebuilding a moat and people wanting to stick with it, on top of the fact that even with the recent rate limits, you still get an incredible deal in how much usage you get for how much you spend with a subscription. Though we do actually have some insights into some of those new features, because they were also released in this leak. Yeah, because they have betas, right? Let's talk about some of those now. I want to start with Kairos, however you pronounce it. It's like an always-on background mode.
And they have this kind of auto-dream feature where it kind of thinks about your work while you sleep. Sounds kind of spooky. What's going on here? Well, think about how humans consolidate their memories. Your memories actually get consolidated inside your brain while you sleep, which is why they call it auto-sleep, because they mimicked that logic. When you're not using it, the model kind of goes back through your chat history and the things that were done, et cetera. And essentially it makes a summary of what happened and keeps a much shorter version of that inside its memory files, to get a little bit of continuity between chats and the ability to work on the same codebase, file system, et cetera. But again, when your memory is consolidated in your brain, you don't remember every single detail; you kind of get a summary in your brain as well. And so they've just mimicked that. There was some of that already, but I guess they are just building more systems in there that will allow a feeling of continuity when you work inside a project in Claude Code. Yeah. Apparently it's going to write like a daily observation log. Yeah, it's going to judge you. All the chats you've had that day, everything you've said. Yeah. So I guess you're right, it's kind of mimicking how humans think about and reflect on things they've done previously, maybe. And I've even seen another leak that apparently they're working on an always-on mode, a little bit like an always-on Claude that wakes up and thinks of new features it could build into your codebase, et cetera. It wasn't in this leak specifically, but I've seen other leaks that talk about that. So you can see that Anthropic is trying to make Claude work when you're not at your computer anymore. And that's quite interesting, actually.
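The consolidation idea described here can be sketched abstractly: compress a long chat history into a short memory file that future sessions load instead of the full transcript. Everything below (the function name, the keep-the-last-few rule standing in for a real LLM summary) is invented to illustrate the concept, not Anthropic's actual implementation:

```python
# Toy "memory consolidation": keep only a capped summary of each session
# so the next session starts from a small file instead of the full history.
def consolidate(history: list[str], max_items: int = 3) -> str:
    """Keep the last few exchanges as a stand-in for a real LLM-written summary."""
    recent = history[-max_items:]  # older details are dropped, like forgotten memories
    return "Session summary:\n" + "\n".join(f"- {line}" for line in recent)

history = [
    "set up Astro project",
    "fixed nav bug",
    "added RSS feed",
    "deployed to Cloudflare",
]
print(consolidate(history))  # only the most recent work survives into memory
```

The real feature would presumably use a model to write the summary, but the shape is the same: a lossy, much smaller artifact carried between sessions.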
They also have this Ultra Plan feature, it's called, which apparently offloads complex planning to a cloud version of Opus. It says for about 30 minutes or so. Like, what does that even mean? How is it able to access your local context when it's doing that? I'm not exactly sure how it's going to work, but it looks like a bit of a deep-research-for-planning mode, kind of like a deeper version of plan mode. A cloud version of Opus. It's always a cloud version. Opus never runs on your computer. So I think what this means is, you know, when you use Claude Code in the desktop app, you can choose to use your local files or to use a cloud version of it. It depends where the Claude Code harness runs. Is the Claude Code instance running on your computer, which then calls the models to do things, or is the actual terminal that calls the model running in the cloud? And that means, for example, if you have environment variables set up with API keys, they need to be set up on that cloud environment, and it doesn't use your local resources. It's a bit technical, basically. But it won't change much. Another way of looking at it is, typically when you're doing something, you plan it and then you do it, right? But a plan is often kickstarted by an idea, which you could have anywhere, like in the car, on the bus, out and about. So if you're able to kick off the planning on your phone, that planning happens remotely, in the cloud, and since everything's in the cloud, you can continue that plan in the app. Though, you know, you can also do remote control. There's also remote control, which would run it on your computer. So Anthropic has these cloud environments. I think it's helpful when you're collaborating with multiple people on the same project, because then the thing is hosted on their infrastructure.
And you can continue there with multiple people. But, you know, Anthropic now releases so many features that do the same thing. So remote control is kind of like the local one: it runs locally, but you drive it remotely. Whereas with this, it really depends: does it run on your computer or does it run on Anthropic's computer? Basically, that's the main thing you need to think about. And permissions, et cetera, would be different. It's a little bit technical. The idea is it will probably work only on Anthropic's cloud environment and not on your local computer, and it will be deep research for plan mode. Think about it like that. And they also have this undercover mode, it's called, coming out, which apparently hides the fact that it's AI. And I think there was some talk about the plan to release this to autonomously go around and update open source code and improve it, things like that. It looks like a dev mode. It looks like what you'd have if you're developing Claude Code, because this undercover mode also mentions Sonnet 4.8 and Opus 4.7, for example. It actually says in the prompt, do not mention which model is used, so that if you're using some kind of development model, it's not going to show up in the release notes on GitHub, for example. But the point is, they name the models in the instructions, in the prompts for the model. So there are some new models that aren't really mentioned anywhere yet. So to me, this undercover mode is kind of like what Anthropic employees are using on pre-releases of Claude Code, that kind of stuff. I guess. Not a hundred percent sure. Again, this is all interpretation, it's not a fact for sure. We only have the client part of the leak, right? We don't know what's running on Anthropic's servers. And so we only have half the story here.
And speaking of models, there's talk of a new tier of model above Opus. This is called Mythos. Can you explain a little bit about this, like what's the difference between that and Opus? We don't know a lot, right? Again, these are unreleased things. They haven't communicated about it, and all we have is leaks. And actually, I think it was a blog post that leaked; it wasn't in the source code. And I think it was saying that it was only coming in Q3, which is quite far away. And it looks like Anthropic is rumored to IPO this year. So it would make sense that they have some big splashy releases just before the IPO to inflate their valuation. Because it's quite dangerous: if the stock tanks at the IPO, then the whole industry could crash, actually. They need to kind of guard against that. So they're probably sandbagging a little bit now, and they'll release their big stuff then. My understanding of Mythos is that it's one tier higher than Opus and it's extremely expensive to use right now. In their blog post, they say it's only available to researchers as well, because it can literally just hack anything. It had found a bunch of vulnerabilities in a lot of software a lot of people use. So they're probably preparing, and almost communicating to all these open source projects people use: hey, there are all these vulnerabilities our model found that could be exploited when this model is released. So there's a security concern. And there's a cost concern with it. But my understanding is this is going to be similar to the Pro model on ChatGPT, which most people don't have access to. The only people who have access to it are the people on the business plans and the people on the Pro plan that costs $200 per month. At this point, I suspect OpenAI is going to release a $100 plan this week that will also give access to this model.
But the idea is, this is a very, very slow model that reasons for 30, 40 minutes, runs for a really long time, but it's arguably smarter than anything else. It's way smarter than Opus, et cetera. Google also has something similar called Gemini DeepThink. And it's mostly used for research, like genetics, mathematics. Science-based stuff. Yeah. So usually most people are too dumb to get good value out of these models, because literally the problems you submit to them are not complicated enough that you'd see a massive difference from a thinking model. And it's not really for your day-to-day tasks. It's for this kind of high-level stuff. And as a dumb person that's probably never going to use that model, do any of these leaks actually matter to me, the small business owner? Because it's like, yeah, okay, Kairos, Ultra Plan, some cool features there. But it seems a bit of a whatever; I'm just going to keep using Claude Code to do my marketing. The continuous improvement of the tooling. I think what matters to you is that a lot of the features that exist today in Claude Code and in this leak are probably going to make their way to the competition. And what that means is, you know, Anthropic was basically building up their moat so they can charge more and start building up their profit margins, hopefully before this IPO as well, so that they can actually show some good revenue numbers that will inflate their valuation. They want to make money, right? That's pretty much the plan. And this kills that plan a little bit, because the moat was not just the model. It was all the features in Claude Code that you were getting attached to. But now the competition can copy them much more easily, rather than trying to guess what the prompt is behind the scenes and how the architecture works on the stuff that isn't shown.
Now they can just literally read it and see how it works, which means it will take just a few weeks to see a lot of these features pop up in Codex, in OpenCode, in Cursor, et cetera, because they will 100% use this. That means Anthropic can't inflate their price as much, because their moat has been reduced, basically. So for you as a user, it's probably a good thing, actually. So essentially for me as a user, even though Gemini and OpenAI are trying to replicate Claude Code as much as possible, it's never quite been there. And those subtle differences maybe would have stopped me from switching. But now, if they want, they can replicate it. Yeah, it's massive learnings for them. Friction is lower. Think about how much the engineers that built Claude Code are paid per year. Think about the amount of value there is in looking at that code and how they've done it and the logic behind it and so on. That's the point. And what that means is you'll be able to switch more seamlessly between these systems. And it also means that, because there's less of a moat for a company like Anthropic, they won't be able to overinflate the price they charge you against what you get, because the same features will be in the competitors' products, at least up to the point of the leak, basically. So, does this leak make you want to use Claude Code more or less? Are you more likely to switch from it? I mean, to be honest, my main issue with Claude Code right now is usage limits more than features at this point. Most people don't use it to the maximum of its abilities already. And the challenge is actually to build your setup around it. We don't need more features. For most people, it would take years before they actually use it properly, even if nothing new was released. People are the bottleneck, not Claude, basically. And then as you use it more, your token usage increases, and then limits become more of an issue.
Limits and models getting dumbed down and that kind of stuff. That's kind of the main issue. Let's talk about that, because that's been another big story over the last week: the Claude limit crisis. So basically, they silently introduced a peak-hour usage multiplier. Not silently. They did it during the promo, right? They were like, oh, you get two times more usage during off-peak hours initially. And then they switched it, last week, to you get less usage during peak hours. Yeah, so a peak-hour multiplier, a hidden peak-hour multiplier essentially, where during US business hours your tokens burn faster for doing the same amount of work. And this was a somewhat silent thing. It wasn't silent. It was announced. I mean, it was announced on Twitter, et cetera. It's not super public. They didn't email you. But if you went on X, there's a guy called Tarek who works for Anthropic, and he said it. So the way they introduced it is they gave you two times more usage outside peak hours initially. So that was a bonus. People were happy about it. And that ended on March 28th. Yeah, towards the end, a few days before the end of that promo, Tarek from Anthropic was like, actually, we are removing the extra usage outside of peak hours. So now the 2x usage off peak hours is gone. You're back to your normal usage during those hours, and you get lower usage during peak hours. Okay. So you get less now. I mean, that's not what you said before. The peak-hour multiplier, I pulled this from a megathread on Reddit, which basically said that they had silently introduced this. It sounds like they introduced it silently for like two days, and eventually said, oh, actually, we reduced the limits during peak hours. Yeah, that's true. Okay, fine. But I think it cost them a bit of goodwill.
I see a lot of people, I mean, people say, I'm going to move away from it, all the time. Are they actually? I don't know. But it's been a real problem for certain users. There were a few oddball cases that seemed to be genuine bugs, where people were sending like a "hi" message and using 30% of their limits, or, you know, before they even sent anything, they'd used 100% of their limits. And yeah, okay, a few bugs here and there. I think there were also a few people who just had a ton of MCPs in the background and were using it very inefficiently. And that was causing some issues as well. Yeah, and you can see that this is an Anthropic employee, she works on Claude at Anthropic, and she actually posted that they were aware that some people were getting their limits used up faster. So they probably introduced a bug at the same time as they reduced the limits, which burned way faster. It's true. Some days I jumped 25% over one task on the Max plan. So it was painful, basically. And they said this affected like 8% of users or something, but it seems like most people hit limits not because of the bug but because of the reduced limits. When they said they reduced the limit at peak time, they were like, oh, 7% of people are going to hit limits who wouldn't have hit them before, and that's not counting the bug. The bug probably affected way more. I can see where the confusion is coming from on this, because there are a couple of different threads here and people aren't sure which of them is affecting them. And also they're lawyering their way through it, because, okay, 7% of users, but it depends where you live. If you live in the US and this is your main working window, that's probably more like 20% or 25% of users. And then, you know, they count all their Asian users, where it's the middle of the night, for example.
And all the free users who don't use it much, or whatever. Exactly. So they lawyer their way around it the same way Google lawyers its way around the impact of updates, basically. And the reality is, from what I've seen, a lot of people are affected and are seeing lower usage limits. Now, in Europe, for us, it starts at 3pm and finishes at 9pm with the time change. Not too bad, to be honest. I do most of my work in the morning, so I can survive that. But for people who live in the US, it's quite painful. It's almost like an extra tax on using AI. So scheduled tasks are your best friend at this point. You can use scheduled tasks in Claude Code or in Cowork, and you can schedule them for off-peak hours and you'll get more usage. I showed my Notion system last week. I have a scheduled thing: I can set a due date and it will just run at the time that I set. So there are ways to get around it. It's not nearly as good, but welcome to... We warned people several weeks ago about that. And actually, I have this case study from a guy who's a big X user as well. He ran a test, Codex versus Claude Code, knowing that Codex was also running a two-times-usage promo until today. It's finishing today, actually, which is why I'm saying they will probably introduce their $100 plan very soon. There have been some hints about that as well. And you can see that for the same task, he had 93% usage left on his five-hour rate limit with Codex, whereas Claude Code, after consuming 80% of his five-hour rate limit, actually wasn't done yet. So the contrast in usage limits between Codex and Claude Code on the same task, on the same prompt, was quite significant. We don't know if that was affected by the bug. Hopefully it was. But, you know, the value you get out of a Codex subscription is a lot better. And you also have to consider Codex's 2x promo is ending, so it'll be half of that now. Still much better, but not as big of a contrast.
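The scheduling idea can be sketched as a crontab entry: kick off a non-interactive Claude Code run in the early morning, outside US peak hours. This assumes the `claude -p` headless print mode; the schedule, project path, and prompt are placeholders for your own setup, not something from the episode:

```cron
# Illustrative crontab entry: run a batch task at 05:30 local time, off-peak.
# "claude -p" runs a single non-interactive prompt; adjust path and prompt.
30 5 * * * cd /home/me/my-site && claude -p "Run the weekly content refresh skill" >> /home/me/claude-offpeak.log 2>&1
```

The same effect is available without cron via Claude Code's own scheduled tasks, as the hosts mention; the point is simply that batch work gets pushed into the cheaper window.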
So I want to get your take on how we can use tokens a bit more efficiently. But before that, I have a broader question. Is this all just a way for them to stop subsidizing their model? Is this the start of the enshittification of AI pricing? It's the start of that. Yeah, I think so. It's like they've given a lot. I mean, again, a $200 subscription is like $5,000 of API usage, if you use all the tokens, which most people don't, right? And also, the API price is not the cost for them. I think I saw something like they're losing 80 cents on the dollar or something. It's quite possible. Yeah. So they're essentially trying to buy market share. The thinking is, whoever is still there at the end can then jack up the prices and have a big, big payday. And the IPO is coming soon, so they need to show some numbers. By the time the IPO comes, towards the end of the year probably (again, nothing is announced, nothing is sure, but they're all kind of hinting at it, like wink, wink, it's coming), they need to start showing increased revenue numbers, and they're going to have to squeeze a bit more. Obviously, they still want to show high growth as well. So it's going to be a balancing act for them between the pricing and the growth. But yeah, it's quite interesting. They need to make more money, and all these companies are doing this. Even Google, for example, massively slashed the limits on Antigravity, because literally no serious person would use Antigravity other than for free. What is Antigravity? Antigravity is Google's kind of Cursor competitor. It's a VS Code fork. But the point is, because it's the worst of the three, only people who cannot afford OpenAI and Anthropic would use it. And so Google has the worst user base, one that doesn't spend any money.
And therefore they had to slash the limits, because, again, it was just costing them a lot. So across the board, these companies are slashing their expenses on the people who don't make them money, at least. Yeah. Does this feel a little bit like cell phone networks, in the way that they all tried to be this fancy feature-loaded thing, and at the end of the day they all just became kind of dumb data pipes, where all you really need is the data, and they aren't much more than that? I mean, everything is just becoming normal, right? It's becoming a utility, a new utility that you pay for. And they need to normalize their cost against their revenue. That's it. It's just becoming that. Eventually, it's just the reality. We're all drunk on free tokens right now. And then that party is going to slowly go down. Yeah. And I think that party might end quicker than a lot of people realize. So businesses are going to have to get a lot more efficient with the way they're using this. And for some businesses, which are operating on very low margins or operating inefficiently, it's just not going to be worth it to pay for tokens to do some of these things. A lot of people run fully LLM-driven pipelines. Skills are a good example, right? If your skills run just on Markdown files, you use a lot of tokens to interpret that Markdown into a workflow. The better way of doing it is to create Python scripts that the model can use to automate part of the work without using any tokens. A lot of the skills I build have a lot of Python scripts, because that just makes things more efficient. And that's the thing right now: nobody's trying to optimize anything because it's so cheap. But I suspect there will be a phase, sometime next year probably, where everyone's going to be like, oh, I have all these AI processes.
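A minimal sketch of that scripts-over-Markdown idea: instead of having the model "reason" through formatting rules described in a skill file on every run, the skill ships a deterministic helper the agent simply executes, so zero tokens are spent on the mechanical part. The task and function name here are invented for illustration:

```python
# slugify.py -- the kind of deterministic helper a skill could ship so the
# model runs it as a tool call instead of re-deriving the rules each time.
import re

def slugify(title: str) -> str:
    """Turn an article title into a URL slug without spending any LLM tokens."""
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)  # drop punctuation
    slug = re.sub(r"[\s-]+", "-", slug)       # collapse whitespace and hyphens
    return slug.strip("-")

if __name__ == "__main__":
    print(slugify("Your Claude Code Tokens Are Disappearing!"))
```

The model only needs enough context to know the script exists and when to call it; the hundred-odd tokens of rules it would otherwise re-read every run never enter the context window.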
I spend 10 grand a month on tokens because the prices went up so much, and now I need to optimize all these things. And I actually suspect a lot of our work next year is going to be communicating around that and helping people with that. Right now it's adoption, and eventually it will be optimization. It's kind of like gas-guzzling cars in the 1970s: when there's an oil crisis, suddenly there's a push for more efficient vehicles. Sounds very current. Same principle. Yeah. Is there anything people can do right now, today, though, to make their token usage more efficient? If you use Claude Code, I think one of the big ones is that Sonnet 4.6 is actually a very good model, and for many tasks you can use it. And they've now changed how skills work so you can select a model per skill. So even if you're set up on Opus, you can say, when you run that skill, this skill runs on Sonnet, for example. So if it's a simple "upload this" or "reformat this" or whatever, honestly, switching to Sonnet, I think it burns limits like three times slower. So being able to granularly assign models to things is a good way to reduce that. There are three models, right? So you've got Opus, which is the top one, Sonnet, the middle one, and Haiku, which is the bottom one. What's the use case for Haiku? Haiku is for simple tasks. It's like, format this, read this and summarize it, go and find this research, find this information. It's there to eat a lot of tokens very cheaply and do these token-heavy but cognitively simple tasks for you. That's why, for example, when Claude Code goes through your codebase and researches things, it actually uses a Haiku subagent.
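The tiering logic described above can be sketched as a simple router: the cheapest model that fits the task gets the job, and the expensive tier is reserved for planning and hard reasoning. The tier names match the episode, but the cost figures and routing rule below are made-up placeholders, not Anthropic's actual rates or logic:

```python
# Illustrative three-tier router: pick the cheapest model that fits the task.
# Relative costs are invented placeholders, not real Anthropic pricing.
RELATIVE_COST = {"haiku": 1, "sonnet": 3, "opus": 15}

def pick_model(task_kind: str) -> str:
    """Route mechanical work to Haiku, plan execution to Sonnet, planning to Opus."""
    if task_kind in {"summarize", "format", "fetch", "search"}:
        return "haiku"   # token-heavy but cognitively simple
    if task_kind in {"edit", "execute_plan", "upload"}:
        return "sonnet"  # good enough once a plan already exists
    return "opus"        # complex planning and reasoning only

# A day's mixed workload mostly lands on the cheap tiers:
tasks = ["summarize", "format", "execute_plan", "summarize", "plan_architecture"]
print([pick_model(t) for t in tasks])
```

This mirrors the web-fetch example the hosts give next: the cheap model does the token-heavy reading and hands a short summary up to the expensive one.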
When you use web fetch, for example, you give it a URL and say read this, and Haiku actually reads the HTML, summarizes it, and passes it back to Opus, which just reads the summary, which uses fewer tokens. So that's why you use it. Sonnet is good for when you have a plan that's already made: if you use plan mode with Opus, Sonnet is pretty good at executing that plan. And actually, there is a hidden mode in Claude Code: if you're in the terminal and you type /model opusplan, it will use Opus for plan mode and Sonnet for the execution. And that will reduce your token usage a lot; your limits are going to drain much slower. Another thing I didn't mention is that on skills, you can now also select the thinking level. So you can say thinking level high, medium, or low. And again, if it's a simple upload, et cetera, probably low or medium is enough, and that will use a lot fewer thinking tokens. So switching your model is usually going to be 90% of the job. And, you know, you can disconnect MCPs. Though actually, I think by default now, MCPs don't eat tokens. I'm not 100% sure, but I think so. If you're not sure, you can type /context in Claude Code. It will show you a little graph of how many tokens are eaten by your MCPs. But I think tool search is now turned on by default, which means it doesn't load the tools unless it needs them, which is already an optimization. There are a few other simple things you can do, like not using very long chats. A lot of people don't realize that when you're sending the tenth reply in a chat, you're essentially resending the entire chat history with every message from that point. So the further you go, the more context you're using, compounding with every turn. If you don't need that previous stuff, just start a new chat, and it will burn through context much, much slower. Yeah, and your CLAUDE.md is passed with every message that you send.
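To make the long-chat point concrete, here is a back-of-the-envelope model: if every turn resends a fixed CLAUDE.md plus the whole history so far, total input tokens grow roughly quadratically with the number of turns. All the numbers below are invented for illustration:

```python
# Rough model: each turn resends CLAUDE.md plus the entire chat so far.
def total_input_tokens(turns: int, tokens_per_turn: int = 500, claude_md: int = 2000) -> int:
    total = 0
    history = 0
    for _ in range(turns):
        total += claude_md + history + tokens_per_turn  # whole context resent each turn
        history += tokens_per_turn                      # and the history keeps growing
    return total

# A 30-turn chat costs far more than 3x a 10-turn chat, because the resent
# history term grows with the square of the turn count:
print(total_input_tokens(10), total_input_tokens(30))
```

Under these toy numbers, tripling the chat length roughly sextuples the input tokens, which is why starting a fresh chat (and keeping CLAUDE.md lean) stretches limits so much.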
So if your CLAUDE.md is very heavy, that is a shit-ton of tokens. You're better off having a very lean CLAUDE.md that links to documentation that gets read when it's needed. The point of the CLAUDE.md is almost to be a table of contents. And that's also how I build my skills. My SKILL.md files now are very small and they just link to documentation: when you need to do this, go read this, et cetera. It just reads things when it needs them and doesn't load the context when you don't need it. Because it's quite easy to bloat these things as well. I had some skills I had to refactor that had like 700 lines of instructions, and I went down to like 60 with a refactor. I just split things off. The whole purpose of skills was exactly this, though, because it doesn't load all of the context. Yeah. So it only uses a small amount of context at the beginning and then more as needed. So yeah, I think even a while ago these companies were thinking about the context crisis, for want of a better word. Because this is going to hit a lot of people quite quickly, and I don't think too many businesses are prepared for it, because I don't see too many people talking about how to use this stuff efficiently. Right now they're worried about adoption, but as costs rise, people start worrying about efficiency. It's not on people's minds yet, but if you start thinking about it as you're building things now, you can already be prepared for the future. That's why I've been re-optimizing a lot of the skills we're building, and I'm a lot more conscious of tokens. Now when I test my skills, I actually have a counter of how many tokens are used, and when I evaluate changes, I can see how it's optimized, because, again, that matters. I think eventually a lot of this stuff will be under the hood and happen automatically.
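As a sketch of that "table of contents" idea, here is a deliberately lean skill file that points at docs instead of inlining them. The skill name, the doc paths, and the instructions are hypothetical illustrations, not the actual skill being described:

```markdown
---
name: publish-article
description: Formats and uploads a finished article to the CMS
---

# Publish article

Keep this file short; load detail only when needed.

- Formatting rules: read `docs/formatting.md` when formatting the draft.
- Upload steps: read `docs/upload.md` only when you are ready to upload.
- Edge cases (images, redirects): read `docs/edge-cases.md` if they come up.
```

The body stays tiny on every invocation; the linked docs only enter the context window on the runs that actually need them.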
I'm thinking, like in a car, you're not controlling the exact amount of fuel going through. You just press eco mode, or it's on by default. I think it will be a consultant thing, actually. I do think there's going to be a whole consultant layer around the AI market, and people will pay for this as well. AI will be able to do it to some extent, but, you know, it's the same reason you're still listening to this podcast: AI could not give you all the information we're giving you right now. There's still a layer of experience, and that will remain, which is a good thing for humans. It shows you're not fully replaced yet, but you'll be able to build skills that help it, et cetera. Well, you're still driving it as a human, for now at least. And if you want to learn more about Claude Code, especially if you're not using it at the moment, head on over to authorityhacker.com forward slash learn Claude Code. We've put a page on our new site, and there's a video there showing how we use Claude Code in our business for things like sales, marketing, and operations, with some of the exact processes and outputs shown in the video. I know sometimes a lot of this can feel a bit theoretical on a podcast, so if you want to see what this stuff looks like in action, go over to authorityhacker.com forward slash learn Claude Code. There's a free video there you can check out. Just want to say, I show some of the skills we talked about in there, actually. So people will be able to see that. Awesome. Let's talk now about M-dash. When you sent this to me yesterday, on April 1st, I was like, this has to be an April Fool's, calling it M-dash, you know, with AI. Yeah, like, ChatGPT was infamous for putting thousands of em dashes in every time it wrote a sentence. But no, Cloudflare has built an open-source content management system, and they're calling it the spiritual successor to WordPress.
And it certainly looks a lot like WordPress. I saw a screenshot from Joost. I have it here, you can see it. Joost: I think that's how you pronounce his actual name in Dutch, it's spelled J-O-O-S-T. He runs Yoast, Y-O-A-S-T, the SEO plugin for WordPress. And it looked exactly like WordPress. I thought it was WordPress. Oh, that's his old WordPress site. Yeah, look at my screen. This is it. This is M-dash. If you're on the YouTube version, look at your screen right now. I mean, it looks like WordPress. Yeah. They're clearly going for that market and want to make sure people feel comfortable with it. But it's really nothing like WordPress at all. It's completely built from the ground up. It's serverless, which we'll talk about in just a minute. And everything you can do in there is designed so that AI can execute that thing for you. It's really built for agents to make websites. So what's your first impression of this? And are you planning to switch to it? First of all, the WordPress interface is like 20-plus years old, and they just did the exact same thing. If you look at competitors to WordPress, there are much better CMS interfaces. What's really nice is it's much faster. You can see it's built on something new: you don't get that little spinning wheel for five seconds when you click on things. It's still very basic to me, though. Even the text editor, this is far away from Gutenberg. Gutenberg is arguably a better text editor. This is kind of like old WordPress, but it's familiar at the same time. We talk to a lot of people and they're like, oh, we're used to WordPress, our clients are used to WordPress, they want to use WordPress. They don't want to change. And so for these people, I guess that's kind of a killer feature. It does feel very rough around the edges. And this is version 0.1.
But there are a lot of very good ideas under the hood too. So, would I build a new site on it? No. Reason being, a lot of these projects get abandoned eventually, and there's a risk this will be abandoned as well. At the same time, Cloudflare bought Astro, which it's built on, so they own the underlying platform. And their main competitor, Vercel, runs Next.js, which is their own website and app framework. So it's pretty evident Cloudflare wants to make Astro a direct competitor, and investing in the ecosystem makes sense. Provided there's ongoing effort going into this, it's going to be a banger. But there's still a large chance it could fail because of low adoption. I have quite a good feeling about this, to be honest. I think they really captured what's been missing with all this AI web design stuff: yes, okay, for the web designer or the agency making the website, it's useful, but the end user got no interface and no control, and it was scary. Yeah, but imagine, like, the homepage. There's no way this would fit. Our homepage, how do you fit it into this format here? You don't, you cannot, it will not work. This will work for simple blog posts with headlines and images and stuff. It's nice. They have like a... Which for 90% of people is good enough. But the flip side is that not having an interface where, you know, a junior employee can go in and edit some text easily without connecting a bunch of shit, that's a problem. But, for example, the junior employee would not be able to edit the homepage here, because of the structure. So you still need to kind of go through your... And they may not be editing that stuff very often. Like, yeah, fine. I'm just saying there are limitations.
So it will work for a blog, it will work for a simple content system, but then you still kind of need to... You can make sections, I've seen, as well. They have this about-the-author thing and you can edit it. But it's still extremely, extremely basic. So I think the editor part needs to be developed. But again, with AI, I don't think it would be very difficult to build something like Gutenberg. I think they could do it quite quickly if they wanted to. The thing with Gutenberg as well is there's a learning curve to it, and it creates a lot of technical friction. There's no simple way to edit a website with a complicated layout, though. That's the problem. Exactly. Unless you have an actual builder like Elementor or something where you click and edit, it doesn't exist. Here's the thing, though. All solutions up until this point were like: well, we can't do it exclusively for the normies, otherwise we'll have, like, Wix or something you can't do anything in. Or the other end, it's full vibe-coded, you do everything from Claude Code, and then all the normies, the idiots like me, can't figure out how to do shit. Hey, don't talk yourself down. You're doing it now. Well, I mean, we're all idiots according to you, as established earlier in this podcast. No way. Just because you can use the pro model. Come on, I'm joking. I'm joking. So the compromise is bad for everyone in a way. The Gutenberg compromise is bad for everyone: it's bad for the experienced people, it's bad for the basic people, but at least everyone can kind of do everything. What they're doing here is saying: well, forget about that middle ground. Let's make something that's perfect for the newbies, for the idiots.
And it's also perfect for the smart site builders and web designers, and they work in conjunction. So it's almost like there are two solutions, two ways to do everything, in one: the human aspect and the AI aspect. And honestly, I think they're on for an absolute banger with this. I can see why they've done it as well, because it's all built on this serverless architecture where, and correct me if I'm wrong here, as I understand it, they only serve you the site when you request it. So it's not like there's a computer always running, which is why it's so much cheaper, even if no one's coming to it, and which is why they'll give you the whole thing for free, basically. By the way, they have an importer for your website, so you can just put in your WordPress site URL and import your content. I don't know how well it works. But yeah, it says at the top there: import posts, pages, and custom post types from WordPress. I'm not sure if that includes the full site design and the databases and other stuff as well. But even if it's just importing all your WordPress content, that alone is a very nice thing to have. And it costs nothing to play around with this either. The one thing I want to push back on is this "oh, it's for normies" et cetera. Well, to install it, you need to clone a GitHub repository right now. I understand that, but you know, when I first installed WordPress, I had to fire up an FTP client to upload it, and you don't have to do that anymore. There will be a simpler install eventually, some kind of one-click setup, I agree. But for now, it's literally version 0.1, and the vision is great. Now let's see if it actually realizes itself. Now, for the serverless stuff, yeah, that's kind of the thing. WordPress sites are built on ancient internet principles, and that's the problem.
That's why you pay so much for WordPress hosting: you have to keep up a server that's strong enough to handle the maximum amount of traffic you could potentially get on your website, and it's always on whether someone is on it or not. Whereas with the Workers system on Cloudflare, it starts only when people open the page. Your website is effectively shut down if nobody is on it, and when someone requests a URL, the server starts extremely fast, basically within the load time, and serves the page. And they have a whole caching system, et cetera, to speed it up. The idea is that the power of your server scales up and down based on how much traffic is on your website right now, which in total uses a lot fewer resources than a WordPress site would. And as a result, even though you can't run it fully static, you could probably get like 60,000 to 70,000 views per month on a site like this and not pay anything for the hosting, and still be faster than premium WordPress hosting, which matters because I know a lot of people care about page speed. Well, this will be better, basically. Yeah, WordPress is always going to have this problem. It's 23 years old now. It's older than some people listening to this podcast right now. Imagine what your computer or your phone was like 23 years ago. I know, obviously, WordPress has been updated since then, but there are some architectural things that hold it back. And this is kind of a fresh start, a reimagining. So I'm very excited for this. There are also some other interesting things. For example, the problem with WordPress is security. Why? Because when you install a plugin, the plugin has access to everything on your website and can hack your database, can fuck it up, et cetera.
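The scale-to-zero claim can be sketched as a toy cost model: an always-on server charges a flat fee regardless of traffic, while per-request serverless pricing with a free allowance can land at zero for a modest site. Every number below is a made-up illustration, not real Cloudflare or hosting pricing:

```python
# Toy comparison: always-on VPS vs. per-request serverless billing.
# All figures are hypothetical illustrations, not real provider pricing.
monthly_views = 65_000
requests_per_view = 10            # page + assets + API calls, assumed

vps_monthly = 30.00               # flat fee, paid whether anyone visits or not

free_requests = 100_000 * 30      # e.g. a generous daily free allowance
price_per_million = 0.30          # hypothetical overage rate
requests = monthly_views * requests_per_view
billable = max(0, requests - free_requests)
serverless_monthly = billable / 1_000_000 * price_per_million

print(f"VPS:        ${vps_monthly:.2f}")
print(f"Serverless: ${serverless_monthly:.2f}")
```

With these assumed numbers, 65K monthly views generate well under the free allowance, so the serverless bill is $0.00 while the VPS costs its flat fee either way. That gap is the whole argument for scale-to-zero hosting.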
Whereas the way they run plugins here, each one is almost like its own worker, its own little server, and it only has an indirect interface with the website. That way a plugin can't poison your website, which increases security massively. The other built-in thing that's actually pretty cool is a new protocol, x402 I think it's called, for chatbots to pay for using the content on your site. It's kind of automatic. Now, I don't think OpenAI is paying for any content right now, but if the internet were ready to start actually paying content creators for their content being used in these things, and that became a standard, then it's already built in. As a content creator, you'd be able to automatically collect revenue for your content being used by LLM companies, which is future-proof, basically. Of course, if you're Cloudflare, all this is built on their infrastructure. So if it takes off, a lot more people move off WordPress hosting, and a lot of that hosting money flows through to AWS and the like anyway, because those hosts don't all run their own permanent servers, while Cloudflare is pretty much all stateless servers. They're going after a market where a lot of people pay for servers those hosts don't even run, and they want to eat that. So if you're WP Engine, for example... honestly, I think for Cloudflare it's a slam dunk. I think they have everything they need to take over. Of course, there's a critical-mass factor: it needs adoption, it needs people to support it. But Joost, the Yoast SEO creator, is already moving his site towards this. So that's one. Yeah, and it has built-in SEO as well and all that stuff. You don't need plugins for that. You can even manage your redirects from it.
You can, and it's all Cloudflare behind the scenes, so it's extremely efficient and fast. Yeah, it could be excellent. It needs refinement. The good news is, if you're building a website using Astro, like we teach in the AI Accelerator, you can probably migrate to this very, very easily. Claude Code can probably do it for you, or Codex. And that will give you a familiar feel for how you manage your content. At the same time, the codebase is fully editable by your chatbot, so you'll be able to vibe code away, unlike on WordPress. WordPress doesn't really allow you to do that. Yeah, it's worth pointing out as well: Astro is the same framework we teach in the AI Accelerator, and it's what we've used to build the new version of authorityhacker.com, which is live now, so you can go check that out. And specifically, you can go to authorityhacker.com forward slash AI accelerator if you want to learn more about our courses and community inside the Accelerator, including the new course we released last week on vibe coding a site exactly like this, using this framework. Yeah, it's super impressive what you can do with zero design skills now. You don't need a designer, you don't need a developer, to build a really nice-looking site. And that's a pretty big change compared to even six months ago. So yeah, check that out: authorityhacker.com forward slash AI accelerator. Let's talk now, though, about ads. A couple of months ago, at the start of February, OpenAI rolled out ads. And there was some controversy. Claude paid for a bunch of Super Bowl ads that were poking fun at it, implying indirectly that OpenAI was going to mess up the replies. They knew that wasn't the case; they knew the ads were in separate, disclosed ad sections. But it caused some friction, it caused some controversy. We're now two months into it.
And OpenAI is saying that it has $100 million in annual recurring revenue from 600 advertisers. I don't know exactly how that works. I think they have contracts, that's what it implies, or maybe they've taken the two months and annualized it or something. Yeah, it's really not going to be annual recurring contracts locked in for multiple years, because this is all new. They've thrown out this $100 million number to say, look, there's a lot of money going into this. What's really important for small business owners to know, though, is that this month they're launching self-serve, which means that any business owner can go and run ads inside ChatGPT now. I believe it's just in the US, Canada, Australia, and New Zealand. I don't think Europe is getting ads yet, though I could be wrong on that. They also released some numbers, which I think were quite interesting. We'd already heard about the $60 CPMs, the cost per 1,000 impressions being $60, which is about three times higher than something like display ads. But they've now released the click-through rate data: 0.91% CTR, which means just under one in 100 people are clicking on them. If you compare that to a Google ad at the top of a search query, those get about 6.4%, so quite a bit more. But if you compare it to a display ad you'd see on a website, that's much lower, about 0.35%. So it's just under triple the CTR of a display ad, which is not nothing. So you could treat this like a display ad, but I think it has more of the properties of a Google ad, because they're matching the intent much better: you're actually typing stuff, and they have context about what you're looking for. And I think that's really the angle they're trying to play up here. So it's going to be really interesting to see.
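Those figures can be folded into a single comparable number, the effective cost per click: CPC = CPM / (1000 × CTR). The ChatGPT CPM and both CTRs are the quoted figures; the $20 display CPM is only inferred from the "three times higher" remark, not a quoted number:

```python
def effective_cpc(cpm: float, ctr: float) -> float:
    """Cost per click, given cost per 1,000 impressions and click-through rate."""
    return cpm / (1000 * ctr)

chatgpt = effective_cpc(60.0, 0.0091)   # $60 CPM, 0.91% CTR (quoted)
display = effective_cpc(20.0, 0.0035)   # ~$20 CPM (inferred), 0.35% CTR (quoted)

print(f"ChatGPT ad: ${chatgpt:.2f} per click")  # -> ChatGPT ad: $6.59 per click
print(f"Display ad: ${display:.2f} per click")  # -> Display ad: $5.71 per click
```

On these numbers, a ChatGPT ad click costs only slightly more than a display click despite the triple CPM, because the higher CTR absorbs most of the premium. Whether that premium is worth it then comes down to intent quality, which is the hosts' next point.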
And as always, whenever any new kind of ad inventory gets released, the first users get a bunch of cheap inventory. So it pays to be an early adopter sometimes. I think that's the biggest opportunity of the year for marketers, actually: to figure this out. There will be lots of demand for it, it will be much more predictable than trying to rank organically, et cetera, and that's going to work. Now, the 0.35% on display ads from Google, you need to put an asterisk on that, because the website decides where the ad displays, and quite often ads are poorly placed. Sometimes they're not even in the viewport. They load a little bit before you scroll to them, so they count as served without really being seen. It depends on the site. If you go on any tabloid newspaper these days, you get about 6,000 ads before you see any content. I understand, but that average covers all the ads, and a lot of that inventory is poorly placed. I think 0.91% CTR is okay, but unless they really charge a lot per impression, I don't even think it pays for the costs, basically. I don't think it's very good. And that might explain why OpenAI is still pivoting towards enterprise, even though they have this ad play running for B2C. I still think it will help dampen, will reduce, the cost to them. And they hope they can improve it, probably. They'll change the layout slowly to increase CTRs and whatever. I actually want to show the layout for a second, because I don't think we've shown it here. We can't get it in the UK, but here's an article on Wired where they shared it. And it's pretty separated, right? You've got the end of your prompt here, then the icons, which almost act as a little barrier, and then there's this Uber ad in there.
And it's not super in your face, but you certainly notice it. It's got an image and stuff in there, so it's a bit more visual than, say, a Google search result ad. I think they can improve the CTR massively by moving the buttons below the ad, and moving the "Uber, sponsored" label below the ad as well. It's fine, they can still show it below. It would almost feel like part of the answer, while still marked as sponsored. The point is, they can probably double the CTR; they have lots of levers to do that. Here's another one, for Target. Same thing. There's a really small image of... what is that? Like a toothpick? No, a brooch? I don't even know what that is. A crochet hook set, that's what it says. What's that for? It's for knitting, you know. Ah, okay. Right. Yeah, not something I'm super familiar with. It looks like a toothbrush or something. But anyway, that's kind of part of the problem, right? I'm looking at this on a monitor and I can barely see what that is. They need to make it bigger. On my phone, that would be tiny. They need to make those things bigger. So they have room to improve; this is the clean version. They'll probably at least cover the API costs with this. Yeah. They always start with ads which are not too in your face, and then they're like, oh, we can make more money if we just make them more in your face, and they dial that up. Yeah, basically. So that's coming. And I think it's an opportunity. If you're a marketer looking for a new direction or a new service: as soon as this stuff rolls out, show these screenshots to people and say, hey, AI is the next big thing, you want your business to show up there, I'll set that up for you, I'm specializing in that. It's going to be a killer.
And look, we talk a lot about AI SEO, or GEO, and how we're sometimes dubious about the actual impact of it. But whatever our opinion, every business is asking for it at the moment, so every SEO agency and marketing agency is trying to serve it to them. I think we're in for that same wave here on the ad side this year, and I really haven't seen any ad agencies talking about this. It really feels like a wave not too many people are getting ahead of. It's much more predictable, too. You pay per thousand impressions, and you won't be charged if your ad doesn't show. So it's much easier to get results for people, as long as they can afford the price. Yeah, I think it's going to be a banger. I think there will also be a new craft of advertising here, making it feel native, et cetera. You're going to have to think a little bit about how these chatbots work and what the experience is for the user, and there will be all these competitive edges you can build. I'm always a bit dubious about companies that advertise certain types of products, though. Okay, if it's Uber or Target or something like that, it's not too bad. But I was looking at the list of companies, or rather product types, being advertised that the Wired editor had found. And it's simple stuff like dog food, travel, printers, hotels, productivity software, credit cards, streaming services. And I'm like, okay, fine, that's kind of what I expected. But there were a couple in there where I thought, oh, that's kind of interesting. And one of them was skincare articles. Now, that really got my spidey sense tingling. Like a feed post? Well, I don't know. We couldn't see the exact one. But a skincare article, why would a company advertise content about it? Is it because... It's a presell landing page.
It looks like an article: hey, three ways to make your skin look 20 years younger, there's a picture of a doctor on it, then you sign up for some subscription to a diet pill or something, and it's hard to get out of. So I'm like, surely they're going to prevent that stuff. You know, for many years Facebook, Meta, was really, really against any kind of dodgy advertising. And then slowly they realized those people pay more money, and they could make more money off the back of it, so they loosened their restrictions a little bit. So it'll be interesting to see where... I have a different take on this. Let me roll this out. My take is that OpenAI is building this super app right now, right? They say they're building some kind of new app. In my opinion, ChatGPT is going to stay the B2C app. That's not moving; it's too difficult to move users off it. And they're going to build a new app for professionals that brings together some features of ChatGPT with Codex, with the browser, for the more advanced users of their subscriptions. And they'll probably split the subscriptions too. My guess, and this is all speculation, by the way, is that ChatGPT is going to go full trash tier, max monetization, because these companies have understood that the B2C market is not very lucrative anyway. So that's a way for them to separate out a high-quality professional service on one side, and on the other side a monetize-it-however-we-want play on the success of ChatGPT, which is still massive. ChatGPT will have the trashy ads and all of that, and the professional app will have no ads and be subscription-only, basically. They're going to split these two experiences. Because even Gemini, for example, for sure the experience has gone down quite a bit.
Like, the big B2C chat apps are trying to make money and they haven't figured it out. They need to lower their costs and increase their revenue, and that's probably going to mean lower-quality models, combined with a lot of ads, and a we-don't-give-a-shit-how-we-make-money attitude. And then the professional market gets the premium experience: SOTA models, no ads, you pay a bunch of money per month, basically. They need to split these two. I can see this happening, because 80% of B2C users might not even have heard of Claude or Gemini. ChatGPT is AI to them, and they're kind of locked in, whether one model is slightly better than the other or not. Business users are much more sensitive about those things, about those features. So it makes sense to split them. Although splitting isn't really making a super app; to me, a super app is everything in one. Yeah, yeah. But there will be a professional app and a B2C app. And the reason I say that as well is that it's been leaked on social media, in error messages. One of them was, for example: "You cannot use ChatGPT, you are on a Codex plan." That suggests this is splitting up. So I can imagine ChatGPT Plus will still exist, filled with ads if you don't pay, and they'll have the $9 plan as well, something like that. It will be the three tiers. And you'll have Codex, which will start at $20 and go all the way to $200, and sort of raise prices in step with Anthropic. And the consumer app, I'm telling you, they will advertise a lot of shit to grow their revenue for the IPO. That's my guess. All right, I want to finish up by talking about something Gemini has done recently. They are making it super easy for you to port all of your ChatGPT or Claude data into Gemini, basically reducing the friction of switching.
Friction being things like your chat history, your memories, anything you associate with "okay, yeah, this is my workspace that I've used before." So, do you want to tell us a bit about how this works? It's just a prompt. They copied what Claude did. Claude did it during the Department of War drama with OpenAI and all of that. They gave you a prompt that you would paste into ChatGPT, ChatGPT would spit out an answer with most of your memories, and you would just paste that back into Claude, and it would update its memory based on that. And it would feel like there's some continuity between your ChatGPT experience and your Claude experience. Well, Gemini decided to vibe code a copy of that. Basically, they probably took screenshots and said, build me this. So it's just a prompt, and you just send it. They also allow you to import chat history now. And remember that Gemini has this personal intelligence feature, in the US only, where it uses your Gmail, your Google Photos, et cetera. So the idea is that if you also import your chat history, it will feel a lot more personal. Gemini in Europe, the way we have it, has almost no personalization options. You can just save things manually if you want. But in the US, yeah, that's going to feel a lot more personal and better. And again, Gemini is also the default voice assistant on Android, which, funnily enough, is not much used in the US but is heavily used outside it. So it makes sense for a lot of people. For B2C, I think it makes sense to switch. I don't think ChatGPT, if you don't pay for it, is better than Gemini. If you pay, it's debatable. I would say Gemini has gone down a lot as a chatbot, though. Yeah, I've basically stopped using Gemini for like two, three months. Gemini 3.1 Pro was not a good model.
It's just a little bit more rigid. It feels like a last-gen model compared to Opus or even GPT 5.4, which are really good. Their image model is obviously still good, but the rest seems to be falling behind a little, to be honest. Gemini 3 Flash is very good when you start paying API prices, when you use the API. The value of Gemini 3 Flash is excellent: it's a little bit below Sonnet, but it's three times cheaper. So it's quite handy if you want to do summaries, or process lots of data without spending a ton of money. Actually, one of our members today was like, oh, I'm paying like 50 bucks per day on Sonnet for all my stuff. I'm like, switch to Gemini 3 Flash and you'll probably pay like seven bucks. So yeah, the Flash models are great value from Google, but the frontier models are always a bit worse. It's just not there right now for coding; don't bother. The Gemini CLI sucks. Okay, awesome. Any final words of wisdom before we wrap up? No, just good luck to everyone who has to upgrade their plan, as we predicted, because the limits on all these things are going down. We can complain all we want; they're still losing a lot of money on us. So the reality is, we should feel good about it, and you will probably miss today's limits by this time next year. So let's just enjoy it while it lasts. The good news is the frontier is going to get better anyway, so you'll get more for your money. But yeah, get started. And it's another good reason to subscribe to this podcast, because as this develops over the next year or so, we're going to keep producing a lot of content on how to use AI in a more token-efficient way. So if you're a small business owner, definitely subscribe so you don't miss out on that content. Thanks, everyone, for listening to this episode of the show.
Please do us a favor: stop right now and leave us a review on whichever podcast app you're listening to this episode on. Those five-star reviews really help drive engagement. It's one of the only things that helps drive podcast engagement on those platforms these days, so it really does help us out massively. And if you know someone who might like this podcast, do us a favor and send it to them now. We really appreciate that. We'll see you next week for another episode of the Authority Hacker podcast.