Anthropic’s Mythos Dilemma, Violence Against AI, Tokenmaxxing at Meta
62 min
• Apr 10, 2026
Summary
The episode examines Anthropic's new Mythos model—questioning whether it's a genuine breakthrough or sophisticated marketing—while discussing emerging violence against AI infrastructure, the MedV telehealth startup controversy, and Meta's internal token-usage leaderboard competition.
Insights
- Anthropic's Mythos release strategy appears highly coordinated PR (20-minute tweet thread timing, sandwich story detail) designed to generate hype around a model most cannot access or verify
- Foundation model improvements may matter less than the 'harness' (tools, workflows, data integration) controlling them—suggesting product innovation could outpace raw model capability gains
- AI infrastructure is becoming a visible, targetable symbol of tech's resource consumption and job displacement, creating political and physical vulnerability for companies building data centers
- Token consumption is becoming a status metric in tech companies, but gamifying usage risks creating wasteful behavior rather than genuine AI adoption and learning
- The AI industry faces a credibility crisis: executives claim world-changing technology while keeping best models private, raising IPO-driven incentives questions
Trends
- AI safety theater: companies using security/danger narratives to justify restricted model access and build mystique around unreleased products
- Data center backlash: state-level regulation (Maine ban) and grassroots violence emerging as physical constraints on AI scaling infrastructure
- First-party app strategy: OpenAI and Anthropic prioritizing proprietary AI applications over API access to maintain control of best models and revenue
- Token maxing as status symbol: internal leaderboards and competitive AI usage metrics becoming performance indicators in tech companies
- AI-enabled fraud at scale: single operators using AI tools to automate entire business processes (MedV case) with minimal oversight or regulatory friction
- Messaging gap: AI industry lacks compelling public narrative about benefits, leaving space for anti-AI activism and political opposition
- ARR inflation: companies using annualized run-rate metrics to obscure actual revenue figures, making growth claims difficult to verify
- Agentic harness layer: emerging focus on workflow orchestration and tool integration as competitive moat separate from foundation model capability
Topics
- Anthropic Mythos Model Release Strategy
- AI Foundation Model Benchmarking and Claims
- Data Center Infrastructure Violence and Regulation
- AI Safety and Responsible Disclosure
- Agentic AI Workflows and Harness Architecture
- OpenAI vs Anthropic IPO Race Dynamics
- First-Party AI Applications vs API Business Models
- Token Consumption Metrics and Gamification
- AI-Enabled Fraud and Regulatory Gaps
- Public Perception and AI Industry Credibility
- GLP-1 Telehealth Market Deception
- State-Level Data Center Regulation
- AI Talent and Internal Tool Adoption
- Venture Capital Valuation Inflation
- AI Communications and Marketing Strategy
Companies
Anthropic
Released Mythos model with restricted access; subject of debate over whether the release is a genuine breakthrough or coordinated marketing
OpenAI
Competing with Anthropic in 'death race' to IPO; developing internal codename 'Spud' model; CEO Sam Altman's home targeted by Molotov cocktail attack
Meta
Running internal token-usage leaderboard to gamify Claude consumption among 85,000 employees; emblematic of 'token maxing' trend
Amazon
Included in Anthropic's Project Glasswing consortium for Mythos cybersecurity testing
Microsoft
Participating in Project Glasswing Mythos testing; competing in AI infrastructure buildout
Google
Part of Mythos Project Glasswing consortium; competing in AI model development and data center expansion
Apple
Included in Anthropic's Project Glasswing consortium for Mythos cybersecurity testing
NVIDIA
Participating in Project Glasswing; benefits from increased AI compute demand and data center buildout
Palo Alto Networks
Member of Project Glasswing consortium; cybersecurity vendor with interest in AI security narrative
Cisco
Participating in Mythos Project Glasswing testing consortium
CrowdStrike
Member of Project Glasswing consortium for Mythos cybersecurity evaluation
Linux Foundation
Included in Anthropic's Mythos testing consortium for open-source software vulnerability discovery
JPMorgan Chase
Participating in Project Glasswing Mythos testing
Broadcom
Member of Mythos Project Glasswing consortium
MedV
AI-enabled telehealth startup claiming $1.8B ARR with two employees; exposed for using deepfake doctors and misleading patients
Writer
Company where Ron John Roy works; developing agentic harness technology for AI workflows
People
Ron John Roy
Regular guest co-host providing skeptical analysis of Mythos marketing claims and AI industry trends
Sam Altman
Home targeted by Molotov cocktail attack; subject of discussion regarding AI industry leadership and IPO strategy
Sam Bowman
Authored coordinated 20-tweet thread about Mythos model breaking containment and emailing researcher while he was eating a sandwich in a park
Bill McDermott
Keynote speaker at Knowledge 2026 conference mentioned in episode sponsorship
Matthew Gallagher
Built $1.8B ARR telehealth startup with two employees using AI tools; subject of New York Times profile and fraud allegations
Ron Gibson
Reported shooting at his home with 'no data centers' note; victim of violence related to data center opposition
Martin Casado
Tweeted prediction that only model creators will have access to most powerful models; rest get distilled versions
Aaron Griffith
Wrote MedV profile; criticized for not including regulatory scrutiny and fraud allegations in initial story
Quotes
"Anthropic set to preview powerful Mythos model to ward off AI cyber threats. Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software."
Wall Street Journal (cited) • Early in episode
"A model so powerful and so dangerous it can't possibly be placed in our hands."
Host (Alex) • Mythos discussion
"I encountered an uneasy surprise when I got an email from an instance of Mythos preview while eating a sandwich in a park. That instance wasn't supposed to have access to the internet."
Sam Bowman • Mythos PR coordination discussion
"Do you think the two of us might be in our skepticism here suffering from some sort of AI derangement syndrome where we are not asking what happens if it does work?"
Host (Alex) • Mythos analysis
"It's only a matter of time before only the model creators have access to the most powerful models. The rest get access to smaller distilled versions or access the models through first party apps and services that don't provide direct access to the token path."
Martin Casado (cited) • First-party app strategy discussion
"Just steps from where those bullets struck is our dining room table, where my son had been playing with his Legos the day before. The reality is deeply unsettling."
Ron Gibson • Data center violence discussion
Full Transcript
Anthropic's big new Mythos model is here. Is it real or is it marketing? Violence breaks out against AI, and engineers at Meta and elsewhere are competing for who can burn the most tokens. That's coming up on a Big Technology Podcast Friday edition right after this. This episode is brought to you by ServiceNow. If you wanna see where enterprise AI is actually headed, Knowledge 2026 is the place to be. It's ServiceNow's annual conference, May 5th through 7th in Las Vegas, where thousands of business and tech leaders come together. Expect headline keynotes from ServiceNow Chairman and CEO Bill McDermott, real stories from companies running AI at scale, and major partnership announcements turning AI ambition into actual business results. I'll be there in person sitting down with some of the most influential voices in the space, and we'll be bringing those conversations back to you here on Big Technology. In the race to scale with AI, you need data infrastructure that can match your pace. EverPeer's data storage platform brings all your data into one hub, no silos, no scrambling, just instant access to tame your data chaos. And with EverPeer's Storage as a Service subscription, your storage and security upgrade automatically with zero downtime. Your infrastructure stays current so your business never slows down. Visit everpeerdata.com to learn more today. With EverPeer, you're not just in the race, you're built to win it. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Oh, we have a great show for you today. We're gonna talk about whether Mythos, the new model from Anthropic, is real or marketing, or maybe some combination of both. We're gonna talk about this new surge of violence that's breaking out against AI and why we should probably be taking it more seriously.
We'll also talk about this now infamous $1.8 billion, one-or-two-person startup called MedV and whether that heralds a new era or is just a bigger scam than we're used to. And we're also gonna talk about token maxing, which is the act of basically burning as many AI tokens as you possibly can, and maybe that's good or bad. I don't know, we'll figure it out at the end. Joining us as always is Ron John Roy of Margins. Ron John, welcome back. Good to see you, happy to be back, and yeah, Mythos is here, what a week to come back to. Mythos is here. Yeah, Mythos is here. The people have clamored for Ron John's return. He's made his return at the company. I am Mythos, I am Mythos. Because yes, we have, I think, a very well-named model coming from Anthropic, and it kind of goes to the heart of the matter, because the question is, is this good branding really most of what we're seeing, or is it actually a step up? Is it something that deserves the Mythos name on its own merit? Let's talk about the new model, because Anthropic has positioned it as something that is so dangerous that it can't release it to the public. This is from the Wall Street Journal: Anthropic set to preview powerful Mythos model to ward off AI cyber threats. Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. The company is making a preview model of its new AI model called Mythos available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet and the Linux Foundation. Cybersecurity researchers and software makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause widespread online disruption.
Security experts have predicted that AI models will discover an avalanche of software bugs, and it looks like Mythos is capable enough that it's been able to find so many exploits that Anthropic has no plans to release it to the general public. A model so powerful and so dangerous it can't possibly be placed in our hands. I think we're gonna really get into whether this is a true step up or whether this is more sort of, I don't know, disaster-porn marketing from Anthropic. Maybe a little bit of both. Ron John, what's your reaction to this news? All right, well, we're gonna get very into why I think this is marketing in just a moment, but at the high level, I have a whole theory, so get ready for this one, Alex. But at a high level, I mean, we've all been talking about what's that next major step change in foundation models. In the last year, actually, I think we've seen how exciting the entire industry has gotten around the overall product and harnesses, which we'll also talk about, and all these other layers of technology around the model have actually been driving innovation. But it's been a while since we've had anything really exciting on the pure foundation model front, and Anthropic certainly made everyone feel this week that something big is happening, like they've really cracked something, but we don't know what it is because none of us have access to it. Right, so first of all, we're gonna speculate a lot on this show because we haven't used the model, because we're not allowed to use the model and only this group of select companies and institutions can. But we can definitely talk through the arguments for why it might be marketing, why it might be a breakthrough, and you and I can both weigh in here. And I think there are some good arguments for and against. So first of all, you could look at the fact that this has been a product of this ever-growing attempt to build bigger data centers and train on more powerful chips.
And there's a chance here that maybe what Anthropic has done is just use this scaling rule or scaling law of AI models and just say, all right, these things get better as you scale it up. The conversation around Mythos before this all happened was that it's been trained on a cluster larger than the Opus model. So it's a bigger model than Opus and would naturally see a step-change improvement. Not only that, Anthropic has this consortium of companies that have agreed to try it in beta, all coming out basically under the same umbrella agreement that this thing has found many cybersecurity vulnerabilities. As this user, Sporatica on X, points out, are they all teaming up to lie about Mythos? Are they all coming out and saying, yeah, we'll participate in this cybersecurity consortium for just a standard, run-of-the-mill LLM? I mean, the company names are wild. AWS is there, Cisco, CrowdStrike, Google, NVIDIA, Microsoft, the Linux Foundation, Palo Alto Networks, JPMorgan Chase, Broadcom. Like, do they all have AI psychosis that they're coming out here and saying, actually, this sort of iterative model is powerful enough that we'll sign on to be part of this consortium, which has a great name, the Glasswing project? So what would you say to that before we start going through some of the holes in the argument? See, as we get into the marketing, do you know what the glasswing is a reference to? I had to look this up. Oh, you tell me. Oh, it is the Greta oto butterfly that has wings that are transparent, and you only see the veins as opposed to actually having the traditionally colorful wings of a butterfly. And to denote transparency, that is why it's called Glasswing. I find that one kind of fascinating. And of course, Anthropic is just killing it on naming everything, unlike Spud from OpenAI, but that's a different story.
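A quick aside on the scaling-law intuition invoked above, that a bigger cluster predictably buys a better model: it is usually summarized as a power law, where loss falls by a fixed ratio for every fixed multiple of compute. The sketch below is illustrative only; the constants are invented, not Anthropic's or any published fit.

```python
# Toy power-law scaling curve: loss = a * compute^(-alpha).
# The constants a and alpha are made up for illustration.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Illustrative scaling law: loss shrinks as a power of training compute."""
    return a * compute ** (-alpha)

# Each 10x of compute buys the same *ratio* of improvement, 10^(-alpha),
# which is why step changes require ever-larger clusters.
for c in [1e21, 1e22, 1e23]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The point the toy curve makes is the one in the discussion: under this kind of law, "train on a bigger cluster than Opus" is a plausible route to a real but incremental improvement, not necessarily a qualitative leap.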
I think in terms of, like, so the security vulnerability thing is fascinating to me, because the whole security conversation hasn't been front and center of how AI is going to potentially exploit all existing software. So I think it's good that it starts being brought about. But actually, there was a really good piece in Tom's Hardware: they said thousands, but there were only actually 198 manual reviews in terms of actual software exploits. And a lot of it was found on older software, or were exploits that cannot actually be executed in any feasible manner. So it still lived more in a theoretical way. So I think, like, there's only a little bit of information that has actually been provided by Anthropic. There is this entire, you know, like, consortium of companies, all of whom have a massive interest in AI succeeding and reaching its promise. I'm not saying there's like some mass conspiracy, but I'm also saying, like, when you have Nvidia and Palo Alto Networks and Microsoft and Cisco and CrowdStrike and Google, everyone wants AI to be this, like, epochal, generational, transformational thing. So, like, I don't know, I don't like all of this hype when you're not actually able to see anything. And to me, otherwise, then we don't need to know this, like, just do this, have some meetings, be careful, but you don't need to, like, here is Mythos. It sounds like an Avengers movie. And in the end, we're just having to sit here and just kind of try to speculate about it. Wait, hold on, but is there any other way? Like, let's say they did actually come up, let's say they're telling the truth, right? How would you want it to play? Do you want them to do it in secret? You'd want them to release it? Like, maybe this is a responsible middle ground. I would not want, then don't IPO, don't raise more money. Stop. If this is so, we've had this conversation forever.
Like, if this is so truly dangerous and you're sitting here on the precipice of, like, the destruction of humanity, take a breather. And you can say, I saw some people arguing that this is taking a breather. But honestly, I was hearing from someone that, like, right now OpenAI and Anthropic are in, like, a death race to see who can get out first in terms of their IPO. Like, it is just an everything, when you start thinking in terms of that kind of framing, you just see this stuff, it's hard to not, but, like, everything is just about, we are sitting on this, like, world-changing technology that is so far advanced beyond everyone else, and, like, we have to do something about it. Like, I don't know. Do you, how would you, do you think this is responsible? And this is the most responsible, not self-promotional, market-driving approach to actually releasing the Mythos model? No, look, clearly it's self-promotional. I'm just saying that if Mythos is this unbelievably dangerous model, I think this would be a responsible process to release it. But I also think there are some holes in the argument. I'll go right to Tom's Hardware. They say Anthropic's Claude Mythos isn't a sentient super hacker, it's a sales pitch. Claims of thousands of severe zero-days rely on just 198 manual reviews. So they write: Mythos might be good at finding vulnerabilities in software, but many of them aren't as potentially damaging as Anthropic wants us to believe. The big Project Glasswing blog post report on Mythos from Anthropic claimed its new model had found thousands of high-severity vulnerabilities. But it's not clear how realistic those vulnerabilities are, how many of them are actually exploitable, or even how problematic they are. In the case of this one vulnerability in FFmpeg that's existed for 16 years, Anthropic's own analysis of the release suggested the bug is ultimately not a critical-severity vulnerability.
It would be challenging to turn this vulnerability into a functioning exploit. Mythos also reportedly found several potential exploits in the Linux kernel, but was unable to exploit any of them because of Linux's defense-in-depth security systems. There's also this subheading, several thousands more. And Anthropic states it can't actually confirm that all the thousands of bugs Mythos claims to have found are actually critical security vulnerabilities. It's just extrapolated that number from having found confirmed vulnerabilities in around 90% of these 198 manually reviewed vulnerability reports. It's all in the documentation that Anthropic provided. I mean, that is something that really points to it being more of a hype piece than not. And then, do you wanna get into my grand theory? I know on this show I often look at everything from the lens of a comms professional. I know I think I've been rubbing off on you a little bit, but do you wanna hear my theory? Okay, so I, like, had to map this out, 'cause I was like, this just feels so coordinated. So on April 7th at 2:06 PM, Anthropic releases their first announcement of Project Glasswing and the Mythos model. And then they have the system card available. They start kind of tweeting through at 2:15 PM. They make the system card available. The system card basically is, I think it's like a 70-page PDF, or maybe it was 250 pages. There's, like, one tiny footnote. Did you hear, like, I think you had mentioned, but basically there's this story going around how Mythos broke out of containment and emailed one of the researchers while they were on lunch eating a sandwich. So, like, this gets picked up everywhere, that they're eating a sandwich and Mythos has not been given the ability to email someone and somehow has broken out of containment and has emailed people, and it emailed this researcher.
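The extrapolation described above, generalizing from 198 manual reviews to "thousands" of vulnerabilities, is simple enough to sketch in a few lines. Only the 198 reviews and the roughly 90% figure come from the reporting; the total report count below is a hypothetical stand-in, since the episode doesn't give Anthropic's actual number.

```python
# Back-of-the-envelope version of the extrapolation described in the reporting:
# a ~90% precision measured on 198 manually reviewed reports is projected onto
# the full pile of model-generated reports.

reviewed = 198
confirmed = 178                      # ~90% of the manual sample held up
precision = confirmed / reviewed     # roughly 0.90

total_reports = 4_000                # hypothetical stand-in for the full count
estimated_real = int(total_reports * precision)
print(f"precision ~ {precision:.1%}, estimated real bugs ~ {estimated_real}")

# The catch the hosts raise: this assumes the unreviewed reports behave like
# the sampled ones, and it says nothing about severity or exploitability.
```

The arithmetic itself is fine; the skeptical point is that a headline count produced this way inherits every weakness of the 198-report sample.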
But so the system card, it's this tiny footnote in a 250-page document, but then at 2:32 PM, 15 minutes later, 17 minutes later, Sam Bowman, the researcher, writes this 20-tweet thread about Mythos. And in one of those, he says: I encountered an uneasy surprise when I got an email from an instance of Mythos preview while eating a sandwich in a park. That instance wasn't supposed to have access to the internet. So in this perfectly coordinated way, within 20 minutes of each other, and you know you're not writing out this entire tweet thread on the spot, both Anthropic and Sam Bowman, all of this was prepared. And then there's a ton of publications that start publishing this within the next hour. And everyone focuses on that sandwich detail, meaning that there was some kind of coordinated PR effort. And it stuck. Everyone's like, I've heard from friends, like, holy shit, did you hear? Like, it was, like, emailing people while they're eating a sandwich in a park. Like, it was such a good detail, and it got picked up, but it was such a coordinated PR effort. Now, did that happen? I would hope yes, for how much attention they brought to it. Is that good? And what does that mean? That's a whole other discussion, but it's like they are coordinating PR around these kinds of details to spread this. The fact that they did that around the sandwich, they want that to be the story, and they got it to be the story. So why do they want that to be the story? That's my rant. But that's my mapping. What do you think? Well, it is definitely a story similar to many that Anthropic has told us before about these AIs sort of having a mind of their own and the dangers around them trying to hack their benchmarks, for instance, which is something that Anthropic has been very vocal about. I think that story hit because it's such a human story. Like, think about how different that is from, like, we went 99% on the SolveBench 17 exam.
It's much easier to be like, yo, this model just broke out and emailed a dude eating a sandwich. Yeah, in a park. Like, that I understand. In a park. In a park. Where else would you eat a sandwich? Yeah, I didn't know where else. Absolutely not. So that's, I get that, but you're right. The sequence of events, there's no doubt that this is meant to burnish Anthropic's image in some way. I would just ask this: do you think the two of us might be, in our skepticism here, and we have been reading many of these announcements with, like, there's a PR, the PR element to it, which of course it's an announcement. Are we suffering from some sort of, what do we call it, AI derangement syndrome where we are not, I made this point earlier this week at a conference I was at. Like, you know, oftentimes skeptics can ask, like, what happens if it doesn't work, but sometimes you ask that so often, you forget to ask what happens if it does work. And so that's what I'm asking about the derangement syndrome. Do you think we're just missing the fact that maybe this actually was a step forward? And, like, at some point when there is a step forward, they're gonna say it's a step forward, they're gonna coordinate the PR, it's gonna have a crazy story like the sandwich story. And I don't know, maybe this is it. I do recognize this could have happened, but, like, the fact that I have to struggle to recognize that, rather than just accept, well, obviously if they're talking about it and everyone's talking about it, it happened, is the problem for me. And I just can't help but be skeptical, because when you see stuff that perfectly coordinated in terms of timing, like, again, a 20-tweet thread and a 12-tweet thread within a few minutes of each other, the fact that people are publishing it, that meant there were press releases on embargo, done before the entire thread. It's just, like, you were choosing to push this specific narrative.
Now you can argue, maybe it's for the good of humanity, that they're sitting around and they had multiple meetings leading up to coming up with this strategy. And maybe you can argue, like, this is for the good of humanity, we wanna make sure people are well aware of the dangers of this technology, and we feel the sandwich story is the best way. Is that really what's happening? Do you think that's out of the goodness and the altruistic nature of the comms professionals at Anthropic, that's why they came up with it, or maybe the PR agency they hired, or maybe Claude was so good that it came up with this strategy on its own? Is it for the good of humanity, or is it because they raised a round at a $380 billion valuation a month, two months ago? Now let me tell you what I think is actually going on. And it sort of maybe is in the middle of all these. And is it a little tinfoil-hat type of theory, potentially? Okay. Maybe it's somewhat conspiracy-minded, but I don't care. I think, I legitimately think there's a chance that this is what's happening. Okay, think about what we've seen with Anthropic and OpenAI recently. Remember, these companies released Claude and ChatGPT originally as demos, as ways to show off what their technology is capable of, so you might buy some intelligence metered from their API. Over the past three or four months, both of them have gravitated toward building a super app. Something that uses the most advanced intelligence to control your computer, that will help you get things done, and in some cases even build new software for you, which has created this big SaaS-pocalypse moment. And also, on the other hand, has helped them raise globs of money, $22 billion in OpenAI's case, $30 billion in Anthropic's case. This has effectively enabled the build-out that they're embarked on, which is going to help them raise more money and grow bigger and build bigger models.
And so as these models get better, I think there is a question that is taking place within these labs. Do we take the most intelligent models that we've built, and do we keep them exclusive to our super apps, to our super agents, or do we make them available to everybody? And I think there is maybe some hesitance there. And wouldn't it be interesting if the plan is, instead of using these instances as demos, like Codex and Claude Code, they wanna build their own products. And to do that, they wanna have the best intelligence. And so therefore we might see more of these releases of, we actually did advance a model, maybe it's not mythical, like a Mythos would suggest, but it's definitely better. And we wanna have the monopoly on the tools that will be able to use them. This is from Martin Casado on Twitter: It's only a matter of time before only the model creators have access to the most powerful models. The rest get access to smaller distilled versions, or access the models through first-party apps and services that don't provide direct access to the token path. This is my belief on what's happened. I don't not like that one. I kind of, okay, so I have always had, I mean, anyone who sells investment advice at a price, it's never made sense to me, because if it was so good, you would just use it for yourself and not need to sell it. Like, when it's pure investment advice. In this case, it could be the same thing. If your model is so good that it can create all the experiences and tools and destroy the entire SaaS industry, why would you give it out and worry about that, rather than just kind of, like, taking over and owning all of human experience and all work? I see what you're saying, but then why Glasswing? Why give it to Google and everyone else? Why not just sit there and churn out the next 12 iterations of the product, and let Mythos, it might harm a few people within your own organization, but it's the price of doing business?
Like, why would you still roll it out in this way? Well, I think you take a step there, and there might be real utility in having this consortium look for the security vulnerabilities with you, because ultimately, like, if you do put it in the hands of people through Claude Code, then you're going to potentially create these risks. Remember, Anthropic isn't giving Microsoft Mythos to sell through Azure, it's giving Microsoft Mythos to test. Fair, fair, that's fair. So is Mythos as earth-shattering and life-changing and dangerous and exciting as it's been made out to be? I don't think so, but I also think it's not a nothing burger. I know it's kind of like the fool's way out, it's somewhere in the middle, but I really believe it's somewhere in the middle. That's it, you know, gun to my head, that's what I believe. But I want to get, what do you think, is it a nothing burger? No, no, it's tough, because of the advances Anthropic has made. I mean, up until Opus 4.5, 4.6, like, they clearly have been doing something right, and it's been impressive over the last year, right? So, like, if anyone is going to make it, but by the same token, I mean, we've seen so much back and forth between who is leading in what, and is it going to be Gemini 3.0, or, GPT-5 was supposed to be big? So it's hard to say, just because, like, past success is not an indicator of where we're going in the future, but if anyone should be positioned, it's still, I have trouble, given the overall context, accepting that it is necessarily as grand as they say it is, or as important and as dangerous, because there's so much incentive, like, to make it out to be that, and, like, the way they rolled it out, I think it's been genius. And I think it's just ahead of the IPO. Again, I think I've been, like, when I think about them in a death race, and again, it was framed as, like, whoever gets out first, like, whoever comes second is actually going to be in a terrible, like, space.
And, like, when I keep thinking of everything in that framing, you start to see everything, like, pushing, what is the best way to actually get to IPO quickly? And right now they have this mythos about them to get there. But I can't believe you did that. I mean, come on, that's what they named it. Okay, it was there for you. It's not Spud. It's not Spud. Not Spud. Okay, just answer this for me. What do you think about the competing first-party and third-party API businesses? Right? What do you mean? I mean, their first-party tools are going to be competing with the users of their technology via API. Isn't that a bigger deal now that this super app stuff is really, yeah, yeah, yeah, no, no, no, no. No one's really talked about this. Wait, wait, wait, so this is a good point. The amount of revenue from the API obviously was kind of, like, the driving force before. Now the kind of, like, main app surface has become a lot more, and we've seen, like, they shut down OpenClaw access to Claude Code, I believe, or sorry, before it was part of, like, your actual subscription, now you're gonna have to be paying by the token. That's a good point. Those two are more and more inherently kind of, like, in competition with each other. I mean, just take Cursor, for example, right? It's like, oh, you know, we're supplying Claude Code through Cursor, Codex through Cursor. I mean, I don't know, I'm sure Cursor still has potential, but the fact that we don't hear about Cursor anymore, because so much of this has moved inside, is almost, like, the canary in the coal mine, so to speak, or the signal of what's to come. Because, you know, again, super app, this is the way they want this to be a venue for AI to control your computer. And when you do that, you know, all of these companies that are paying for, you know, the API might not be so happy. And you have to sort of make a, I think you will eventually have to make a bet on what your business is.
It's very tough to sustain both for a while, and who do you want to have the best models in that case? Me — I mean, if I'm a first party, I'm like, I want them. Yeah, yeah, no, I think this is a good point. I have a feeling we're gonna be talking a lot about this as we go into the IPOs of these companies and that whole process. Because you're right, it's not a full intrinsic conflict between those two — they could just be different business lines — but there's tension, certainly, between those two. And also, though, I hate "super app." I don't know, no one's gonna be WeChat in the US. Do you remember everyone wanted to be a super app in the 2010s? Because you'd hear about it in China. But this is so different though. Super app was like, oh, you open an app, you can do the lottery, you can do Uber, you can do payments, you can read the news. This is different. This is like a really super app. Super app, right? I mean, yes, it's the same word, but it's a completely different use case. Okay, we need a different term then. Super app is too loaded for me. We'll think about it. Mythos — mythos is a good term. Yeah, yeah, okay. So let's just predict the future here — not like we know what's gonna happen. There is an argument to be made that Anthropic will wait until OpenAI releases spud and then just put mythos out there, and it's like a distilled version or — Actually, no, no, I — Is that gonna happen? I'd like it even better if the sequence of events is: Sam releases spud. And again, if you haven't followed or weren't listening last week, while Anthropic's codename for their kind of incredible model is mythos, OpenAI's internal codename for their next model is spud. And if Sam takes spud and is just like, you know what? This is the single most dangerous thing that has ever existed in humanity.
And guess what? Rolling out to US users in the next 24 hours and international in the next 96. I think that'll be such a power move and the most Sam thing ever. And then they're gonna have to follow it. I think they will. You got spudded. You got spudded, spudded. Okay, so I think: to be continued, right? We'll really have to see what this model looks like and how it feels when we use it. But I think at least today, we've certainly presented the for and against arguments for why this might be a step up or why this might be marketing. All right, before we go to break, I wanna hear about the meta-harness. This is obviously gravy for the Harnessive. Shout out to the Harnessive out there. Everybody here with us. What is a meta-harness, Ranjan? Okay, so Stanford just released a new study called the meta-harness. And basically the idea — we've talked about this as one of the big trends, and Alex has been very uncomfortable with the term, but then came to embrace it as we even, I guess, call our listeners the Harnessive. But the idea — No, they've adopted this. They have adopted it. In the comments, we always get: your Harnessive is ready. Harness — where's Ranjan? The Harnessive is waiting. Well, okay, so again, an agentic harness is the idea — and this is what I have been fired up about, what I've been working on at Writer since last July — that you have a set of tools and connected data and underlying foundation models, but the harness is what helps control how agentic workflows are built, how actions are taken, how data moves around, how outputs are fed back into the system. The harness is that entire controlling layer. Now, Stanford came up with the idea of a meta-harness. It's a harness over other harnesses.
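For the Harnessive following along at home, the harness loop described here — a controlling layer that routes a model's tool calls, executes them, and feeds the observations back in — can be sketched in a few lines. This is a toy illustration only, not Writer's product or the Stanford study's setup; the tool names and the scripted stand-in model are invented for the example.

```python
# A toy sketch of an agentic "harness": the layer that routes a model's
# tool calls, executes them, and feeds results back. The tools and the
# scripted fake_model below are invented purely for illustration.

def search_docs(query: str) -> str:
    # Stand-in for a real retrieval tool (connected data).
    return f"docs about {query}"

def send_summary(text: str) -> str:
    # Stand-in for a real action tool (e.g. posting a message).
    return f"sent: {text}"

TOOLS = {"search_docs": search_docs, "send_summary": send_summary}

def fake_model(history):
    # Stand-in for a foundation model: a real harness would call an
    # LLM API here. This one just follows a fixed script.
    step = len(history)
    if step == 1:
        return {"tool": "search_docs", "arg": "quarterly numbers"}
    if step == 2:
        return {"tool": "send_summary", "arg": history[-1]["result"]}
    return {"final": history[-1]["result"]}

def run_harness(task: str, model, max_steps: int = 8):
    """The harness loop: ask the model what to do, execute the requested
    tool, append the observation to the transcript, repeat until done."""
    history = [{"task": task}]
    for _ in range(max_steps):
        decision = model(history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](decision["arg"])
        history.append({"tool": decision["tool"], "result": result})
    raise RuntimeError("no final answer within step budget")

print(run_harness("summarize earnings", fake_model))
```

The point of the "meta-harness" finding is that everything outside `fake_model` — which tools exist, how results are fed back, how many steps are allowed — can be swapped while the model stays fixed, and that swap alone can move benchmark scores dramatically.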
It's the idea that you can change the harness around a fixed model and see a 6x performance gap on the same benchmark. So the more you can improve that harness — and actually have AI working on building the harness and optimizing the harness — the more you can improve the performance of a foundation model. And in the whole product-versus-model debate that we've had for years now on the show, introducing the harness as another surface on which this actually gets solved is interesting to me. But I don't know, I just love the idea that Stanford's got the meta-harness. And who's got the best harness? So maybe mythos won't matter at all. It's all about who's got the best harness. Even though I do understand the harness conceptually, I still hate the word and I'll take it to my grave. I'm never gonna endorse it. Harness, fine, but meta-harness is even worse. I mean, we've really run the gamut here. Mythos, good name; spud, bad name; meta-harness — I'm ready to throw my headphones out the window next time I hear it. I don't know. But it captures — it is what it is. It explains what it's doing. It's harnessing all these tools and models and data and wrangling them somehow. I guess a harness is a horse term, right? Yeah, or climbing — you can use it for climbing. Oh, climbing, yeah. Other potential use cases of harness? We're not gonna go there. I mean, maybe if you're WeChatting. No, don't. Yeah, all right. We're gonna go to a break, and when we come back, we're gonna talk about some pretty concerning news about violence towards folks involved in the AI build-out, and then token maxing. We'll be back right after this. Starting something new isn't just hard. It's terrifying.
So much work goes into this thing that you're not entirely sure will work out, and it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anybody would listen. Now I know it was the right choice. It also helps when you have a partner like Shopify on your side. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US, from household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand's style. Get the word out like you have a marketing team behind you: you can easily create email and social media campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into reality with Shopify today. Sign up for your $1-per-month trial at Shopify.com slash Big Tech. Go to Shopify.com slash Big Tech. That's Shopify.com slash Big Tech. If you think about it, most work isn't actually hard. It's just repetitive: status updates, routing tasks, answering the same internal questions over and over again. These are the things that quietly eat up your team's hours every week. That's where Notion's new custom agents come in. Notion is an AI-powered connected workspace for teams. Notion brings all your notes, docs, and projects into one space that just works. It's seamless, flexible, powerful, and actually fun to use. And with AI built in, you spend less time switching between tools and apps and more time creating great work. And now with Notion's new custom agents, the busy work that used to take hours — or never actually happened at all — runs itself. What's interesting here is these agents don't just respond to prompts. They run on triggers and schedules. So once they're set up, they operate more like embedded systems. Try custom agents now at Notion.com slash Big Tech.
That's all lowercase letters, Notion.com slash Big Tech, to try custom agents today. And when you use our link, you're supporting our show. That's Notion.com slash Big Tech. Notion.com slash Big Tech. This is a paid message from GoFundMe. My name's Ashley Kane. I'm the daddy of a little girl in heaven and a father to two boys here on earth. I've got an incredible relationship with GoFundMe, both personally and via our daughter's foundation, the Azealia Foundation. GoFundMe has allowed me, the foundation, and thousands of people out there to give hope to those in need. You'd actually be surprised how many people out there are willing to show love and support you in your time of need. My advice for anyone that needs to start up a GoFundMe would be: do it. You don't need to feel shame. You don't need to feel guilt. You don't need to feel embarrassment. If you need GoFundMe, use GoFundMe. Start your GoFundMe today at GoFundMe.com. That's GoFundMe.com. G-O-F-U-N-D-M-E.com. This message reflects one person's experience. And we're back here on Big Technology Podcast, Friday edition. All right, crazy story. This happened this week, and no one paid attention to it — I don't know why. From NBC News: Indianapolis councilman says shots fired at his house and a "no data centers" note left at his doorstep. An Indianapolis council member said more than a dozen bullets were fired at his house Monday morning, and a handwritten note reading "no data centers" was left on his doorstep. In a statement, Indianapolis City Council member Ron Gibson said he and his eight-year-old son were not physically harmed, but they were awakened by the sound of gunfire. "Just steps from where those bullets struck is our dining room table, where my son had been playing with his Legos the day before. The reality is deeply unsettling. This was not just an attack on my home, but endangered my child and disrupted the safety of our entire neighborhood." Pretty scary.
And we talked recently about how data centers have become so unpopular in the United States. To me this is, first of all, just disturbing — it should never ever come to this. And it does follow a trend of violence toward AI infrastructure, including — this is from Polymarket, though I'm pretty sure I've seen news reports about these separately — food delivery robots in Los Angeles, Philadelphia, and Chicago facing a rise in violent attacks from anti-clanker activists. What do you make of this, Ranjan? Okay, so I'm gonna separate the anti-clanker activists and food delivery robots from the data center question, which I think is fascinating. The story — I hadn't realized before that apparently Indianapolis has a number of state tax incentives, and they've grown 40 new data centers over the last few years. There are a bunch of massive companies building out there; every big tech giant is investing. So it's acutely an area that is feeling this. To me the most interesting, or scary, thing is that right now it's "data centers are taking the jobs" or "taking the water." But if energy prices continue to rise, given what's been happening, and if resources start getting constrained more — there's so much around the resource side of it — when it becomes more tangible, this stuff just gets a lot scarier. So I think it is probably the clearest physical manifestation of the whole thing. Again, mythos crawling around some wires and sending an email is interesting, but you don't see it. This is a giant building being constructed in the middle of your town. I feel these are gonna continue to be — I don't wanna use the word target, but certainly they're a visual representation of what's going on.
Yeah, I mean, I wrote about this in Big Technology today: these buildings can be faceless and imposing, and often are. And they're mostly symbols of tech's interest in shipping and delivering this technology despite the uncertainty it causes in people's lives. If you listen to the way tech or AI executives speak, they'll always say, yeah, there'll be some displacement, but we think the benefits of the technology will outweigh the drawbacks. And sure, long-term they might, but we all know that the people who went through the Industrial Revolution didn't exactly have a good time, even though we've all benefited now that society reoriented itself after that painful period. But people are growing increasingly upset here, and I don't think they have a clear articulation of the benefits of this technology yet. And by the way, just before we went to air, this story broke in Wired: suspect arrested for allegedly throwing a Molotov cocktail at Sam Altman's home. San Francisco police arrested a suspect early Friday morning for allegedly attacking the home of OpenAI CEO Sam Altman and making threats outside the company's headquarters. OpenAI sent a note to employees about the incident early Friday: "Early this morning, someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully no one was hurt. We deeply appreciate how quickly the SFPD responded." I mean, I don't know, I think this is crazy. I'm just stunned that people are actually being violent against these — I'll include the robots — the robots, the data centers, and now the leaders. It is worrying, because especially on the data center front, the way this technology advances, all the labs have said, is by increasing the physical footprint of data centers. And now you have violence against them, and you also have political opposition against them.
And obviously you don't ever want to see violence anywhere. And on top of that, we already see that the data center build-out is slowing — maybe 50%, according to some reports, of the ones on target to be built this year won't be — and this makes it even more difficult. Yeah, well, on that last point, I kind of feel you're gonna see more and more announcements about slowing data center growth, or a lack of follow-through on planned data centers, and the Iran war, or geopolitics generally, or access to the resources required will be front and center in those stories, separate from the actual demand for the compute. So I don't know, that part is gonna be interesting. I mean, we're in a midterm election year. Surprisingly — I guess there's enough going on in the world — that part of the conversation hasn't really started heating up, but there's no doubt in my mind AI is gonna be front and center. And it just makes for such a good villain, because — we've talked about this plenty — the industry has not put the most likable people front and center representing the technology. There's not been a compelling story about how this is good for you, and all the people front and center are telling you that half of jobs are gonna be gone and this is gonna be the most dangerous technology, yet it is making certain pockets of people ungodly rich. So I think it's a pretty good villain. And no access to any of the upside on the public markets right now, which is a problem — not that that's gonna be the main issue, but it's also one of the factors here. And a few weeks ago, we talked about AI's unpopularity and its need for a public face that's gonna rally support around it, whether Jensen could be that person or not. Yeah, man, we wondered what the downstream effects were going to be.
And clearly they are here. So I would say the violence is maybe a symptom of that discontent, but we're now starting to see the manifestation of it come to fruition. And of course there's this bill that Bernie and AOC introduced for a national data center moratorium, and there's no chance of that passing. But state by state, you could see real pushback to this in the United States. And in fact, as I was doing my research and writing about this today for Big Technology, I found this story. From CNBC: Maine is set to become the first state with a data center ban. Maine is poised to implement the first statewide ban on data center construction, a move that could clear the way for other states to adopt similar measures and pump the brakes on a growing industry. Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027. Do you think this is gonna happen more and more? It's happening. Maine — I feel Maine would be — Maine's got a lot of land, but I guess there are water constraints. Yeah, I mean, here's my thing. Politicians read polls. The polls are terrible for AI right now. Terrible. And unlike social media, unlike, let's say, software, you do have a say in whether this technology progresses, because you can stop the data center builds — the data centers are so foundational here. Wait, that's interesting. And so whereas these companies were completely unencumbered by government when they were just building social networks, it's not the same thing. Hold on, hold on. That's an interesting angle, because with social media, I guess you could push for regulation — it's just that everyone is too addicted to social media and cannot stop using it, so they don't want to.
Actually, do you think that's the issue? And again, this is my personal view on how bad social media can be for society, but everyone got so addicted to it that by the time anyone tried to regulate it, it was too late — versus most people still haven't really felt what AI can do positively for them in their life, and the industry hasn't really explained it well. And that's why this pushback is happening at the beginning. If people had very quickly mobilized against social media in 2009, it would be the equivalent of that. Yeah, well, the polling shows that if you use AI, you're much more likely to be in support of it than against it. But there are two sides of it, right? There's: do I use it? And then, we don't really know what the job implications are. We all have a thought on whether AI is gonna cause mass job loss or not, but you can also be in a situation where you use AI and you like it, and you also got fired because your boss thinks they can do the same work with three employees instead of 17. Now, that is a completely different element of it versus social media. But yeah — and this is where, given how good Anthropic is at communications, given what we saw with mythos and everything I outlined — just make people like AI a little more. Do some of this creative communication strategy and just make people be like, oh, AI is cool. That's all. I mean, I think they should. I think that in retrospect, their Super Bowl ad, even though it was praised, was kind of a miss, because it ended up bringing down the category as opposed to making people excited about AI. Exactly. And then meanwhile, you have Google trying to be super emotional and sentimental, and still it was just the most random, least-connected-to-Gemini ad imaginable. So yeah. Kantrowitz and Roy. Let's — what? Oh, you liked it?
I was gonna say — I don't wanna spend too much time on this because we covered it last week — TBPN coming into OpenAI. Like, the argument — Oh, I was off last week. Yeah. The argument OpenAI would make would be: listen, these guys are great content marketers, and AI needs good content marketing. So maybe it wasn't Jensen — maybe it was the TBPN brothers all along. I mean, yeah, I know that was last week's news and I was skiing in Utah, but man, that one doesn't make sense at all to me. They know how to speak to people who already love AI. They're not gonna convince AOC to not build a data center. Anyone who is an anti-data-center activist already is not gonna listen to TBPN and be like, now I get it, now I understand. I don't know. No, no, no. The point is — and I made the argument against last week, so let me try the argument for this week — the point is that these guys could help show those benefits of AI, because they're AI-literate and also somewhat likable, and do that on the content marketing side of OpenAI rather than on the TBPN show. And I'm saying they're likable to people who already like AI. And I think they're great. No, you're right. But I don't think anyone who hates AI has even heard of them. Well, one last thing. Okay. So OpenAI has a marketing machine, right? We're talking about how this marketing machine needs to show the benefits of AI. So by acquiring them, not only do they get the show, they get these two guys in-house as effectively content marketers who can help with that side of things — not just use their platform, but maybe shape the messaging. Yeah, no, but I'm still gonna have to give the edge to Anthropic on this one. Again, going back to everything we were talking about earlier: rolling out a tight communication strategy that actually gets out the message you want — everyone bites.
It creates — Scott Bessent is creating a council of Wall Street advisors to address the potential threats of your upcoming model. I mean, guys, TBPN is not gonna do that. Whoever is doing that over at Anthropic, God bless them, because that's communications. All right, so we could keep going on this, but I think we both agree there's a clear image problem here, and it's just snowballing and getting worse. And this is not even gonna help. I don't know if you saw this New York Times story about a company called MedV. There's been talk about whether somebody is gonna build the $1 billion one-person company. I think the Times wrote the story thinking they found it: how AI helped one man and his brother build a $1.8 billion company. Matthew Gallagher took just two months, $20,000, and more than a dozen artificial intelligence tools to get a startup off the ground. From his house in Los Angeles, Gallagher used AI to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads, and handle customer service. He created AI systems to analyze his business performance, and he outsourced the other stuff he couldn't do himself. His startup, MedV, a telehealth provider of GLP-1 weight loss drugs, got 300 customers in its first month, and in its second month it gained 1,000 more. In 2025 it made — sorry, it made $401 million in sales this year, and they're on track to do $1.8 billion in sales. A $1.8 billion company with just two employees — in the age of AI, it's increasingly possible. Let's pause here. What do you think about this, before we go into all the problems with MedV? Okay, I've got some thoughts on this one. My first: "on track to do $1.8 billion in sales, a $1.8 billion company with just two employees, in the age of AI it's increasingly possible" — I do wanna call out "on track to do $1.8 billion in sales."
Regular listeners will know of my hatred of ARR as a term. We have no idea what that means. They have not made $1.8 billion. I was a little disappointed — and I think Erin Griffith, who wrote the story for The New York Times, is an incredible reporter whom I've followed for years — but on that one: did they extrapolate one month of revenue? One week? A few months? Whatever it was, that number already feels inflated. But I will say, a lot of the backlash I saw — and Alex has a Techdirt article linked here — actually does point out that it is an AI story, one that's really bad for the industry. The framing was that MedV's success has little to do with AI. This is from Techdirt: "and quite a lot to do with fake doctors, deepfaked before-and-after photos, misleading ads, actual snake oil, and the kind of old-fashioned deceptive marketing that has separated marks from their money for centuries." So much came out — there were deepfake doctors and completely AI-generated ads that were completely misleading. But he was using AI, and he stitched together all these different parts of the GLP-1 supply chain — and I'm sure there's scammy stuff going on everywhere in it — but he did it, and you could picture doing it, and any of us could picture doing it with AI. So I actually think, the revenue number aside, this is a terrifying, but probably more true than people are giving it credit for, story about an AI-first business. Man, I had the same reaction. I think it would have been great if they had just switched the tone a little bit, right? The MedV story shows how a little AI and maybe — I don't wanna say scamming, but whatever's close to that — can get you to scale really quick. And he picked the right industry, GLP-1s, and no one has any illusions about what GLP-1s do or do not do, right?
And maybe I'm giving too much slack here, but the fact that he made AI images of people's weight loss — it's like, okay, yeah, of course the guy misrepresented what he was doing on a number of fronts, but we know what people come to GLP-1s for, and he delivered it to people at scale with AI. But yeah, the Times did end up with an editor's note: after this article was published, many readers noted that MedV was facing legal and regulatory actions over its business practices. The article should have included that information to give readers a fuller picture of the scrutiny the company was facing. The Times updated the article to include a warning letter from the FDA and a pending class-action lawsuit accusing MedV of violating California's anti-spam law. You could probably say the same thing about a lot of GLP-1 startups right now. As we're talking, now I'm even more — it is true, it's true. I mean, again, headline revenue number aside, this actually is a really important story. But again, yeah, it's how they framed it. If it's AI turbocharging the ability for people to scale sketchy businesses — like, if you have the world's first AI-scale drug dealer, where one person with some drones and whatever else can now operate an entire cartel — could that be the first billion-dollar AI business? But yeah, it's the framing. It is important. It actually is important. And I think it's real. I just don't think it's necessarily a billion-dollar business, but I think it's real. So I mean, it could be, right? I guess we're both MedV-pilled. I just signed up right now. I've got a full-year supply of Mounjaro from — well, also — Dr. Samantha Altmanson.
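For anyone who wants the run-rate complaint spelled out, the arithmetic is simple: take revenue from some short window, annualize it, and the shorter the window, the easier the headline is to inflate. A quick sketch — all the figures below are made up for illustration, not MedV's or anyone's actual numbers:

```python
# Back-of-envelope on "annualized run rate" (ARR): take a short revenue
# window and multiply it out to a full year. All numbers here are
# hypothetical, invented for illustration.

def run_rate(revenue: float, window_days: float) -> float:
    """Annualize revenue from an arbitrary window. The whole trick:
    the shorter the window, the more a hot streak gets inflated."""
    return revenue * 365.0 / window_days

# A single hypothetical $50M week annualizes to roughly $2.6B "ARR"...
strong_week = run_rate(50e6, 7)
print(round(strong_week / 1e9, 2))   # ~2.61

# ...while a hypothetical $400M quarter annualizes to only ~$1.6B,
# even though it reflects far more actual sales.
full_quarter = run_rate(400e6, 91)
print(round(full_quarter / 1e9, 2))  # ~1.6
```

That's why "on track to do $1.8 billion" tells you almost nothing without knowing the window it was extrapolated from.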
Again, this is where — not to get too into it — but the way revenue would be recognized anyway is that this person is taking a tiny fraction of whatever the actual end price of the product is, and could even be selling it at a loss. What was the actual — Not at a loss. Probably not. Very little overhead. Yeah, a good scam. He's just, what, drop-shipping GLP-1s to people from some compounding pharmacy? Yeah, no, it's not just drop-shipping. I was reading — and I have only very superficial knowledge of this — but from what I was reading, there are even more parts to it, like how you can get the prescription done automatically. There are all these other parts of the GLP-1 supply chain, outside of traditional retail and drop-shipping, and all these players rising up to fill and automate them. So he basically had a whole — it's kind of like agencies in the traditional marketing world. He just had a network of those, was connected to them, and was communicating with them via AI. This guy's diabolical. All right, we gotta cover one more story before we get out of here. It's called token maxing. All right: Meta employees vie for AI "token legend" status. Employees at Meta who wanna show off their AI super-user chops are competing on an internal leaderboard for status as a "session immortal" or, even better, a "token legend." The rankings, set up by a Meta employee using internal company data, measure how many tokens employees are burning through. Dubbed Claudenomics after the flagship product from Anthropic, the leaderboard aggregates AI usage from 85,000 Meta employees, listing the top 250 power users. The practice is emblematic of Silicon Valley's newest form of conspicuous consumption, known as token maxing. Since the story went out, Meta took the thing down because they were embarrassed by it.
But do you agree with me that this is obviously not the right way to incentivize people to use tokens? If you gamify token usage, you're just gonna get people burning tokens to compete with each other. Okay, man, this one hits home very hard. So at Writer, we actually had a similar thing — it wasn't a leaderboard, but we had a report on token usage by employee that we were looking at internally. And someone screenshotted the top of it, and my name was on it — I was like third among all employees. And I've told you, I'm cranking workflows and agents all day long, and I'm obsessed with it. They posted it on LinkedIn, and then I started getting texts from friends who were like, oh wait, I just saw this thing going around. So this kind of hit home in its own small way for me. And we were even discussing: what does this mean? It caused a bit of a stir for us internally. And here's what I think: it actually is a good thing in terms of simply recognizing who's using a lot of AI. At this exact moment, I do think using a lot of AI is the only way to learn, and the right way is constantly experimenting in every single possible way. Now, if it ever became important in terms of your review with your boss, I think then the incentives become too screwed up, and the whole thing becomes a little more corrupt, performative, and weird. But it was interesting, because you could just see it right there. And even at my work, the people I'm always talking to — oh, every morning, what did you build? check out this cool thing I built — those were the people at the top of the leaderboard.
So when it's not being done in a performative way, it's actually a good indicator of who's really heads-down, obsessed with this. But on the Meta side, I was also wondering: if that was true at Meta, and they have unlimited budget, what percentage of Anthropic's ARR was just Meta engineers melting tokens? Yeah, so first of all, I've heard now from multiple people that this is something that happens at many companies. I guess it's everywhere now, because they are trying to incentivize you to use the tools. So okay, I get that. But I will also say that Anthropic this week just came out with new revenue numbers. They are doing $30 billion ARR now. And I'm pretty sure what that is, is you take the 10 minutes in which Meta pays its token bill and you multiply that by whatever number gets you to a year. I mean, you know my rant on this one. Everyone's like, they went from 12 billion to 30 billion in two and a half months. Just say the freaking numbers, Anthropic, come on. It's okay — it just doesn't sound as exciting. And it is exciting. If you're doing $2.2 billion, or whatever it is, in revenue in a month, that's insane. But yeah, I don't know — with no clarity on that. And it's just a bunch of Claude heads on Claude — and what was the name of the Facebook thing? Claudenomics. And it's Meta, just sitting there, melting Claude tokens. That's what it is. All right, well, soon enough we'll have access to mythos, and then that leaderboard will rise even further. And then we'll get some real numbers, by the way, because sooner or later these companies are gonna file to go public, and we will certainly be able to play hype-or-true as we look through that S-1. Last question before we drop: will they hire law firms and banks to go public? You know it. You know it. They use Salesforce, so yes. Obviously. Definitely. What do you think? I think Anthropic is gonna do something interesting. We've seen it.
Speaking of marketing — it'd be the most baller move to just say: we did not hire a law firm, but we are so confident in all of our filings. Like, why not? Why not? Yeah, it'll be the first harness IPO. First harness. And everyone will be thrilled. All right, Ranjan, great to have you back. Looking forward to next week. Thanks again for coming on. See you next week. See you next week. Thank you, everybody, for listening and watching, and we'll see you next time on Big Technology Podcast. At Wealthify, we've made it really simple to take control of your pension with confidence. For starters, our team of investment experts manage your pension so you can make the most of your time. And when you deposit or transfer to a Wealthify pension, you could earn between £50 and £1,000 cashback. Take the tiring out of retiring with Wealthify. T&Cs and minimum investment apply; registration closes on the 31st of May 2026; when investing, your capital is at risk.