TBPN

Big Tech to Pay for Power, Anthropic Abandons Safety, the Adoption Paradox | Diet TBPN

31 min
Feb 26, 2026
Summary

The episode explores the disconnect between AI adoption statistics and reality, with 80% of firms reporting no productivity impact despite widespread usage. It also covers Trump's announcement requiring tech companies to build their own power plants for data centers, and Anthropic's decision to soften safety policies to remain competitive.

Insights
  • AI adoption measurement is fundamentally flawed because many users don't realize they're using AI features embedded in existing software
  • There's a significant gap between Silicon Valley expectations of AI job displacement and what business leaders actually believe will happen
  • The concentration of venture funding in AI labs is unprecedented, with a few companies raising half of all available capital
  • Energy costs from data centers are becoming the primary public concern about AI, outweighing job displacement fears
  • Safety-focused AI companies are abandoning their principles when faced with competitive pressure
Trends
  • AI adoption paradox: high usage but low perceived value among business leaders
  • Shift from AI safety focus to competitive positioning in the industry
  • Tech companies being required to build independent power infrastructure
  • Massive concentration of venture capital in AI companies staying private longer
  • Growing public concern about AI's impact on energy costs and infrastructure
  • Jailbreaking AI models becoming a profitable business opportunity
  • War gaming simulations showing concerning AI decision-making patterns
  • Stablecoin companies making strategic investments in AI infrastructure
Companies
Anthropic
Softened AI safety policies to stay competitive; its Claude model was used in a government data breach
Stripe
Referenced for real AI spending data contradicting low adoption survey results
OpenAI
Mentioned as raising $100 billion and targeting $1 trillion IPO valuation
SpaceX
Targeting $1.5 trillion valuation for potential largest IPO in history
Tesla
Referenced in comparison to new electric supercar beating Roadster to market
Perplexity
Launched new Computer product unifying AI capabilities into one system
Tether
Made $200 million strategic investment in WAP at $1.6 billion valuation
Meta
Planning stablecoin comeback according to Mark Zuckerberg
Nvidia
Earnings day referenced as major market event
Toast
Example of AI features embedded in existing business software
People
John Collison
Stripe executive quoted saying 'no one wants a refund on their tokens'
Donald Trump
Announced ratepayer protection pledge requiring tech companies to build own power plants
Dario Amodei
Anthropic CEO whose essays on AI safety and authoritarian governments were referenced
Mark Zuckerberg
Planning Meta's stablecoin comeback initiative
Rob Wiblin
Podcast host who interviewed guest about AI alignment being dangerous
Max Harms
Guest arguing that aligning AI to human values is actively dangerous
Quotes
"No one wants a refund on their tokens. Everyone is using AI. The spend is increasing."
John Collison
"We're telling the major tech companies that they have the obligation to provide for their own power needs."
Donald Trump
"80% of firms reported that AI was having no impact on their productivity or employment."
"It's a bull market in yarn spinning, folks. Get ready, get out the yarn and start spinning."
Full Transcript
4 Speakers
Speaker A

So I was nerding out about this Fed paper because when you told John Collison 80% of businesses are getting no value from AI, I'm glad he wasn't here in person because he was about to throw down.

0:02

Speaker B

He was about to open up a can of whoop.

0:15

Speaker A

It was about to be a bar fight in the Cheeky pub.

0:18

Speaker B

In the cheeky.

0:20

Speaker A

In the Guinness pub. No, seriously, it was a great question, because I think we all agree that AI adoption is real, it's valuable, it's happening. But it is a very interesting statistic, and I think it's a mistake for tech people to dismiss this stat because of where it's coming from. It's not coming from some doomer anti-AI blogger who's going for clicks. This is the National Bureau of Economic Research. This is a research paper that could be circulated, probably will be circulated, within the Fed. And it's already getting quoted by the New York Times in that dot-com bubble, AI bubble piece. And I'm just thinking it through: this could be something where you see Fed policy or government legislation that's mismatched with what is actually happening in reality. And so we should go through some of the stats to actually break this down, because the headline is that 80% of firms reported AI was having no impact on their productivity or employment. And that's actually something of a misquote: what they mean is that it's not shaping their hiring plans yet; they actually are using AI. So basically this stat comes from a survey by the National Bureau of Economic Research. And it's pretty interesting, because a lot of the polls that you see online are online surveys. They run some digital ads and they say, are you a CFO of a company? We don't really care what company. We'll pay you $10 to take this quick survey.

0:21

Speaker B

And what kind of people want to make $10?

1:56

Speaker A

A lot of liars. There's a lot of liars out there who say, I am absolutely a CFO, and please send that Amazon gift card right my way. And so for this one, they actually did the work. They called up and ID-verified, and then also reality-checked the position. So if you say, yeah, I'm the Chief Pirate Officer, I'm the ninja hero, whatever, you've got some fake title there, you're out of the survey. So they did some reality checking, and they pulled together 6,000 of these business leaders across firms that are domiciled in the US, UK, Germany, and Australia. The line from John Collison that has been sort of going viral, the one he dropped on sources, I think he said it to us too. It's a good line: no one wants a refund on their tokens. Everyone is using AI. The spend is increasing.

1:58

Speaker B

Although I'm sure some CEOs heard that and thought, I kind of do want a refund.

2:43

Speaker A

I'd love a refund.

2:48

Speaker B

I had one team member go absolutely Haywire and spend 50 grand.

2:49

Speaker A

He one-shotted it. He claims that he rebuilt our entire ERP, but I fired it up and it didn't even have HTTPS. What's going on?

2:54

Speaker B

The Mac Mini wasn't even plugged in.

3:02

Speaker A

Yeah, the Mac Mini wasn't even plugged in. He was just chatting. But clearly there is a disconnect. The Stripe data is very real, the value creation is very real, the revenue is very real at the labs. But when just random Joe Schmo CFOs and CEOs get a call from the Fed, they say, yeah, we're not really getting that much value out of AI. And so there are questions that you need to dig into. There are actually four key findings; the one headline that the New York Times is pushing is this 80% number: 80% report little or no impact on employment or productivity. But there are actually a bunch of positive signals, a bunch of mixed signals in here. So first, 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two-thirds of top executives regularly use AI, their average use is only 1.5 hours per week, and one quarter of executives report no AI use at all. Not for.

3:03

Speaker B

Why would I need that? I have a telephone.

4:07

Speaker A

The last major finding that we should touch on is that firms predict sizable impacts over the next three years, forecasting AI will boost productivity. Sizable impacts: a productivity increase of 1.4%, which is very sizable if you're an economic researcher, but not particularly sizable if you're in the fast-takeoff scenario. Measuring AI adoption is a mess. Many people use AI without even knowing they're using AI, because it's buried deep in SaaS products that they already daily drive. Like, if I run a coffee shop and I'm using Toast for payment processing, there are probably some AI features in there already. And when you go to type in, okay, we're adding a new cinnamon roll to the menu, there's probably a button now that just says, do you want to generate an image of a cinnamon roll? You could still upload one, that's probably a feature that already exists, but we could also just generate one for you, and you can probably click that. But you're not like, oh yeah, I'm an AI power user, just because you happen to use Toast, and Toast happened to have implemented some gen-AI feature that you haven't really dug into yet. So some AI isn't even detectable. You could be talking to a customer support agent on the phone that is AI-generated and not be able to tell. We talked about that airline interaction that got something like 100,000 likes.

4:09

Speaker B

And Grace, the woman that had the interaction, came into the chat yesterday and said it was real.

5:27

Speaker A

Yeah, it was real. Yeah.

5:33

Speaker B

Maneuvered the clanker.

5:34

Speaker A

Yeah. But still, think about it: she's clearly on X, in tech, very AI-aware. There are probably tons of people out there saying, oh yeah, every once in a while I have to call this service, and now the person that picks up is responding pretty quickly, but they haven't noticed that they're actually interacting with AI or using AI in some capacity.

5:35

Speaker B

Yeah. I still think there's room for a research firm focused entirely on diffusion. So if you had a group of 10 to 20 people that were spending all their time talking to business owners and executives, operators, and getting a sense of how they're actually using this stuff, I think you could put together some really compelling reports around it that would be pretty useful to everyone from AI companies to Wall Street.

5:57

Speaker A

Yeah. AdoptionMax, after ClusterMax and InferenceMax. They had to rename it; apparently SemiAnalysis can't use "Max" for some reason, so InferenceMax is now InferenceX. And everyone was saying you need to just change it to InferenceMock, which would have been amazing, but InferenceX obviously has a much more professional tone to it. What does it mean to actually adopt AI? That's very vague. This paper defines it pretty broadly: machine learning for data processing, which doesn't even necessarily mean LLMs, just ML, which has been around for a very long time; text generation using LLMs, which is what we think of as ChatGPT; visual content creation, so diffusion models; but also robotics and autonomous vehicles; and there's a category just for "other." And firms can select multiple. So if you selected yes on any of those, you go in the bucket of AI adopter, and 78% of firms in the United States said yes, they are using AI by this definition. And you can dig in further: text generation using LLMs is the single most common use case, at about 41% of firms. So flip that around: 59% of firms aren't even using LLMs for text generation or proofreading. But again, there are a lot of companies where it's like, yeah, we don't generate a lot of text. Across the four countries that were surveyed, 69% of firms total said they currently use AI; I think Australia was behind a little bit, dragging that down. Only 75% of firms expect to be using AI technology sometime over the next three years.

6:23

Speaker B

Tyler is going to have a heart attack.

7:55

Speaker A

We're going to bump that up to 75%. And this is weird data. You can jump in with your pushback, whatever you want. But my point is not that they're right; I think they're wrong to predict this. I think AI adoption will be very steep and very dramatic. But it's important to recognize that this is a paper that people will be citing, a paper that will shape policy, a paper that reveals some misconception about the impact AI is having in firms.

7:57

Speaker C

Yeah, I still think it's so hard to actually quantify this.

8:25

Speaker A

The perception, I think, still does matter, because there's a little bit of potential self-referentiality here, where firms see, oh, AI adoption's low, I don't need to go and figure out how to adopt it. And so that's something that I'm also keeping an eye on. The biggest thing was that there was a massive divergence in the expected employment impact. Basically, 63% of firms still expect no impact from AI, and that just completely goes against everything everyone's saying in Silicon Valley. So there's still a lot of optimism among managers that AI will create more opportunities and new jobs, even if some jobs become obsolete. My read on this data is that the tech talking point about 50% of white-collar work going away is not a broadly held belief among average business leaders. Now, they might be wrong. I do think AI progress is pacing way ahead of public expectations, and most managers are months behind when it comes to understanding frontier competitors' capabilities. The bigger takeaway for me is just that the survey may be somewhat self-reinforcing. I'll close by thinking about the nature of polling and how you actually get stronger data on AI adoption. I was thinking back to the presidential cycle. During the presidential election, pollsters would call people sort of at random and ask them, who are you voting for? And a lot of people would lie, or they wouldn't say, or they wouldn't pick up the phone if they were voting for a particular candidate. And so the polling numbers did not wind up matching the final election results very closely. And so there was this story about neighbor polling, which was more effective, where instead of calling someone and asking, who are you voting for, the pollster calls and asks, who do you think your neighbors are voting for? Who's more popular in your community? Who's more popular on your city block, on your street? 
And that wound up sort of sidestepping the gap between stated and revealed preference, and it increased accuracy. And so I'd like to see a survey of AI adoption using this technique. Anyway, we should watch a little bit of a clip from the State of the Union, because Donald Trump addressed some of the energy production questions with regard to how hyperscalers will be offsetting the impacts.

8:29

Speaker D

Many Americans are also concerned that energy demand from AI data centers could unfairly drive up their electric utility bills. Tonight, I'm pleased to announce that I have negotiated the new ratepayer protection pledge. You know what that is? We're telling the major tech companies that they have the obligation to provide for their own power needs. They can build their own power plants as part of their factory so that no one's prices will go up. And in many cases, prices of electricity will go down for the community, and very substantially down. This is a unique strategy never used in this country before. We have an old grid. It could never handle the kind of numbers, the amount of electricity that's needed. So I'm telling them they can build their own plant; they're going to produce their own electricity. It will ensure the companies' ability to get electricity while at the same time lowering prices of electricity for you, and it could be very substantial for all of your cities and towns. You're going to see some good things happen over the next number of years.

10:51

Speaker A

What's your reaction to that?

11:49

Speaker B

I think it's a good start. I don't know that it will quell any of the fears around data centers, just given that people see the potential for this massive structure going up; they have so much fear about it. And again, I think it's clearly going to be necessary to continue to build data centers in heavily populated areas.

11:51

Speaker A

But how would you rank the fears currently? Because I'd put "my energy bill goes up, and that puts pressure on my income and ability to live my life" at pretty much the top. And then the water thing felt secondary, but also important. And then there's the existential fear of the doom and apocalypse. There's also job displacement. And then there's also just, I don't like the slop.

12:12

Speaker B

I would rank the electricity bill going up as the pain today.

12:44

Speaker A

And it's so real.

12:48

Speaker B

There's fear, and it's easy to imagine. And then there's the fear around the job-loss narrative, which is sort of secondary. Yeah. And opposing a data center in your local area feels like a way to have some agency around that overall job-loss concern.

12:49

Speaker A

Yeah. AI is going to get blamed even if there's an. Even if, like, tariffs drive high unemployment. Like, if people lose their jobs, like, AI is going to be a scapegoat and it's going to be used both by executives to say it's the perfect

13:06

Speaker B

scapegoat for executives and for people frustrated with the job market.

13:22

Speaker A

Yeah. Yeah. It's like, oh, my business isn't doing poorly right now; I'm laying off people because I'm getting so much benefit from AI; the stock should actually go up; we're more efficient. There's going to be a lot of that. But it does feel like it's a little bit early. Whereas there are a lot of people that can just hold up their power bill and show you year-over-year increases. And if that goes away, and people don't feel that anymore and don't have that evidence to share, I think that take gets debunked pretty quickly.

13:26

Speaker C

I would say I mostly disagree with the idea that rising energy prices are the main reason to be against AI. The rational thing to do then is to say, okay, before you build a data center in my community, you have to build a power plant, so then my energy prices go down. But no one's doing that. If you look at, like, the protests.

13:55

Speaker A

Yeah.

14:10

Speaker C

They're not saying, please build a power plant first. They're saying, like, it's going to destroy the environment or the water stuff, or you're going to take all the jobs because it's going to, like, we need

14:10

Speaker A

to send you to that New Brunswick, New Jersey protest. Build the nuclear power plant first.

14:20

Speaker C

So I think it's much more about, basically, job loss, or, like, oh, the AI is stealing the IP of Disney or whatever.

14:25

Speaker A

Anyway, happy Nvidia Day to all who celebrate. Except the bears. Forget them. Everyone's getting fired up for Nvidia earnings. It's gonna be a fun one today.

14:33

Speaker B

So now the real news.

14:43

Speaker A

This has been destroyed.

14:45

Speaker B

This is tearing up the timeline: a new Guinness World Record. And I want to ask John if you think this should actually count.

14:45

Speaker A

What is this?

14:54

Speaker B

This is a Chinese hypercar going for the fastest-drift record. I've never heard of the brand, ever.

14:56

Speaker A

That is incredible.

15:01

Speaker B

But here's the thing. He doesn't. He doesn't actually pull out of it, does he?

15:02

Speaker A

Just crash.

15:07

Speaker B

Kind of just U turns. It's like a really fast U turn.

15:08

Speaker A

I think this counts as a drift. That's definitely drifting. U-turning counts. If you saw that car going by, you'd be like, wow, that's drifting. The Hyptec SSR, formerly the Hyper SSR, is a high-performance, all-electric, two-door supercar. I mean, this is crazy. This is out before the Tesla Roadster. We've never seen an electric two-door supercar like this. 1,225 horsepower, 0 to 60 in 1.9 seconds. And it set the Guinness World Record for the fastest electric car drift at 213 km/h, which is really, really insane.

15:10

Speaker B

I feel like you have to actually stay in the turn and not do a U turn.

15:46

Speaker A

What do you mean, stay in the turn?

15:52

Speaker C

Yeah, I don't think it counts.

15:53

Speaker A

You don't think it counts?

15:54

Speaker C

For what it's worth, I don't think that counts.

15:55

Speaker B

Theoretically, if you were drifting... the way I think of drifting, you're drifting around a corner, around a turn. And if you were to drift and spin out during the drift, then that doesn't count. If somebody was doing that on a track, you'd be like, you didn't drift around the corner. You spun out.

15:56

Speaker A

Yeah. Okay. Okay. Yeah. The top comment is "fastest spin-out." "That's a power slide at best." "Whoever called this drifting... that's not drifting. That's losing control." Yeah. Yes. The chat does not like the drift. The fake drift. Call the Guinness Book of World Records again. Reset. Reset.

16:13

Speaker B

Completely stolen drift. Valor Palmer is sharing something from Compound, the research from their annual meeting.

16:32

Speaker A

Yes.

16:43

Speaker B

They're showing dollars invested in the top 10 companies versus all the others, as a percentage of overall funding. So you can see there's just heavy, heavy, heavy concentration in a few names.

16:44

Speaker A

Is this.

16:55

Speaker B

I'd say overall, this is.

16:56

Speaker A

Or is this... is this Coatue?

16:57

Speaker B

No, this is... oh, the source is Coatue. Coatue is part of.

16:58

Speaker A

Okay.

17:02

Speaker B

They are part of, I would say, driving this data. Part

17:02

Speaker A

of the problem, part of the opportunity. I mean that's what happens in companies.

17:05

Speaker B

So much of this is about the AI labs just raising more money than any private.

17:08

Speaker A

It's never happened before, ever. $200 billion. Venture as a class in a good year will do like 400 billion. And across OpenAI at 100 billion, 30 for Anthropic, 20 for xAI, then a bunch of neolabs all picking up a billion each, you very quickly get to a few companies raising half of all the money. And that's shown here. It's an incredible amount of concentration. I think a lot of it is due to companies staying private this long. What was Bill Gurley saying? He was saying Amazon went public sub a billion dollars. When Facebook went public at like 60 billion, it was like, whoa, crazy, they waited way too long. And now multiple trillion-dollar companies are still private, which is just an incredible capital sink. So I don't know, should you even put those in the same bucket? Are they even venture bets at this point? If any venture capital fund is putting that in their venture bucket at this point, it feels ridiculous compared to growth scale. I mean, you're bigger than probably 90% of the S&P. It's a completely different business.

17:13

Speaker B

Some kind of relevant data: we're about to witness three of the largest IPOs in history. SpaceX is targeting one and a half trillion, OpenAI aims for one trillion, and Anthropic is valued at 380 billion. Combined, they're at 2.9 trillion in potential market cap. The scale is unprecedented. But the real problem isn't the market cap, it's the float. Typical IPOs offer 15 to 25% of their shares to the public markets. This creates enough liquidity for price discovery while allowing founders and early investors to maintain control. Facebook floated 15% at the 60 billion that you mentioned, and actually traded down pretty much immediately. Right. Google floated 19%. Alibaba floated 15%. At a 15% float, here's what these three IPOs would require: SpaceX would be 225 or 300 billion, OpenAI would be 150 billion, Anthropic would be 57 billion.
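The float arithmetic in that rundown can be sanity-checked with a quick sketch. The valuations and the 15% float fraction are the figures quoted in the discussion; the code itself is just illustrative math, not anything from the episode's source:

```python
# Sanity-check the quoted IPO float sizes: float = market cap x float fraction.
# Valuations are in billions of dollars, as quoted in the discussion above.
valuations = {"SpaceX": 1500, "OpenAI": 1000, "Anthropic": 380}

FLOAT_FRACTION = 0.15  # typical IPOs float 15-25% of shares; use the low end

for company, market_cap in valuations.items():
    float_size = market_cap * FLOAT_FRACTION
    print(f"{company}: ~${float_size:.0f}B offered at a 15% float")
# SpaceX -> ~$225B, OpenAI -> ~$150B, Anthropic -> ~$57B
```

Together that is roughly $430 billion of new equity for public markets to absorb, which is the point of the comparison to Aramco and Alibaba below.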

18:24

Speaker A

That's a lot of smackaroos.

19:13

Speaker B

He was... yeah, a lot of dollars. He was comparing that to Saudi Aramco, Alibaba, and SoftBank combined at IPO. I believe Saudi Aramco raised 29 billion at a $1.7 trillion market cap. So he's making the case that you can't really model how the public markets will absorb these companies off of Saudi Aramco, even though from a top-line market cap standpoint it is a good proxy. We'll see what the labs end up doing. They are obviously wildly capital-intensive businesses, and you can imagine they raise quite a bit more than the Aramcos or the Alibabas.

19:15

Speaker A

Saudi Aramco was such a wild ride. I feel like they were trying to

20:02

Speaker B

IPO for, like... The San Francisco company.

20:07

Speaker A

It is, yeah. Founded in California. I remember hearing Saudi Aramco IPO rumors in like 2015. I think it actually kicked off in 2016, and they finally got out in 2019. It was the largest IPO ever. There were like a million investment banks attached, going all over the world marshaling capital.

20:09

Speaker B

Anthropic dials back AI safety commitments: competitive pressure prompts it to pivot away from a more cautious stance. Anthropic, the company known for its devotion to safety, is scaling back that commitment. The company said Tuesday it is softening its core safety policy to stay competitive with other AI labs. Anthropic previously paused development work on a model if it could be classified as dangerous, but it said it would end that practice if a comparable or superior model was released by a competitor. Given that they are at the frontier, that kind of opens them up to, I would say, perpetually avoiding some of their prior policies.

20:28

Speaker A

Sure, sure, sure.

21:05

Speaker B

The changes are a dramatic shift from two and a half years ago, when the guardrails Anthropic published guiding the development and testing of its new models established the company as one of the most safety-conscious players in the space. Anthropic faces intense competition from rivals, which regularly release cutting-edge models. It's also locked in a battle with the Defense Department over how its Claude suite is used, after it told the Pentagon it couldn't be used for domestic surveillance or autonomous lethal activities. Anthropic said the safety policy change is an update based on the speed of AI's development and a lack of federal AI regulations. Anthropic, which started as an AI safety research lab, has battled the Trump admin by advocating for state and federal rules on model transparency and guardrails. The admin has, of course, sought to curb states' ability to regulate AI. The obvious criticism here would be that you were heavily focused on safety when you were far away from, I would say, leading in AI, and switching up on their day one now that there's real competition feels a little self-serving.

21:06

Speaker A

It's possible the money changed them.

22:17

Speaker B

It's possible the money changed them. It's possible they always planned to switch up on their day one once they got to the level they're at now.

22:19

Speaker C

It could just be that they realized alignment's pretty easy and we don't need

22:28

Speaker A

to worry about that.

22:32

Speaker B

Yes. What's this new study that's showing? They were doing some war game simulation and almost every model was choosing to drop nukes.

22:33

Speaker A

Really? That's crazy. That's not good. I don't like that at all. The interesting impetus of this is the line that the policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. I still feel like there's a lack of communication around what safety orientation at the federal level means. Like, yes, okay, we'll pass the bill that says the AI can't kill everyone. Obviously everyone supports that. But what does it actually mean in practice?

22:42

Speaker B

Because I think part of why.

23:15

Speaker A

Oh, "that's dangerous" means a million things to different people.

23:16

Speaker B

Yeah. Part of why I think it's fascinating is they've been pushing for regulation, as much regulation as possible, seemingly. Yeah. And they're kind of saying, hey, we're not getting what we want, so now we're not even going to play by the set of rules that we created for ourselves, because we just want to compete and win.

23:19

Speaker A

Yeah. I mean, going back to the protesters, there are protesters that would say training on intellectual property is dangerous: it's dangerous to my career as a writer, it's dangerous to my career as an illustrator. And so this question of danger is just too vague, and no one has really been able to concretize it in a meaningful way. And I think that's why it's not getting traction on Capitol Hill.

23:40

Speaker C

Yeah, I think there are just so many ways that you can define safety. So if you read Dario's essays, the thing he brings up over and over is, okay, we can't let AI get into the hands of, like, authoritarian governments.

24:04

Speaker A

Sure.

24:13

Speaker C

So there's like a real like safety narrative that you could do, which is that like, regardless of if our models are like pretty safe, they still need to be better than like China's, for example.

24:14

Speaker A

Yeah.

24:24

Speaker C

Because if China gets ahead of us, an authoritarian government, right, it's, like, very bad. So even if, you know, we're releasing models that are less safe than we would like, as long as they're better than China's, that's still, like, a pro-safety issue, right?

24:25

Speaker B

Well, they'll just be distilled within six weeks.

24:37

Speaker C

Yeah, but like, obviously, like, I would be very surprised if Anthropic keeps like, the same, like, guardrails of, like, API access.

24:41

Speaker A

Well, Buko Capital Bloke has a solution. He says it's simple. We kill Claude. It's simple. We kill the Batman.

24:48

Speaker B

Well, that was in regards to the SaaSpocalypse.

24:57

Speaker A

Okay, okay. Who knows? There's so many headlines and the timeline moves so quickly.

24:59

Speaker B

Anthropic antagonizing the Department of War, the open source community, the entire media industry, the general population, other developers, other labs, foreign governments, and nearly every single person on Earth. What is the plan here? Sell Claude subscriptions to aliens?

25:05

Speaker A

Edward says it ain't easy having principles.

25:19

Speaker B

Hackers used Claude to steal 150 gigabytes of Mexican government data.

25:23

Speaker A

That's crazy.

25:29

Speaker B

They told Claude they were doing a bug bounty. Claude initially refused. The hacker just kept asking and managed to successfully steal some documents. Apparently it's four state governments: 195 million taxpayer records, voter records, government credentials.

25:30

Speaker A

Anthropic investigated the claims, disrupted the activity, and banned the accounts involved. The company feeds examples of malicious activity back into Claude to learn from it. In this instance, the hacker was able to continuously probe Claude until he was able to jailbreak it. I was listening to someone talk about how, like, the ability to jailbreak has generated him tens of thousands of dollars in profit. It was kind of a hustle-mindset guy. And I was just laughing, because whatever you're doing after you jailbreak it is probably not good, and you should probably stop. But he was talking about, like, I can sell so many more courses now that I've jailbroken ChatGPT, or whatever.

25:49

Speaker B

Duran says not to worry, they'll hit usage limits before anything bad can happen. This was interesting. Rob Wiblin had a guest on his podcast, and the guest was saying that every AI lab is working to make its AI helpful, harmless, and honest. The guest thinks this is a complete wrong turn and that aligning AI to human values is actively dangerous today. Nominative determinism, because the guest's name is Max Harms.

26:30

Speaker A

Max Harms. I feel that name. Maybe you got to go with Maxwell or something. I don't know. Perplexity.

27:01

Speaker B

Computer, computer, computer, computer. Launch Community Computer Vibrio.

27:08

Speaker A

What is Perplexity Computer? Let's pull up this video. The official Perplexity account says Perplexity Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end to end. Okay, so it should be able to get a soundboard app into the App Store, right? Manage any project, code, deploy, design, research. It should be able to do that from start to finish. One prompt: a soundboard in the App Store using the TBPN sound effects, which are available online, which we have up there. This is a good benchmark. Let's give it a try. And you can give it a try at Perplexity. Go check it out.

27:14

Speaker B

I'm just very curious to see how this does. It feels like, again, going from consumer LLMs to a net new product that is objectively just as competitive. We'll see. The bestsellers on Substack for finance are all doomers.

28:01

Speaker A

We got to do TBPN.

28:21

Speaker B

And of course, Citrini is not.

28:22

Speaker A

This is so obvious. No, no. Yeah, we need to Truth Zone this a little bit.

28:23

Speaker B

He's a doomer.

28:27

Speaker A

He's not a doomer. Very bullish, very much an AI bull, but he definitely shot to the top of virality and the top of the charts on the back of doom. I've lived this on YouTube: you put a negative title up and you get 10 times more views, but they're lower quality, and so you've got to balance all that out. It's really hard to go viral with "everything's fine, everything's going well, don't worry. Don't click this because you're scared; click this because everything is kind of the same as it always has been, and you're going to be fine, and this stuff's cool, but it's not really going to change that much; it's going to be pretty incremental." That is not getting clicks. You need to be telling this whole tale. You need to be spinning a yarn. It's a bull market in yarn spinning, folks. Get ready, get out the yarn and start spinning. So, news from.

28:28

Speaker B

Get the gong.

29:18

Speaker A

Get the gong. Okay, hit me, tell me.

29:19

Speaker B

Stephen over at WAP says, we're excited to announce that Tether, the largest stablecoin company in the world, is making a strategic investment of $200 million into WAP, valuing us at $1.6 billion. Our partnership with Tether marks a major step in building the world's largest Internet market. Tether is committed to enabling everyone in the world to participate in the new Internet economy. The way humans work and create value is changing fast. The world needs both an open Internet market giving people a platform to conduct business, as well as a transparent payments network. Fast, cheap, global.

29:22

Speaker A

Exactly. And so, yeah, fast, cheap, global.

29:52

Speaker B

Isaac is saying what we're all thinking: ready for this to be over. Talking about the Warner Bros. Discovery and Netflix deal.

29:55

Speaker A

It's in the paper every single day. Every single day: Paramount increases Warner bid. We get it, you guys want to acquire this company. Mark Zuckerberg is planning a stablecoin comeback. They also have a banger deal with AMD. And if you head to the bar this weekend and drink too much, you should just say you were the victim of a distillation attack. That's the correct turn of phrase. Anyway, thank you for watching. Leave us five stars on Apple Podcasts and Spotify. Have a wonderful day.

30:05