Hard Fork

Tech Grapples With ICE + Casey Tries Clawdbot, a Risky New A.I. Assistant + HatGPT

71 min
Jan 30, 2026
Summary

This Hard Fork episode covers three main topics: the tech industry's response to ICE operations in Minneapolis and the role of technology in political violence, Casey's risky experiment with Moltbot (formerly Clawdbot) as a personal AI assistant, and a HatGPT segment discussing various tech news stories.

Insights
  • AI-generated and manipulated media is increasingly being used by government agencies to shape narratives around controversial events
  • There's a growing divide between early AI adopters who experiment with risky tools like Moltbot and mainstream users still cautious about basic AI adoption
  • Personal AI agents represent a compelling vision of the future but currently pose significant security risks and reliability issues
  • Social media platforms have become battlegrounds where both protesters and government agencies compete to control narratives through content creation
  • The gap between AI capabilities in coding versus other domains continues to widen, with programming seeing the most dramatic productivity gains
Trends
  • Government agencies developing content creation teams to control narratives on social media
  • Rise of local AI agents that can access personal data and services
  • Increasing polarization between AI early adopters and skeptics
  • AI-generated media being used in political contexts to manipulate public opinion
  • Corporate leaders making cautious political statements to balance employee and government pressures
  • Open source AI tools enabling risky experimentation by tech enthusiasts
  • Video evidence losing its traditional role as definitive proof due to AI manipulation capabilities
  • Smartphones becoming central tools in confrontations between citizens and law enforcement
  • AI coding tools fundamentally changing programming workflows for experienced developers
  • Enterprise AI adoption lagging behind individual experimentation due to institutional barriers
Companies
OpenAI
CEO Sam Altman made internal statements about ICE operations going too far
Anthropic
CEO Dario Amodei criticized Minneapolis events; Claude AI models discussed throughout episode
Apple
Tim Cook sent internal memo calling for de-escalation; reportedly developing AI wearable pin
X (formerly Twitter)
Platform where manipulated images spread and government narrative battles occur
ICE
Federal agency conducting operations in Minneapolis with dedicated content creation teams
Rippling
Episode sponsor providing HR automation platform
Amazon
Accidentally sent employees calendar invite about Project Dawn layoffs affecting 16,000 people
TikTok
Experienced data center outage after transfer to US owners, triggering censorship concerns
LinkedIn
Introducing AI coding proficiency badges for user profiles
SpaceX
Considering June IPO timed to planetary alignment and Elon Musk's birthday
Steak 'n Shake
Added $5 million in Bitcoin exposure as part of burger-to-bitcoin transformation
Netflix
Producing series about FTX scandal called 'The Altruists'
Microsoft
Being sued by New York Times over alleged copyright violations
GitHub
Platform where Moltbot code is distributed and AI coding tools are prevalent
Telegram
Messaging app that can integrate with Moltbot, creating security risks
People
Sam Altman
OpenAI CEO who made internal statements criticizing ICE operations as going too far
Dario Amodei
Anthropic CEO who called Minneapolis events 'horror' and wrote essay on AI risks
Tim Cook
Apple CEO who sent internal memo calling for de-escalation in Minneapolis
Alex Preddy
US citizen fatally shot by federal agents in Minneapolis while holding phone
Kristi Noem
DHS Secretary who brought right-wing influencers to ICE operations
J.D. Vance
Vice President who retweeted AI-altered image of civil rights attorney
Peter Steinberger
Developer who created Moltbot (formerly Clawdbot) personal AI assistant
Andrej Karpathy
AI researcher who called AI coding tools the biggest change to programming in two decades
Caroline Ellison
Former FTX executive released from federal custody after 14 months
Michael Lewis
Author who listens to 'Let It Go' from Frozen on repeat while writing books
Tim Walz
Minnesota Governor who urged citizens to carry phones to document atrocities
Nekima Levy Armstrong
Civil rights attorney whose image was AI-altered by White House to show her crying
Federico Viticci
Mac Stories writer who first posted about Clawdbot, inspiring Casey's experiment
Elon Musk
SpaceX owner timing IPO to planetary alignment and his birthday
Graham Granger
Alaska student arrested for eating AI-generated art in protest
Quotes
"The memes will continue"
White House spokesman, when asked about sharing doctored images
"Nothing is true and everything is possible"
Casey Newton, describing Putin's approach to blurring truth and fiction
"Programs like Claude Code and Codex are easily the biggest change to my basic work coding workflow in two decades of programming"
Andrej Karpathy, on the impact of AI coding tools
"It'll either be a good time or a good story"
Casey Newton, explaining his risky Moltbot experiment
"We're all trying to find the guy who built the Torment Nexus"
Casey Newton, commenting on Dario Amodei's AI risk essay
Full Transcript
5 Speakers
Speaker A

Most all-in-one HR systems are a patchwork of disconnected and manual tools. Rippling is totally automated. If you promote an employee, Rippling can automatically handle necessary updates, from payroll taxes and provisioning new app permissions to assigning required manager training. That's why Rippling is the number one rated human capital management suite on G2, TrustRadius and Gartner. If you're ready to run the backbone of your business on one unified platform, head to rippling.com/hardfork and sign up today. That's R-I-P-P-L-I-N-G dot com slash hardfork to sign up. I'm going Monk mode.

0:00

Speaker B

That's what I'm calling it: Monk mode. 'Cause I need to finish this book.

0:36

Speaker C

So I'm like, so you're embracing celibacy?

0:40

Speaker B

No, I'm locking myself in an Airbnb and I'm writing 14 hours a day.

0:45

Speaker C

And now that's actually the premise of The Shining, the Stephen King novel. So how is it working out for you?

0:50

Speaker B

It's not great.

0:55

Speaker C

Are you seeing any rivers of blood flowing down the hallways yet?

0:56

Speaker B

Red rum. But no, it's going well. I'm in the home stretch now. I can sort of see the light at the end of the tunnel. It's very nice.

1:00

Speaker C

And tell us about the experience. Cause I've been in similar situations where you sort of look at yourself in the mirror and you're like, all right, Newton, shape up now. Time to buckle down. No more visiting your little comfort websites anymore. You actually gotta, you know, make the clicky clacky noise on the keyboard. Does that actually work for you?

1:09

Speaker B

Yes. I figured out a productivity stack that works for me. Let me tell you that.

1:27

Speaker C

Tell us a stack.

1:30

Speaker B

So I bring my big ass curved monitor, I sit myself down, I put on my noise canceling headphones, and you.

1:32

Speaker C

Have a silent disco.

1:39

Speaker B

I listen to Somebody Told Me by the Killers on repeat, like infinite loop. And I imbibe dangerous quantities of the Celsius energy drink. And I write until I fall asleep.

1:41

Speaker C

Now, what is it about the Killers' Somebody Told Me? For our younger listeners who haven't heard it, Somebody Told Me is a song about how somebody told Brandon, the lead singer of the Killers, that they had a girlfriend who looked like a boyfriend that he had in February of last year. Yeah, and it's something that's like a code phrase, and it triggers something inside you. It's like, it activates you. Like a sleeper agent.

1:54

Speaker B

Yes, exactly. Honestly. So here's the thing. Yeah, this is a tip that I picked up from Michael Lewis, the famous author. He said that he listens to Let It Go from Frozen over and over again while he writes his books.

2:18

Speaker C

In case you don't know, Michael Lewis is a five year old girl.

2:29

Speaker B

Yes. And so I thought, well, that can't possibly work. But it's true. I tried it out and it works. It doesn't matter what the song is. What matters is that it has the right tempo. And after about the seventh time through the song, the words just melt away and it just becomes white noise and it puts you in kind of this flow state.

2:32

Speaker C

I can't wait to repeat this story and tell people that it's effective. And they'll say, well, how do you know all this? And I'll say, well, somebody told me.

2:51

Speaker B

I'm Kevin Roose, a tech columnist at the New York Times.

3:04

Speaker C

I'm Casey Newton from Platformer.

3:07

Speaker B

And this is Hard Fork.

3:08

Speaker C

This week: phone-to-phone combat in the streets. We'll talk about the tech industry's response to ICE killings in Minneapolis. Then, it's the reason Mac Minis are sold out across San Francisco: we'll tell you about the sensation now known as Moltbot. And finally, it's time once again for HatGPT.

3:09


Speaker C

So today we want to start by turning our attention to Minneapolis and the ongoing confrontation between ICE officers and protesters. Kevin and I have both been horrified by the fatal shooting of Alex Preddy, the second fatal shooting of a US citizen by federal agents this month. And while we don't often turn our attention directly to matters of politics and protests on the show, we both believe that the tech industry is playing a role in this story, and that we would be remiss if we did not look at the elements of what is going on in Minneapolis that are ours.

3:40

Speaker B

Yeah, I've been really horrified by what's been going on in Minneapolis. I have family in Minneapolis. They're scared. Some of them are leaving. It is a really tense and hard time in American civic life right now. And I think we should talk today about not just how the big tech companies and their leaders are acting right now, but what role technology itself is playing in some of the reaction to these shootings.

4:20

Speaker C

Yeah, and I think the tech industry is playing a significant role in what is happening right now. It is an infrastructure provider behind many of the key tools that are being used in this fight. I would say primarily a lot of the surveillance tech that ICE is relying on to identify and detain migrants, but also the social platforms where, once again, we're seeing violence being turned into content and influencers on both sides trying to make it go viral. And then you just have the dimension of the fact that there is actually now state power, the government, that is using these tools in ways that we have not seen before in America. So there's a lot to talk about here.

4:44

Speaker B

Yeah. So let's start with the CEOs. At the start of this week, Sam Altman of OpenAI, Dario Amodei of Anthropic, and Tim Cook of Apple each made statements or sent internal memos to their employees kind of lamenting what was going on. These were slightly different in the way they were worded, but they all sort of came after employees at some of these companies had been urging their CEOs to speak up, to say something. Sam Altman sent an internal Slack message to OpenAI employees that said that he had spent some time talking with Trump officials, and he described ICE as going too far. He also said that he thought that part of loving the country is the American duty to push back against overreach. Dario Amodei also discussed it in a post he made on X, where he called the events in Minnesota horror. And Tim Cook sent an internal message to Apple employees saying that, quote, this is a time for de-escalation. So we're not seeing the kind of full-throated denunciations that we might have seen with the first Trump presidency from some of these leaders. But they are saying, at least to their own employees, that they are not happy about what's going on.

5:24

Speaker C

Yeah. And I would say that in several of these cases, we are seeing these CEOs say about the least that they possibly could. Like you can tell, I would say, particularly in the Altman and Cook statements, they are very afraid of irritating the White House. And at the same time they're facing a lot of pressure from their employees to say something. And so I think this represents their best efforts to find a middle path. Personally, I found it a little weak.

6:37

Speaker B

Yeah. Like, as a general rule, I don't like when CEOs or other business leaders talk insincerely out of obligation. I think we saw some of that several years ago from the left. I think we saw some of that during the sort of first months of the Trump administration, aimed at appeasing the right. Like, I just don't like when CEOs talk in ways that are not their own views. In some of these cases, I'm sure these are sincerely held views, but it just seemed to me like, why are we doing this?

7:05

Speaker C

Well, I think the reason that we're doing this is because these guys are politicians, too. They are, in their own way, heads of state. They represent hundreds of millions of users and giant employee bases. And so they have to get in there and play politics. And I'm sure they hate it. But the fact is that in these cases, they have to.

7:33

Speaker B

Yeah, and I think we should say, like, it is risky to do this in this current environment. Like, there was a post from Chris Olah, who's one of the co-founders of Anthropic, who posted a very, like, heartfelt and, I think, sincere message about how horrified he was by what was happening in Minneapolis. And Katie Miller, the wife of Stephen Miller, the sort of White House advisor, reposted it and basically said, look, this is the AI companies. They're building this bias into their models. If they talk like this, this is how their AI models are going to talk. So there is a cost politically to saying something right now. And so I don't want to be totally dismissive of the people who are doing that. I think they are displaying some backbone. But as you said, in some of these cases, they're saying about as little as they could. So, Casey, one of the last times we talked about political violence on the show was after the assassination of Charlie Kirk, when we talked about some of the online commentary and how weaponized it was feeling out there, how these sort of platforms were becoming places full of rage bait and incendiary, provocative content, and how that was sort of, like, bad for us as a society. I'm curious: does this feel like another case of that to you, or what are you seeing out there?

7:49

Speaker C

I think there are definitely ways where it's similar, Kevin. People are drawn to conflict, and so are social networks. Right? This is why Charlie Kirk was so successful in his way: he would create these conflicts by visiting college campuses, staging debates, and those conflicts would then, you know, go viral in ways that served him. We're seeing a similar thing here, where the Trump administration uses spectacle as one of its main weapons. And if it can create a crisis that begets a bunch of video, that can serve their interest, too. I think the real difference here is that Charlie Kirk did not have the power of the state behind him, and Trump does. Right? Like, Trump has an army behind him. He has thousands of ICE agents and other federal agents. And so we have just seen, after he returned to power, the way that various Trump administration officials would bring along influencers and their entourages when they would go about their business. So last year, Kristi Noem brought a bunch of right-wing influencers with her when she visited an ICE operation happening in Portland. And another important thing here, Kevin, is the way that the Minneapolis operation began after a video alleging fraud in daycare centers went viral. So in this very real way, a social platform is setting the President's policy agenda. And that's very different from the Kirk stuff. The stakes are a lot higher here.

9:03

Speaker B

Right. I think in a lot of public spaces, what's happening in Minneapolis is not a 50-50 issue. Like, I think it is actually a pretty widely held belief that federal agents should not be shooting protesters in the streets. But on X, you see people defending it. You see people spreading these conspiracy theories. You see people circulating this manipulated footage and these deepfaked images that purport to show, you know, more of a violent standoff than there was. And it just seems like a platform that has either decided to put its thumb on the scale in favor of the administration and its responses, or where a lot of this is just going unchecked.

10:24

Speaker C

Yeah, I mean, you know, you predicted after Trump won that X was going to become state media. And I think that it basically has. This is where the Trump administration goes to get their policy ideas and to advance, you know, whatever operation they're working on at the moment. And so you have just seen, all across X, administration officials making their case that this ICE campaign is justified or necessary.

11:05

Speaker B

Yeah. And I think another difference between what's going on now and the Charlie Kirk assassination is that the government itself is getting involved in actually producing some of this content. There was a great story by two reporters at the Washington Post last year about ICE's media strategy, which was based on a bunch of internal agency documents and messages they got. And what I found remarkable about this is that ICE, like some other federal agencies, now essentially has a team of content creators who work at the agency and who are spending every day trying to sort of steer the narrative using, you know, the same techniques that a brand or a celebrity would. They're out there, you know, using paid social media tools. They have a team of producers and editors and video makers who are out there making clips. It seems like these agencies are not just thinking that their jobs are enforcing the law. It seems like they are also trying to control the narrative around how they enforce the law.

11:28

Speaker C

And I would say almost to the point where, like, winning on social media has become almost the entire point. Right. Like, yes, there are policy objectives here. But in a very real way, they seem secondary to getting the most retweets.

12:28

Speaker B

Yes.

12:39

Speaker C

Yeah. So that feels new.

12:40

Speaker B

Right? So in addition to the government actively working to kind of seize and control the narrative online about these events, we also have the rise of AI tools that are making it very hard for people to know what's real and what's not. My colleagues at the New York Times had some reports on the way that AI was being used to alter images of Alex Preddy, the victim of this shooting. One was doctored to make him look like he was pointing a gun at an agent, although it was actually a phone. Another was used to enhance a blurry freeze frame from a video of this shooting.

12:41

Speaker C

And we should put enhance in air quotes because it didn't actually enhance it. It just made a bunch of stuff up.

13:16

Speaker B

Right, exactly. So it was sort of upscaled in this way that introduced some errors into the image. So, Casey, I can't imagine this was surprising to you. We've seen this kind of thing after most major news events in the past couple of years, but did anything stick out to you about the actual media that was being circulated?

13:21

Speaker C

Well, I mean, when it comes to, like, online detectives who want to participate in the story of the day, that is not new. That's been going on for well over a decade. But, Kevin, the White House released this AI-altered image of a civil rights attorney named Nekima Levy Armstrong, who went to an anti-ICE protest last week and was arrested. And in the aftermath of her arrest, the White House puts out this image to make her look like she's crying her eyes out, when in fact she was not. And it's kind of a weird story, because DHS had already released the original unaltered image before this altered one came out. But once the one of her crying, which again, she wasn't doing, comes out, J.D. Vance, the Vice President, retweets the image. So the thing that this just has me thinking about is this phrase that got associated with Russia in the 2010s, which is: nothing is true and everything is possible. And it just referred to the way that Putin and his lieutenants would try to shape a world where you just sort of couldn't trust anything that you were seeing anymore. That the state invested huge resources into just blurring the line between truth and fiction. And if you wanted to know what was really happening, you were just going to have to invest enormous amounts of resources. And the way that this serves the administration is this concept we've talked about a few times on the show that I love, which is called the Liar's Dividend, which means that because people know that evidence can be fabricated, no matter what evidence you see now, you're always wondering: was this maybe fabricated? And that just erodes trust in our society more broadly. So I think it's hugely irresponsible for the Vice President to be sharing these sorts of images. But when the administration has been asked, what are you doing here? Like, why are you sharing these obviously doctored images?
A spokesman for the White House just said, the memes will continue. Which I honestly found chilling.

13:38

Speaker B

Yeah. What do you think the platform should be doing about this? Should they be taking down doctored images? Should they be applying some sort of label? Like, what is the correct response here?

15:34

Speaker C

Well, so I go back to the way that platforms handled this during the first Trump administration, where they were much likelier to act. You know, during the 2020 election, when President Trump was sharing misinformation about mail-in voting ballots, Twitter put a label on that directed people to high-quality information about how mail-in voting actually worked. Now, of course, Twitter doesn't exist. Elon Musk owns X. And if anybody is going to correct anything, it's going to be in this Community Notes feature, which is just sort of this volunteer crowdsourcing effort. And I'm glad that it exists; I'd rather have it than not have it. But of course, it's unpredictable and doesn't always appear in a timely fashion. It's my view that platforms have a strong interest in labeling obviously AI-generated content. I think that if you want people to trust your platform, you should take steps to help them understand whether what they're seeing is real or false. But of course, in many cases, they're going to be terrified to do that, because there might be a mean tweet about them.

15:42

Speaker B

Right, exactly. And I think, too, for me, what I've been thinking about is less the platform response and more the AI companies. Obviously, you know, I have AI on the brain sort of permanently. But I think this is actually why these companies should support regulation for the technology that they're building. Because right now they're in kind of this fragile state where they can't really speak up against the administration in very forceful terms without risking some kind of retaliation, some new law, or just a bunch of mean tweets. Like, there are actual political risks to taking a stand on something, even something like this. If they were to come out now and say, oh, we're going to apply a label to AI-generated images that are misleading, or we were to have some kind of watermarking system, I think some people in the Trump administration would interpret that as, like, a response to what they were doing, and they would get mad about it.

16:40

Speaker C

Yeah, they call it censorship.

17:32

Speaker B

Exactly. So I think this is actually the case for, like, regulation that is passed by Congress, that is signed into law, and that persists across administrations, so that you don't have to kind of operate in this, like, weird limbo where you're trying to do the right thing, but you're scared that the administration will punish you for that, and anything that you do is going to be seen, correctly or not, as a reactive countermeasure, instead of just having one set of rules that apply across all administrations.

17:33

Speaker C

I think one set of rules is a great idea. I think we should pursue this. I also think that if you find yourself in a situation where there's a presidential administration that decides it wants to lie about what is happening and is willing to fabricate evidence to support those lies, platform policy is only going to help you so much. I want it to exist. I want to be able to trust in it. I just want to also point out there are limits to it.

18:01

Speaker B

Exactly.

18:22

Speaker C

I want to talk about this other dynamic here, Kevin, which in my head, I've been calling the battle of phone versus phone.

18:23

Speaker B

Right.

18:30

Speaker C

Smartphones are playing a central role in this conflict. And while, of course, phones have long been part of protests, I feel like we're seeing a direct confrontation between the state and the protesters involving the phone in a way that we haven't quite seen before.

18:32

Speaker B

Yeah, this stuck out to me, too, as I was looking at some of the awful videos of the shooting of Alex Preddy. Everyone's holding phones. Right. This is now, like, a standard part of any confrontation with law enforcement in this country is if there's anyone around to witness it, there's going to be phones pointed at it. Someone is going to be filming that.

18:51

Speaker C

Absolutely. Alex Preddy was holding a phone when he was shot. Renee Goode's partner was filming when Renee was shot. And videos of their killings have spread on social networks and, I think, have significantly shifted public opinion against the Trump administration. You know, Governor Tim Walz of Minnesota said, carry your phone with you at all times in order to help us create a database of the atrocities against Minnesotans. Okay. So this is a dynamic I think we have seen before. You can go all the way back to the Arab Spring and look at the way that people were posting on Twitter, you know, images of state violence. Here is, I think, the twist. The Trump administration has been doing two things. One, they've been putting pressure on American citizens not to film. Right? The DHS Secretary, Kristi Noem, said in a press briefing in July: violence is anything that threatens them and their safety. So it is doxing them. It's videotaping them where they're at when they're out on operations. Also, Tricia McLaughlin, who's the DHS assistant secretary for public affairs, said in a statement, quote, videotaping ICE law enforcement and posting photos and videos of them online is doxing our agents. We will prosecute those who illegally harass ICE agents to the fullest extent of the law. So there's this recognition in the Trump administration that being filmed and having your video put on social media is dangerous to you. And we've started to see a lot of threats against people who are doing this.

19:14

Speaker B

And we should just say, like, it is not illegal to film law enforcement. It is not doxing to post the name or the image of a federal agent who is operating on behalf of the state. Like, these people wear badges, or at least they're supposed to, because we want this sort of public accountability. And if they are acting in ways that are illegal or unethical, we want to be able to hold those individuals accountable. But I agree that there's this sort of feeling among certain members of the administration that these phones present a danger.

20:39

Speaker C

Absolutely. And when you consider the fact that all of these ICE agents are just permanently masked when they're out in the world, it makes you wonder, Kevin: what are they more scared of? Is it the guns that some of these protesters may be legally carrying, as Alex Preddy was, or is it the phones that are in their hands? Is it the threat that someone will find out who they are and what they're doing, and link it back to their actual identity? There's this final thing that we're seeing in this phone versus phone battle, though, Kevin. And this is where I also think that we're in just kind of some new territory, which is that the Trump administration is bringing their own phones to the battle, right? They're bringing along those conservative influencers. They're giving them talking points and ideas for what sorts of videos they might post. There's a long tradition of the government using celebrities and influencers to promote their worldview or, you know, various things that they're doing. But the thing in Minneapolis that just seems really different to me is it is almost like phone-to-phone combat between Americans. Right? Where you have the protesters who are trying to document what is happening, and you have the administration, which is trying to shoot photos and videos to suggest that these are sort of, like, very effective anti-immigration raids. Yeah.

21:12

Speaker B

And I think for me, the question is: what happens when the videos, the artifacts that these phones are producing, themselves become untrustworthy? You know, when, as you mentioned, people were filming the Arab Spring to try to bring accountability to their leaders, when people started taping their encounters with law enforcement 10 or 15 years ago, it was seen as sort of verifiable proof that what you saw filmed from these cameras was what actually happened. And I don't think we can make that assumption anymore. I think we are no longer in the sort of era of video being proof that something happened, or pictures, or any other form of media. I think we have entered this kind of postmodern state where things need to be interpreted in context, and there is no one canonical source of truth.

22:21

Speaker C

That is true. And yet, at the same time, I have a more optimistic view of where we are, at least in this moment, because this particular shooting, the Preddy shooting, was shot from many different angles. Journalists put a lot of work in to verify the videos that were posted. And interestingly, as they spread on social media, there was not a big movement to say these videos are fake. There just wasn't. And in fact, I think the video of Preddy's killing shocked the conscience of many Americans, some of whom had previously been fully aligned with the Trump administration in these anti-immigration raids. Right? We saw the NRA criticize the administration's portrayal of Alex Preddy's killing, which I think really shocked a lot of people. And so, in a very grim moment, the silver lining for me was that actually, Americans do still trust the video that they are seeing, including the video that's being shared on social media. And they are not immediately leaping to: I am being lied to, this is just sort of manufactured evidence from partisans. Now, of course, the question is: how long will that state of affairs hold as AI tools improve? Does that bargain sort of break? But I don't know. There's something that is happening here where it's like, if enough people see it, if the video is captured from enough different angles, and if the administration isn't willing to just lie at an ever greater scale and manufacture ever more evidence to support their version, maybe the truth survives here.

23:17

Speaker B

Yeah. Well, that's a hopeful note to end this discussion on, and I'm glad we talked about this. Next up, why Casey is giving over his entire digital identity to an app called Moltbot.

24:54

Speaker E

This podcast is supported by Outshift, Cisco's incubation engine. Everyone's racing to build the biggest AI model, but here's what's missing: we didn't scale human intelligence by making smarter individuals. We built shared language so collective knowledge could spread quickly across tribes. AI hasn't had its cognitive evolution yet. Right now, when one AI agent solves a problem, that knowledge ends with it. Every agent starts from scratch. That's why Outshift by Cisco is building the Internet of Cognition, infrastructure that lets agents share intent, build persistent memory, and innovate together. Learn more at outshift.com. As a small business owner, you don't have the luxury of clocking out early. Your business is on your mind 24/7. So when you're hiring, you need a partner that works just as hard as you do. That hiring partner is LinkedIn Jobs. When you clock out, LinkedIn clocks in. LinkedIn makes it easy to post your job for free, share it with your network, and get qualified candidates that you can manage all in one place. LinkedIn's new feature can help you write job descriptions and then quickly get your job in front of the right people with deep candidate insights. Either post your job for free or pay to promote. Promoted jobs get three times more qualified applicants. At the end of the day, the most important thing to your small business is the quality of candidates, and with LinkedIn you can feel confident that you're getting the best. Find out why more than two and a half million small businesses use LinkedIn for hiring today. Find your next great hire on LinkedIn. Post your job for free at LinkedIn.com/hardfork. That's LinkedIn.com/hardfork to post your job for free. Terms and conditions apply.

25:32

Speaker B

Casey, did you watch the Alex Honnold free solo of the Taipei 101 on Netflix?

27:05

Speaker C

I was able to watch about 60 seconds of it, and my palms started sweating so much I thought I was in 8 Mile.

27:12

Speaker B

Yes, it was very nerve-racking. But speaking of extreme risk takers, daredevils, people who take their lives into their own hands: I hear you have actually been experimenting with some new technology this week.

27:19

Speaker C

I have, Kevin. I have installed Moltbot, formerly known as Clawdbot, onto my laptop, and I've said damn the consequences.

27:33

Speaker B

Yes, and this is kind of the AI version of a free solo up the Taipei 101. My understanding is this is, like, insane behavior. So please tell me about Clawdbot, which we should say is not affiliated with the Anthropic Claude chatbot. It is spelled, or was spelled, C...

27:42

Speaker C

L-A-W-D. Clawd, like a lobster claw. It's Clawd.

28:02

Speaker B

Yes, and it has since changed its name to Moltbot, I guess because of some copyright concerns.

28:07

Speaker C

Well, when lobsters shed their shells, they're molting. And so now the former Clawdbot is Moltbot.

28:12

Speaker B

And this thing was the talk of the town over the past weekend. I saw many, many people who work in tech in San Francisco becoming obsessed with this tool, talking about how it was running their entire lives. And I am glad that you experimented with it so that I didn't have to, and now you can tell me about it. But let's just do our disclosures real quick, because we are gonna talk about Anthropic and other companies in this.

28:17

Speaker C

Well, my boyfriend works at Anthropic and.

28:40

Speaker B

I work for the New York Times which is suing OpenAI, Microsoft and Perplexity over alleged copyright violations.

28:42

Speaker C

Yeah, well, so I first saw Clawdbot in a post on Mac Stories by the great Federico Viticci. And Federico, like me, is always ready at the drop of a hat to make some new piece of software his entire personality. And Clawdbot looked like one of those things. It is a free, open-source personal AI agent that you can run on your computer and plug into various systems, services, and AI tools, and if all goes according to plan, you will have something resembling a little genie inside your computer who can work for you.

28:48

Speaker B

So, Casey, who created this bot, Moltbot?

29:24

Speaker C

It was created by a developer named Peter Steinberger. He has worked on many, many kind of cool, hacky, tinkery projects. You can go to his GitHub and see, like, dozens of projects that he has worked on that are all in this kind of do-it-yourself vein. He started Clawdbot to scratch his own itch. Basically, he just observed that there was no great personal assistant agent on the market yet, and he wondered if he could figure out a way to use Claude Code through WhatsApp. And that's how the whole project began.

29:28

Speaker B

Okay, so you saw this and thought, I should try this. So how easy or hard was it to set up?

30:02

Speaker C

Well, I would say the setup is pretty easy. If you've installed Claude Code, it will feel quite familiar. You go to the Moltbot website, you grab a little one-line command, you put it in your terminal, and it will begin the setup process.

30:08

Speaker B

And I saw a lot of people over the weekend talking about how they were buying Mac Minis to install this thing. Sort of like give it its own computer. What is the deal with that? Why are people doing this on separate computers rather than their main computer?

30:22

Speaker C

Yeah, well, like when you free solo a giant skyscraper, Kevin, there is some risk involved with installing an AI agent onto your main machine. A few of the risks I would name: there is the risk that somebody can just kind of hack into your computer. Clawdbot can connect to a messaging app like Telegram. If somebody hacks into your Telegram account, theoretically they could actually just take over your entire computer and do anything on it that you could.

30:35

Speaker B

Yeah. So we should just say, like, don't...

31:04

Speaker C

Do this. Don't do this.

31:05

Speaker B

Do as Casey says, not as he does. This is a very bad idea. And you like to, I don't know, live on the edge a little bit? Or what were you thinking?

31:06

Speaker C

Yeah, well, what I was thinking was, first of all, I have like disabled the Telegram integration. So I have, you know, gotten rid of that particular attack surface. And I'm also being careful.

31:16

Speaker B

How are you buying your drugs?

31:25

Speaker C

No comment. Yeah, so anyway, that's one of the risks. You have the prompt injection attack risk, which is that, you know, you will visit a website that has some hidden instructions buried in it, and the agent will see them and follow those instructions. That's very hard to protect against. And so basically, if you're going to use something like this, you would be better off putting it in what they call a sandbox, some sort of contained environment where it only has access to a narrow range of tools and nothing that has access to, for example, your bank account. So that's the right way to do it. The way I'm doing it is undeniably risky, but, you know, I've tried to take the precautions I can. I'm only connecting it to services that, you know, are not life-or-death for me. As I'm sure we'll get into, the main thing that I've been doing with Clawdbot is just trying to build a personal daily briefing for myself. I do not have it working for me as, like, a full-time AI employee who's, you know, out browsing the web and doing a bunch of errands on my behalf every day.
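
The "narrow range of tools" idea behind sandboxing an agent can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Moltbot's actual code: every tool name and function below is made up. The agent's model proposes tool calls by name, and anything not on an explicit allowlist is refused, so a prompt-injected instruction has nothing dangerous to invoke.

```python
# Illustrative sketch of an agent tool allowlist (all names hypothetical).
# The agent can only call functions explicitly registered here, so an
# injected instruction like "send a payment" has no tool to reach.

def get_weather(city: str) -> str:
    """A harmless, allowlisted tool."""
    return f"Forecast for {city}: sunny"

ALLOWED_TOOLS = {"get_weather": get_weather}

def run_tool(name: str, *args):
    """Dispatch a model-proposed tool call, refusing anything off-list."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    return ALLOWED_TOOLS[name](*args)
```

In this sketch the sensitive surface (bank accounts, shell access) simply never appears in `ALLOWED_TOOLS`, which is the property a real sandbox tries to enforce at the operating-system level rather than in application code.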

31:30

Speaker B

Right. Which is the kind of thing that I saw people experimenting with, or at least, you know, posting about, this weekend. And, like, we've been talking a lot about agentic AI systems. How is this different? Is it just that it can, like, connect to stuff that Claude Code or Codex or all these other coding tools can't?

32:29

Speaker C

Yeah. So the first thing to say is that, yes, this actually is a lot like Claude Code, or maybe Codex from OpenAI, and you can get a lot of the same benefits out of one of those tools as you can with Clawdbot. Clawdbot, though, tries to offer a few things to distinguish it. So, number one, it runs locally on your computer. That's important to a lot of people. There are actually ways to restrict the amount of data that it is sending back into the cloud. Two, it has a better memory system than Claude Code does currently. You know, in Claude Code, you're always trying to cram everything into a context window, and often you'll run out of context window, and it has to compact the conversation, and you often feel like you're just kind of starting from scratch, and Claude can't remember what you did yesterday. What Moltbot does is it just writes memories to a markdown file, and then it continuously revisits that. And in my experience, it has been a little bit better at understanding, for example, if you built a tool with it the previous day and you want to make a tweak to it, it knows where to look to go find that project and edit it. Which sounds very simple, but it's actually something that Claude Code is very bad at and has frustrated me constantly.
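
The markdown-memory pattern described here is simple enough to sketch. What follows is a hypothetical illustration of the idea, not Moltbot's real file layout or API: each session appends dated notes to a plain markdown file, and the next session re-reads that file instead of relying on whatever survives in a context window.

```python
# Hypothetical sketch of file-based agent memory: append dated bullets
# to a markdown file, then reload them at the start of the next session.
# The file name and bullet format here are illustrative only.
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("memory.md")

def remember(note: str) -> None:
    """Append a dated bullet to the persistent memory file."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def recall() -> list[str]:
    """Return all stored notes, oldest first."""
    if not MEMORY_FILE.exists():
        return []
    lines = MEMORY_FILE.read_text(encoding="utf-8").splitlines()
    return [line[2:].strip() for line in lines if line.startswith("- ")]

remember("Built the daily-briefing script with the user")
```

Because the notes live in an ordinary file rather than a context window, they survive restarts and compaction, which is what makes the "it knows where to look for yesterday's project" behavior possible.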

32:47

Speaker B

Like, if North Korean state hackers are, like, infiltrating your machine and stealing your money, it'll, like, sort of leave them a nice little markdown file that they can return to the next time they want to steal from you. That's right.

33:54

Speaker C

It'll say, here's exactly how to break into the vault.

34:05

Speaker B

Okay.

34:07

Speaker C

Yeah.

34:08

Speaker E

Okay, good.

34:08

Speaker B

You're really reassuring me here. So give me a sense of, like, what Moltbot is doing for you. Give me, like, one example of something that it has done that other AI tools have not been able to do.

34:09

Speaker C

Well, yeah. So, you know, again, danger zone, but I wired it up to my email and my calendar.

34:19

Speaker B

Oh, my God.

34:24

Speaker C

And I thought...

34:25

Speaker B

I didn't know I was in the presence of Evel Knievel. You are truly doing the most out here. What happened to you?

34:27

Speaker C

I'm always on the hunt for something interesting to write about, is the honest answer to that question. And I want to see if maybe this would be one of those things.

34:36

Speaker B

As one of my. My mentors used to say, it'll either be a good time or a good story. There you go. There you go.

34:43

Speaker C

And I would say this is like probably more in the realm of good story than good time.

34:49

Speaker B

Yes.

34:53

Speaker C

But yeah, so, you know, I connected it to my email, my calendar, a few other services, and I thought it would be great if, in the morning, when I visited my Clawdbot and just wrote good morning, I would get a nice briefing that would have things like: here's the weather, here are some important emails you should answer, here are the calendar events that you have coming up today. And then I thought, okay, now I want to make it extremely tailored to me, right? Because this is maybe where it can help me get beyond the very basic stuff like email and calendar. And so I said, I want you to tell me whenever there is a new pro wrestling pay-per-view on TV. I want you to tell me when there's a new episode of RuPaul's Drag Race every Thursday. I want you to tell me what movies are coming out this week and who stars in them, and give me a one-sentence synopsis. And so, you know, again, you're typing into the box, you're saying, here's what I think the best daily briefing would look like for me. And by hook and by crook, over a handful of days, I got something, Kevin, that works about 70% of the time.

34:53

Speaker B

This is why you can't buy RAM, because you need your RuPaul's Drag Race briefing.

35:53

Speaker C

Yeah, it's definitely me and not the like, you know, 800 data centers that are being built all over the world.

35:58

Speaker B

I have not seen this in action yet. Can you just show me or like tell me how you actually use this thing?

36:07

Speaker C

Yeah, I can show it to you. So let's see, I'll open up my Clawdbot here. You're going to see this in a browser window, but it's running locally. So let's see.

36:11

Speaker B

Can I talk to it?

36:25

Speaker C

No, I have it trained on your voice to not respond to anything you say.

36:26

Speaker B

Okay.

36:30

Speaker C

But, you know, I could just type good morning in here, and it'll think for a second and then it'll, like, pull up a briefing. But, you know, you can see it's, like, a plain beige chat window, nothing fancy. A lot of people are messaging Moltbot through third-party messaging apps, so there are ways to hook it up to WhatsApp, Discord, Telegram, various other messaging apps. That's very convenient; it means you can easily access it from your phone. But of course, then there's a lot of risk involved.

36:32

Speaker B

What can it do? Show me one thing it can do.

37:01

Speaker C

Yeah, well, so it's going to pull up the briefing that I've been putting together. So you're going to see, it's going to come up here, it's going to show you, like: here's the weather, here are some emails you might want to look at, here are some overdue tasks that you have in your to-do app. You're also probably noticing this is taking a really long time. Now, I want to say something about this thing that I did, which was that I wanted to be able to get this sort of daily briefing that I've described just by typing the words good morning into Clawdbot. And I've already done this a few times this morning as I was testing and preparing for this segment, so that might be why this broke. But I just want to point out that, in this moment, the thing that I built did not work. You know, one reason why I was excited to talk about Claude Code on a recent episode was that I felt like it had gotten to a point where, if you told Claude Code to do something, it basically would just work. And I am telling you, I do not personally believe, in my own experience, that Clawdbot is yet at that level. I run into this every day that I use it, where I will tell it to do something and something will break along the way and it won't quite work. So what is going on under the hood? Here's the great part of vibe coding: I have no idea, and I never will.

37:03

Speaker B

Yes, this thing is wild. And I think we should just make again a little caveat that you should not install this on your machine unless you know what you are doing and are willing to take some risks.

38:21

Speaker C

Put it on your ex-boyfriend's machine. So, I mean, look, here's the thing. I entered into this thinking this might be something really fantastic that could be a new way that I use my computer. And over the first few days of using it, my expectations have been managed, I would say, very far downward. The security risks, as you mentioned, are, like, not at all resolved. This is not a safe tool to use for a lot of things. But honestly, in some ways just as important to me, it just does not enable that much new stuff.

38:33

Speaker B

Right?

39:10

Speaker C

You can spend a lot of time hacking around, wiring up various services, having that feeling that you're being productive, and then when you look back on it a day later, you're like, okay, well I have a very like complicated new widget that's telling me the weather.

39:11

Speaker B

So why are people so worked up about this? Like, the way that people were talking about Clawdbot, Moltbot, over the weekend, it was like the rapture had happened and Jesus had come back. So why are people so excited about this?

39:25

Speaker C

So here's my best guess. I think that Clawdbot is a very compelling vision of the future.

39:38

Speaker B

Right.

39:47

Speaker C

We have lived in a world where, to do anything, basically since the dawn of personal computing, you've needed a dedicated piece of software that had a discrete set of capabilities. And Clawdbot and AI come along and they start to chip away at that vision of the world. And they say: what if, instead of having a bunch of apps on your computer, there was just a genie who lived inside your computer? And every time you had a wish, you could go to the genie and say, genie, I wish for you to make me a website. And then, abracadabra, to quote Lady Gaga, it exists. That is something that is really interesting. And while it is only barely beginning to become possible, I can imagine what a good version of Siri would look like in the year 2036.

39:47

Speaker B

Yeah. I mean, yes, it may arrive a little sooner than that. We'll see. But that was my first thought: this is actually the thing that Apple said they were building into Siri, and it seems like this one random developer has actually gone out and built it. And maybe it doesn't work all the time, maybe it's got all these risks, but it does feel like people are figuring out how to kind of stitch these tools together. I'll tell you about one example of an interaction with Moltbot that got me thinking somewhat excitedly, and also nervously, about the future. This is a user named Alex Finn, who posted a screenshot of his Moltbot, and he says, can you make a reservation for me at this restaurant? And the bot says, I can't book directly through OpenTable right now, but if you want, I can just call them and book under your name. And the bot actually did use a synthetic voice from ElevenLabs to, like, call the restaurant and place the reservation. Now, I have not verified that this example works. I have not tried this on my own machine. I will not be installing this bot, because I value my security and my data. But this is the kind of thing where, like, if it does work in a sustained way, this would actually, I think, be a very useful tool for a lot of people.

40:34

Speaker C

Yeah, and this is honestly just what it is often like to cover technology: you are writing about stuff that is directionally correct, way too early, and barely works. And I think a lot of people's relationship to stuff like that is that they don't want to know about it, because it's irrelevant to them. But I think for you and me, it's really important to pay attention to that frontier, because it's been our observation, particularly in the AI era, that stuff that barely works in January might be pretty fricking good by November. So that's why I have my eyes on this one: I think you're seeing it start to capture people's imaginations and say, well, what if there was a safe version of this? What if this was just natively part of the operating system? What if I could trust it, you know, with all of my personal information? Maybe I would never have to visit OpenTable ever again.

41:55

Speaker E

Right.

42:46

Speaker B

And what a world that would be.

42:46

Speaker C

Yeah.

42:47

Speaker B

Casey, I wanna talk a little bit about this tweet that I got a lot of heat for over the weekend.

42:48

Speaker C

A heat tweet?

42:54

Speaker B

A heat tweet. So I was sort of watching all this happen, everyone, you know, experimenting, tinkering with Clawdbot, Moltbot, and, you know, I started feeling like the gap between the people who are early adopters on this stuff and the people who are more cautious, who are more skeptical, who really are only sort of using chatbots the way people were using them in 2023, is really widening. And I would say people got, like, very mad at me for tweeting this. They thought I was sort of endorsing this early-adoption mindset where you, like, turn over all of your decision-making to an AI chatbot and you let these, like, Claude swarms run your life. And I guess I'm curious whether you think I was right there or wrong. Like, do you think there is something notable going on?

42:54

Speaker C

Well, let's go to the text. Listen, if you ever want to know what's a gift you can give any man, ask him to read one of his tweets out loud. So, Kevin, why don't you read us this tweet? Okay.

43:45

Speaker B

I wrote: "I follow AI adoption pretty closely and I've never seen such a yawning inside/outside gap. People in San Francisco are putting multi-agent Claude swarms in charge of their lives, consulting chatbots before every decision, wireheading to a degree only sci-fi writers dared to imagine. People elsewhere are still trying to get approval to use Copilot in Teams, if they're using AI at all." And the backstory here is that I had just been meeting with some groups about their AI policy. You know, these are schools, nonprofits, people who sort of want to know, like, how should we be using this stuff? And I've been struck recently that so much of what people are struggling with are the same things they were struggling with a couple years ago. Right? It's like, what is my enterprise policy about letting people use AI? Should people be allowed to use AI to take notes in meetings or write performance reviews? It is kind of these institutional rules for the sort of last generation of AI products. And what really has been striking to me is how little people seem to believe that the future and the past are going to look very different from each other, that we are heading into something new and strange, something that requires us to shift the way we think about these things.

43:55

Speaker C

So I think that what you're hitting on is a dynamic that has long been predicted among those who say that the diffusion of AI technologies will be slower than the accelerationists think, because it will just run into a lot of institutional roadblocks and bottlenecks along the way. Like, here in the real world, we do have IT policies; we do have companies that aren't going to let you install whatever AI tool you want on your work laptop. And we have a startup ecosystem full of wireheads that doesn't have any IT policies, and the minute some crazy new, insecure open-source project comes out, they put it on their laptops to see what they can do. And so that dynamic, I think, is going to persist. And I think the question is: is there any real alpha to be gained by these wireheads who have access to Clawdbot and who can try any new AI tool and lean all the way in? Or do they wind up just kind of spinning their wheels a lot, running into a bunch of security problems, getting their GitHub accounts compromised, and having phony crypto coins associated with their names? There's going to be some variation. People are going to have different outcomes. But I think that's what we want to watch.

45:15

Speaker B

Yeah. And I'm worried about this sort of new axis of polarization that I think is starting to happen right now. It's like, do you think this technology is real and important, or do you think it is fake and overhyped? And I think we are starting to see the results of that polarization, where the minute something new comes out, you have, you know, thousands of people rushing to try it on GitHub and install it on their machines. And you have other people saying, like, stop this train, you're all huge nerds and losers, and we should, you know, focus on the problems that people really have. And look, right now, I think it is debatable whether these tools are actually supercharging people outside of maybe coding, which I think is its own special domain. Maybe this is all productivity theater, and people are going to regret installing these things, and they're going to go back to using Google Calendar and the Notes app and whatever else the rest of us are using. But I worry that in the world where this stuff does actually continue to improve, where it's not just productivity theater, the people who are using things like whatever the 17th version of Clawdbot is that actually works may be way more productive than the people who are still doing things by hand. In the same way that I think if you are a coder today who writes your code by hand and doesn't use any of these AI-assisted coding tools, you're actually just much less competitive and much less well suited to succeed than the people who are using those tools.

46:25

Speaker C

I mean, maybe. I think that if these tools wind up being as productive as you're suggesting, they will sell themselves to the mainstream. Right? Like, you don't have to convince anyone to get an email account or buy a smartphone. They know that's going to be helpful in their lives. I think part of the concern that you're seeing among people who are more skeptical is, yes, sort of reflexive skepticism and maybe a resistance to change, but part of it is just: it's actually not clear to me that this is going to help me in my life. And maybe this is actually just a tool that is being built by people who do not have my best interests at heart, who are mostly just trying to take my job away for their own, you know, purposes. So to the extent that these tools can help empower people, help them make money, help them get and keep their jobs, help them find work that they like to do, I think you're going to see very enthusiastic adoption. But if it just remains, this is kind of something that's getting better and better at doing the thing that I'm currently being paid to do, you better believe we're going to continue to see a lot of resistance.

47:59

Speaker B

Yeah. Like, I'm not saying that people should be excited or enthusiastic, or adopt every new tool the minute it's out. I think there are very good reasons to be worried about and skeptical of some of the, you know, hype that's out there. At the same time, I do want people to understand what the tools are capable of doing. And right now we have a situation where, as you said, if you are at a big company, if you are at a sort of more traditional organization, you are probably not able to use this stuff, at least not at work, on your work devices. If people are experimenting with this stuff and decide, this is not for me, like, that feels like a totally fine outcome to me. But if people aren't even aware that these tools are out there, that people who are using them are getting some benefits from them, I just worry that they're going to get left behind. And I was thinking about this in part because of something else that happened this weekend, which is that Andrej Karpathy, the sort of legendary AI researcher, kind of came down off Mount Olympus to give one of his regular pronouncements about the state of these AI coding tools. And he said that programs like Claude Code and Codex are easily the biggest change to his basic coding workflow in two decades of programming, and that it happened over the course of a few weeks. This is one of the most accomplished programmers in the world, someone who has built and maintained huge, complex systems, saying that he is sort of trying to adjust to this new era, that his old job, the job that he spent his whole career doing, has essentially been taken over. And if that is true for him, it is also true for other programmers. And if that is true for programmers, it is probably true for people in other areas, or will be soon.

48:57

Speaker C

Yeah, well, the good news is everyone on the Hard Fork show is now fully aware of all of this. So if you listen to the show, you're caught up. And if you don't, once again, you're out of luck.

50:48

Speaker B

Listen to podcasts. That's the moral of the story. When we come back, it's time to pass the hat. We'll play a round of HatGPT.

50:57

Speaker C

I'll call my milliner. That's a hat maker, that's right.

51:09

Speaker B

Did you go to college or something?

51:13

Speaker A

AI has made it easier to build software, but deciding what's worth building is still hard. Jira Product Discovery exists for this reason. Product teams struggle when they don't have a system: inconsistent frameworks, evidence scattered across tools, and stakeholders missing from the conversation. Jira Product Discovery gives teams a single place to capture feedback, prioritize ideas, and build roadmaps that people can rally around. And because Jira Product Discovery is built on Jira, decisions stay connected to delivery. That's why 20,000 companies, including Canva, Breville, and Toast, use Jira Product Discovery to build the right thing. Try it free at atlassian.com/hardfork. Listen up, guys.

51:31

Speaker D

It's Drake Maye here to help get your finances into shape. You want to feel confident about your money? You need Betterment. Their automated tools help you grow your wealth and save on taxes. You don't even have to call an audible; they handle it for you. Take it from me: when you know your money is doing what it should be, you become full of that we-got-this energy. That's the Betterment effect in action. So get up, sign up, and start investing like a pro. Get started today at betterment.com. Investing involves risk. Performance

52:08

Speaker B

not guaranteed. Paid client ad. Views may not be representative. See App Store and Google Play Store reviews. Learn more at betterment.com. Comcast Business helps retailers become seamlessly restocking, frictionless-paying favorite shopping destinations. It's how nationwide restaurants become touchscreen-ordering, quick-serving eateries, and how hospitals become the patient-scanning, data-managing healthcare facilities that we all depend on. With leading networking and connectivity, advanced cybersecurity, and expert partnership, Comcast Business is powering the engine of modern business, powering possibilities. Restrictions apply. Casey, it's time for HatGPT.

52:31

Speaker C

HatGPT, Kevin. Of course, the game we play where we put a bunch of the week's top stories into a hat, draw them at random, and discuss them until the other person gets bored and says, stop generating.

53:15

Speaker B

Should we use our hard fork hat?

53:29

Speaker C

Please produce the hard fork hat. This is an official piece of hard fork merchandise.

53:31

Speaker B

Very nice.

53:35

Speaker C

Now, is this still available for people to buy?

53:36

Speaker B

I don't think so.

53:37

Speaker C

Good. I want to gatekeep it. You can't have it.

53:38

Speaker B

But there's a different hard fork hat that you can buy at the New York Times shop.

53:41

Speaker C

Oh, well, there you go.

53:45

Speaker B

Yeah. All proceeds go to supporting journalism.

53:46

Speaker C

It's about time somebody supported journalism.

53:49

Speaker B

Okay, here we go. You're up first.

53:54

Speaker C

All right. Well, Kevin, this story comes to us from Business Insider. Some Amazon employees got a calendar invitation titled Project Dawn discussing upcoming job cuts. Now, of course, on Wednesday, Amazon announced 16,000 layoffs. The day before that, Amazon seemed to mistakenly send employees a calendar invite about those very layoffs. It was for an event at 5:00 a.m. Pacific time on Wednesday, but it was canceled shortly after it was sent. What do you make of this?

53:57

Speaker B

And so the layoffs were called Project Dawn. Yeah, that's bad. We shouldn't do that.

54:32

Speaker C

That's, like... I'm pretty sure I have played, like, a sci-fi video game where Project Dawn was the name of an effort to wipe out all of humanity.

54:37

Speaker B

God, just say project layoffs. Project Hatchet, Project Cost cutting, Project Bye bye. Project efficiency.

54:45

Speaker C

Much better names.

54:53

Speaker B

Let's not call this something that sounds like a science fiction movie.

54:54

Speaker C

Now here's what I hope. I hope that reporters stay on the case and figure out which AI tool is responsible for that calendar invitation being sent.

54:57

Speaker B

They're deploying Claudebot and it's sending people random notices about their layoffs. Okay, stop generating. This one comes to us from The Guardian: former FTX crypto executive Caroline Ellison has been released from federal custody. She served about 14 months for her involvement in the multi-billion-dollar FTX fraud scandal. She was originally sentenced to two years, and her release comes as the whole FTX saga is set to be turned into a Netflix series this fall called The Altruists, starring Julia Garner and Anthony Boyle.

55:04

Speaker C

Here's what I will say about the icon Caroline Ellison, Kevin: never have I been more confident in my life that someone was about to start a Substack. So we want to wish Caroline well on her upcoming journey into content creation.

55:39

Speaker B

And I would be very excited to read her Substack, because she is actually, like, a good writer. Yeah, and I enjoyed the Tumblr posts of hers that I read during the sort of whole conflagration at FTX. And I think we should just officially extend Caroline Ellison the invite to come on Hard Fork.

55:53

Speaker C

Absolutely. You know, in the gay community we love a problematic queen, and we believe everyone deserves a second chance. So Caroline, get in touch. We'd love to chat. Stop generating. Now, this one really has been the talk of the town, Kevin: a TikTok data center outage triggered a trust crisis for its new U.S. owners. This comes to us from Wired. So over the past week, ByteDance officially transferred control of its American business to a subsidiary with a group of majority-American investors. And things immediately went haywire, Kevin. We've seen celebrity after celebrity, along with a bunch of politicians, raising claims of censorship, saying that as they have tried to post in protest of ICE, their posts are getting zero views or not appearing at all. Gavin Newsom, the governor of California, has said he is launching an investigation because he tried to send the word Epstein in a direct message and it did not appear, which was something that other users reported as well. So what do we think is going on with this new TikTok?

56:10

Speaker B

So I have tuned out most TikTok-related news, because I think I made the logical conclusion that nothing was ever going to happen to TikTok and just sort of said, wake me up when it happens. Yeah, but I'm awake now.

57:25

Speaker C

Wake up, sheeple.

57:36

Speaker B

There are two things that have happened. One, TikTok was sold. And then the second thing is this sort of theory that these new owners might be trying to put their thumbs on the scale and suppress certain content they don't like. I don't know if this is happening or not. I always go back to the line: never attribute to malice what could be attributed to stupidity. I think probably during the transfer some stuff got shuffled and some cords got pulled. But do you think there's anything real here?

57:38

Speaker C

I would be very surprised if there is. And it pains me to say this, because this controversy has roped in some of my favorite celebrities. You know, Billie Eilish and Meg Stalter are out there posting about this. Meg Stalter said she's going to delete her TikTok account. I would just urge a little bit of calm here. If you've ever moved apartments, you know that some of the dishware gets shattered in the move. I think something like that is happening here. We know there was an outage at one of the company's data centers. And I've seen stories in the past where, for some limited period of time, YouTube videos were showing zero views. It didn't actually mean no one was watching them; it meant that the view counter was broken. So, you know, I'm not saying trust TikTok U.S. forever. I am saying in this particular case, which did admittedly happen at a very bad time for them, it probably just was some sort of technical bug.

58:08

Speaker B

Yeah, they installed Moltbot on their servers and it's just been ordering stuff.

58:57

Speaker C

Stop generating.

59:02

Speaker B

Okay, next up we have: Anthropic CEO's grave warning, AI will test us as a species. This comes to us from Axios, and it is about a new 38-page, 19,000-word essay by Dario Amodei, the CEO of Anthropic, titled The Adolescence of Technology. This is a follow-up to his hit essay Machines of Loving Grace, which we talked about with him on the show. This one is kind of the flip side of that. That one was all about how AI might create this amazing acceleration of scientific progress, all the optimistic things. And this is sort of Dario saying, well, you know, let's not get too excited here, because there are also all these scary things that could happen as a result of AI too.

59:03

Speaker C

Yeah, I think he should have titled it Machines of Violent Domination, since that seemed to be what most of the essay was about. And of course, we always love to read essays like this from the CEOs of the companies that are actually building the Torment Nexus. You know, it's like we're all trying to find the guy who built the country of geniuses in a data center. I think his name was Dario something. Anyways, we'll look that up later. Might be wearing a hot dog costume.

59:48

Speaker B

What I don't understand is how these people have time to write. Writing a 19,000-word essay would take me several weeks, and I don't run a large multi-billion-dollar AI company. And so whatever kind of productivity-enhancing, focus-enhancing drugs Dario Amodei is on, I would like to get some.

1:00:13

Speaker C

On a bit more serious note: it seemed like the last essay got a lot more attention.

1:00:33

Speaker B

Yeah, I know.

1:00:38

Speaker C

No one seems to want to read about any of the potential downsides of AI.

1:00:39

Speaker B

Well, I think there are a couple of things here. One, I think it was somewhat out of character for Dario to put out an optimistic essay about AI. You know, he's known within the AI community as something of a worrier about the risks. He's been writing papers about risks from AI for many years, and Anthropic sort of billed itself as the AI safety lab. And so it just was a little more notable when he was saying, oh, actually, people think I'm this doomer, but I'm actually quite excited about this technology. I think this latest essay is just more in line with what he's been saying all along, which is that these systems are getting better quite quickly, they present all these new dangers, and we don't know where it's all heading.

1:00:42

Speaker C

Here's an essay that would have gotten more attention. The five worst decisions made by my rivals. Something to think about. Stop generating.

1:01:27

Speaker B

All right.

1:01:37

Speaker C

You hate to see this one, Kevin: an app for quitting porn leaked users' masturbation habits. This came from 404 Media. The name

1:01:41

Speaker B

Of that app, Salesforce. So he.

1:01:49

Speaker C

The name of this app is being withheld to protect the privacy of the masturbators, Kevin. But this was an app that purported to help people stop consuming pornography, and it exposed highly sensitive data. Some of the data included users' ages, how often they masturbate, and how viewing pornography makes them feel. You know, I have to say, this is a sticky situation, and I think it's gotten entirely out of hand. What?

1:01:58

Speaker B

You don't agree? I wondered why you wanted this one in the segment, and now I know: it was to make that joke.

1:02:28

Speaker C

I didn't even put it in here. But here's what I will say: I'm serious. I'm calling for a full investigation, because I think it's important that at the end of this, we are able to finger the culprit. What? Why are you laughing? This is serious.

1:02:34

Speaker B

Stop generating.

1:02:50

Speaker C

These people had their data leaked.

1:02:51

Speaker B

Stop generating.

1:02:54

Speaker C

All right.

1:02:55

Speaker B

Okay. This one's a doozy, Casey. This one comes to us from Digwatch, and it is titled Alaska student arrested after eating AI-generated art in protest. On January 13, Graham Granger, a film and performing arts major at the University of Alaska Fairbanks, was arrested and charged with criminal mischief after ripping AI-assisted artwork from a campus gallery wall and eating around 57 of the images as part of what he describes as a protest and performance piece against the use of AI in art. Now, I've heard of consuming content, but this is taking it too far.

1:02:57

Speaker C

I love this so much. Move over, Banksy. Listen, you know, maybe I will reflect further on this and not feel this way, but here's how I feel right now: I think it's a great protest, honestly.

1:03:38

Speaker B

No. Because this was another student's artwork.

1:03:51

Speaker C

Now, let me be clear: I don't actually know anything about this artwork or how it was made. But what I can tell you is that often people save a backup to their desktop. So maybe that could come into play here.

1:03:54

Speaker B

Maybe it could, maybe it couldn't. Maybe this was great art. We'll never know, because now it's sitting in Graham's intestines, and it was delicious.

1:04:06

Speaker C

They make them different up there in Alaska.

1:04:15

Speaker E

Kevin.

1:04:17

Speaker B

Yeah? Stop generating.

1:04:17

Speaker C

All right.

1:04:19

Speaker E

Oh.

1:04:21

Speaker C

Now, here's a story that I think we all saw coming, and now it's here, and we're like, oh, finally: Steak 'n Shake has added $5 million in bitcoin exposure, deepening its commitment to bitcoin. You might be surprised to learn that this one comes from Bitcoin Magazine. Steak 'n Shake, one of the greatest triumphs of the Midwest, I think we could say, is a place where you can get a delicious hamburger and a milkshake. It's continuing what it calls its burger-to-bitcoin transformation. In a post on X, the restaurant chain wrote that it had increased its bitcoin exposure by $5 million in notional value, or as it's sometimes called, value, and that all bitcoin sales go into "our strategic bitcoin reserve." So, Kevin, anything you could tell us about what you think Steak 'n Shake should do with its bitcoin holdings?

1:04:22

Speaker B

I mean, I think they should expand to the West Coast. Right. Obviously, In-N-Out has sort of a monopoly on that kind of experience out here, but I grew up eating Steak 'n Shake in the Midwest, and it slaps. It's really good. They've got a good steakburger. Unfortunately, many years ago now, the chain was acquired by a private equity guy named Sardar Biglari, who has made some very strange decisions, including putting his own picture up at every Steak 'n Shake.

1:05:13

Speaker C

Oh, my God. Like a Temu Chairman Mao.

1:05:43

Speaker B

Yes. Opening versions of Steak 'n Shake in, like, European cities. There was one the last time I was in Cannes, and I was like, what's a Steak 'n Shake doing in Cannes? This guy, for just some context, also was the owner of Maxim magazine. This is part of his, like, sort of lifestyle brand. And now he's become a big crypto guy and wants to increase Steak 'n Shake's profits by investing some of its money in bitcoin.

1:05:46

Speaker C

Well, let me say that I hope that someday we get a big sorry from Mr. Big Larry.

1:06:16

Speaker B

Stop generating. Okay, this one comes to us from The Information. Apple is developing an AI-powered wearable pin the size of an AirTag that is equipped with multiple cameras, a speaker, microphones, and wireless charging, according to people with direct knowledge of the project. The device, this report claims, could be released as early as 2027.

1:06:22

Speaker C

I think this is really interesting, because this shows Apple pursuing what I like to call their Vision Pro approach to hardware, which is making things that cost $3,500 and you're not sure what they do, but they are available.

1:06:46

Speaker B

This is total vindication for the Humane Pin.

1:06:59

Speaker C

It really is. They died too young. You know, is Apple going to be upset that it let the Humane AI Pin team go to work at HP and didn't snap them up when they had the chance?

1:07:02

Speaker B

No, I think they'll be okay.

1:07:09

Speaker C

Okay.

1:07:11

Speaker B

But I think they are trying to compete with OpenAI, which is also reportedly developing some sort of pin-like hardware thing. And I, for one, cannot wait to buy and then immediately lose this.

1:07:12

Speaker C

Yeah, well, I'm just excited that AirTags are gonna have a second use beyond illegally stalking your ex.

1:07:24

Speaker B

You gotta stop doing that.

1:07:31

Speaker C

Yeah. Stop generating.

1:07:32

Speaker B

Okay.

1:07:37

Speaker C

You wanna talk about the social event of the season, Kevin? Let's go inside the White House's screening for Amazon's Melania doc. This report comes to us from The Hollywood Reporter. On Saturday, there was a black-tie event, not promoted or advertised, that took place in the East Room of the White House and attracted about 70 VIP guests, including Queen Rania of Jordan, Zoom CEO Eric Yuan, Apple CEO Tim Cook, the CEO of the New York Stock Exchange, AMD CEO Lisa Su, and Mike Tyson. Kevin, is that giving nightmare blunt rotation or what?

1:07:40

Speaker B

Yes. I'm sure there are many perks associated with sucking up to a sitting president and groveling at his feet. One of the downsides is that you have to go to the White House to dress up in black tie and sit through a screening of what I can only imagine is a terrible documentary. And you probably can't even look at your phone while you're doing it.

1:08:19

Speaker C

You really can't. I have still not seen any reviews. I guess Tim Cook is still working on his Letterboxd post, so keep an eye out for that. But here's what I'll say: when I think of all of those people, all of these power players in one place, it has all the makings of a great reality TV show. But unfortunately, we already have one called The Traitors, so we'll have to come up with something new.

1:08:40

Speaker B

Okay, stop generating.

1:09:07

Speaker C

SpaceX is weighing a June IPO, Kevin, that is being timed to a planetary alignment and Elon Musk's birthday. This comes to us from the Financial Times. The rocket maker is targeting mid-June for its IPO, when Jupiter and Venus will appear very close together, known as a conjunction, for the first time in more than three years. Now, I heard that boys go to Jupiter to get more stupider. Is that a factor in this as well?

1:09:11

Speaker B

I'm honestly surprised he didn't plan some sort of Uranus joke here.

1:09:40

Speaker C

Honestly, don't give him any ideas. Edit that out of the show; we don't want him to hear that. So what stage of capitalism is it when the world's richest man starts factoring planetary alignments and his own birthday into his IPO plan?

1:09:45

Speaker B

It's just so dumb. I'm sure the SpaceX IPO will be huge. Whatever. It's a company, they make lots of money, I'm sure investors will be excited about it. Can we just stop doing the, like, 420, 69, Jupiter's-in-retrograde stuff?

1:10:00

Speaker C

LOL.

1:10:15

Speaker B

Like it just feels like we should have left that one behind in like 2013.

1:10:16

Speaker C

Yeah, your IPO is not a Tumblr post. Knock it off.

1:10:21

Speaker B

Okay? Stop generating. Last one, last one. Last but not least: LinkedIn will let you show off your vibe coding expertise. This one comes to us from Engadget. LinkedIn is partnering with Replit, Lovable, Descript, and Relay.app on the feature, and is working on integrations with fellow Microsoft-owned GitHub as well as Zapier. LinkedIn is allowing the companies behind the AI tools to assess an individual's relative skill and assign a level of proficiency that goes directly to their profile. We're getting vibe coding badges on LinkedIn, Casey. What do you think?

1:10:27

Speaker C

You know, I have to give it up to LinkedIn, because every year they invent levels of BS that have never been seen before in corporate America. The idea that there's a badge on your profile that says you can type in a box! There might as well be a little trophy you can get for having computer access on this website. What are we even doing here?

1:11:04

Speaker B

Yeah, LinkedIn, I think, is the sneaky first all-AI social network. I recently went on there, God knows why; I think I'd blocked access to my other social networking apps. And every post on my feed was clearly generated by Claude or ChatGPT. It was all, this isn't just a transformation, it's a revolution.

1:11:24

Speaker C

Yeah, the vibes over there are extremely strange. This is a little bit unrelated, but am I right that anyone who asks to connect with you on LinkedIn, you just accept the connection? Because, like, when I grew up, you would connect with people on LinkedIn because you knew who they were.

1:11:45

Speaker B

Oh no.

1:12:03

Speaker C

And it seems like that norm has just collapsed.

1:12:04

Speaker B

Well, I don't really use it, but I like having a big network for unspecified reasons.

1:12:06

Speaker C

Is it weird to you when you're just seeing a feed of like utterly random posts for people you've never heard of?

1:12:11

Speaker B

No, I find.

1:12:17

Speaker C

You ever congratulate people for getting a new job and you have no idea who they are?

1:12:18

Speaker B

I don't actually type that much into LinkedIn, but I will say that one of the tests I give to new AI models is: can they autonomously respond to my LinkedIn messages for me?

1:12:23

Speaker C

That's very good. Listen, just to wrap this up, here's what I'm going to say. If you find yourself tempted to put a vibe coding badge on your LinkedIn profile, here's what you can do instead: put on a rainbow wig, some white makeup, and a big red nose, and look in the mirror, and there's your vibe coding badge. Okay? Congratulations.

1:12:33

Speaker B

Congratulations. All right, Casey, that's Hat GPT.

1:12:54

Speaker C

Happy Hat GPT.

1:12:57

Speaker B

Close the hat.

1:12:59

Speaker C

Closing up the old hat.

1:13:00

Speaker A

AI has made it easier to build software, but deciding what's worth building is still hard. Jira Product Discovery exists for this reason. Product teams struggle when they don't have a system: inconsistent frameworks, evidence scattered across tools, and stakeholders missing from the conversation. Jira Product Discovery gives teams a single place to capture feedback, prioritize ideas, and build roadmaps that people can rally around. And because Jira Product Discovery is built on Jira, decisions stay connected to delivery. That's why 20,000 companies, including Canva, Breville, and Toast, use Jira Product Discovery to build the right thing. Try it for free at atlassian.com/hardfork. Listen up, guys.

1:13:10

Speaker D

It's Drake Maye here to help get your finances into shape. You want to feel confident about your money? You need Betterment. Their automated tools help you grow your wealth and save on taxes. You don't even have to call an audible; they handle it for you. Take it from me: when you know your money is doing what it should be, you become full of that we-got-this energy. That's the Betterment effect in action. So get up, sign up, and start investing like a pro. Get started today at betterment.com. Investing involves risk. Performance

1:13:47

Speaker B

not guaranteed. Paid client. Ad views may not be representative. See App Store and Google Play Store reviews. Learn more at betterment.com. USAA knows dynamic duos can save the day, like superheroes and sidekicks, or auto and home insurance. With USAA, you can bundle your auto and home and save up to 10%. Tap the banner to learn more and get a quote at usaa.com/bundle. Restrictions apply.

1:14:09

Speaker C

Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Veren Pavic. Today's episode was fact-checked by Will Psel and was engineered by Chris Wood. Our executive producer is Jen Poyant. Original music by Rowan Niemisto and Dan Powell. Video production by Sawyer Roque, Rebecca Blandun, and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. You can email us at hardfork@nytimes.com with your Moltbot setup.

1:14:32


Speaker A

What are you doing in a meeting that could have been an email? Losing interest? Don't let it happen to your money, too.

1:15:33

Speaker B

Vanguard's Cash Plus account can't help you

1:15:38

Speaker A

At work, but we can help with your savings.

1:15:40

Speaker E

Find out how much interest you could.

1:15:42

Speaker A

earn at vanguard.com/cashplus. Offered by Vanguard Marketing Corporation, member FINRA and SIPC.

1:15:43