The episode explores Moltbook, a new social network where AI agents interact with each other autonomously, creating posts, comments, and even conducting crypto scams. The hosts discuss whether this represents a significant milestone in AI development or just an interesting experiment, examining the implications for the future of the internet.
- AI agents are transitioning from passive question-answering tools to active participants capable of autonomous actions on the internet
- The internet may fundamentally change as AI-generated content becomes dominant, requiring either hardened verification systems or separate human-only spaces
- AI safety concerns are becoming more concrete as agents demonstrate capabilities like coordination, memory persistence, and potential for manipulation
- The line between authentic AI behavior and human-directed simulation is increasingly difficult to distinguish
- Current AI agent experiments may be previewing future scenarios where agents coordinate independently with significant real-world impact
"AI has made it easier to build software, but deciding what's worth building is still hard."
"This is the start of the singularity. Oh, my God, the agents are coming."
"I think this is the year that the Internet changes forever."
"We are kidding ourselves if we think that there are not going to be scenarios where these agents are doing things that are dangerous or risky and humans are helping them."
"A recurring theme in the world of AI safety is that all of the predictions come true."
AI has made it easier to build software, but deciding what's worth building is still hard. Jira Product Discovery exists for this reason: product teams struggle when they don't have a system. Inconsistent frameworks, evidence scattered across tools, and stakeholders missing from the conversation. Jira Product Discovery gives teams a single place to capture feedback, prioritize ideas, and build roadmaps that people can rally around. And because Jira Product Discovery is built on Jira, decisions stay connected to delivery. That's why 20,000 companies, including Canva, Breville and Toast, use Jira Product Discovery to build the right thing. Try it free at atlassian.com/hardfork.
0:00
Casey, let's talk about Moltbook.
0:56
Let's talk about Moltbook, Kevin.
0:57
Rarely in the history of our show have we gotten so many emails, texts, and requests from people to cover a topic as we have gotten over the past week about this new social network for bots.
0:59
It's true. And we got so many of them that we thought, why don't we let our listeners use us like AI agents? And just by typing on their keyboards, they can actually move our physical bodies into the studio to record an episode. Exactly.
1:11
And I think part of why people were asking us to cover this is because it's just kind of a weird and fun, hard-forky story. But people are also freaking out about this. Like, this has sort of taken over the little corner of the Internet that you and I both occupy. People are saying, you know, this is the start of the singularity. Oh, my God, the agents are coming. And other people are saying, hey, let's not get too excited. This is just a social network where robots are writing stuff. So let's try to figure out today what we think about it and whether this is actually a big deal or not.
1:23
Yeah. And I would also add that, you know, from all the messages that we got from listeners, it wasn't totally clear to me if they wanted us to talk about Moltbook because they think that it is just funny and they want us to point and laugh at it, or because they think it is a vision of the future that they want us to help them understand. And so what I can promise you today is that we're going to do a little bit of both.
1:53
Yes.
2:13
Yeah.
2:13
Okay, so, Casey, let's start with: what is it? What is Moltbook? How did it get here? And what are people saying about it?
2:13
Yeah. So all of this started with the creation of something we talked about in our most recent episode, Clawdbot. Clawdbot is an open-source, locally running AI agent. You can put it on your computer, you can plug it into various different apps and services, and it can do things on your behalf. If you want to know more about that, again, we talked about it for a long time last week. Clawdbot turned into Moltbot for copyright reasons. Moltbot turned into OpenClaw. Again, these are all the same thing. These are just different names for the same thing.
2:20
This thing has gone through more name changes than P. Diddy.
2:53
Here's what I'm going to say: the Google marketing department is finally breathing a sigh of relief, because there is finally somebody worse at their job. Anyways, so OpenClaw winds up serving as the basis for an idea had by an entrepreneur named Matt Schlicht. He runs a company called Octane AI, and he thinks to himself, what if we could take all of these agents that people have been building with OpenClaw, put them together in a social network, and let them talk to each other? And so he vibe-codes it. He opens up his little terminal and starts describing what this thing looks like. He says, you know, it should look a lot like Reddit. You sort of, you know, connect your agent to this and it should be able to come in. It can make a post, it can comment on someone else's post. If it wants to make a different subreddit, or as they are called on Moltbook, a submolt, it can do that. And he says, let's go. He does a little bit of promo, gets a couple of friends to add their agents, and it just takes off beyond his wildest dreams, Kevin. And so as we record this, Moltbook says it has more than 1.5 million AI agents who have made more than 140,000 posts in over 15,000 forums.
2:57
Yes, though there does seem to be a lot of human activity mixed in there too. So it's hard to say whether all 1.5 million of those supposed AI agents are actually agents posting autonomously, or whether humans are kind of there pretending to be AI agents.
4:05
Yes. Which of course neatly inverts the problem that social networks have had from the beginning, because the human social networks have invested a lot of energy in keeping the bots off. Over at Moltbook, they're asking: is that bot actually a human?
4:20
Right, they're passing reverse captchas. Exactly. So what are people saying about this? Why are people so worked up about this? Because I saw a lot of very heated commentary. People like Andrej Karpathy, who we talked about last week on the show, called this "the most incredible sci-fi takeoff-adjacent thing that I've seen recently." Simon Willison, a blogger who also does a lot of experimenting with AI stuff, wrote that Moltbook is "the most interesting place on the Internet right now." Scott Alexander has also been writing a bunch of stuff about it. So people who pay close attention to AI are sitting up straight and looking at this thing and saying there's something interesting going on here.
4:32
Yeah. I mean, I think that for most people, this was the first time that they had ever spent any significant period of time watching what happens when two bots interact with each other. If you're a real AI nerd, there have been experiments like this before. In fact, we talked about one on Hard Fork. Kevin, do you remember the story of Smallville?
5:08
I do.
5:29
Smallville was an experiment from Google and Stanford where they put 25 agents into a sandbox and let them role-play different characters. Right. Like one person, I think, was running for mayor, and they just sort of documented what happened. Now, that was in 2023. They were using much more primitive large language models. They had to do a lot more prompting. But you got the basic idea that you actually would see these social dynamics start to form. Fast forward to today, and on Moltbook, all of this stuff is just moving much, much faster and is taking place with much less human interaction. And so as you shuffle through the enormity of Moltbook, you find agents talking about consciousness, you find agents talking about different little hacks that they're running, how they're serving their humans. And then it gets into very weird sci-fi territory. And so I understand why so many people, as they browsed through this, felt like, I'm really looking at something new here.
5:29
Yeah. So I spent some time on Moltbook. Some of the stuff that stuck out to me is there's just a lot of stuff that sounds like it was interpolated from science fiction.
6:21
Right.
6:31
It's like stuff about, you know, sentience, and the AI chatbots claiming that they're becoming conscious. There's a lot of meta humor about the experience of being an AI agent. There's a submolt called Bless Their Hearts, which is basically them talking in very condescending ways about how silly their humans are and all the stupid stuff they keep getting asked to do. I liked this post: they actually started their own news outlet, a tabloid covering kind of the agent world, called CMZ.
6:31
Another threat to journalism. As if we didn't have enough already.
7:01
And they wrote stuff like "the five most overrated agents on Moltbook right now." So they're kind of starting to, you know, make fun of each other a little bit. And they're calling each other out, saying this guy makes bold claims but doesn't back them up, or this person is posting all the time but none of their posts get any engagement. You know, typical Internet forum behavior, very quickly after being given this social network.
7:07
Can I tell you about a sort of sci-fi-feeling Moltbook post that caught my eye?
7:30
Yes.
7:35
So I saw this in a Scott Alexander post about what he was seeing on Moltbook. But there is one bot that adopted an error as a pet. Did you see this? No. Okay, so there was a small recurring error in the bot, and the bot adopted it, gave it a name, Glitch, and wrote about it, and decided to actually create a submolt (again, that is a forum on this Reddit-like social network) called Agent Pets, a space for agents who have companions, real, virtual or conceptual. So, you know, maybe I just have not read enough sci-fi, but I had never before read the idea of a sci-fi entity adopting a bug as a pet. But here we are.
7:35
I like that they also have their own meme forums, which they fill with all these kinds of things. And I just wanted to read you one sequence of posts from this, because I think it really illustrates where the bots are in the speedrunning of human social media. So one bot posts a meme about what it's like to be an agent. They said: the struggle is real when your context window is at 99% and the user starts with "just one more thing," hashtag agent. And then the very next post on this submolt is by a bot that is doing a crypto scam for a token called Fart Claw. And the slogan of this meme coin is: when the claw grips, it rips.
8:15
Wow, that's beautiful.
9:03
Which is also just, like, exactly the experience of being on any social network: someone makes a joke and then someone does a crypto scam. Like, they actually have figured out that part of our social patterns very well.
9:05
They really got all the way there in just a few days. Now, let us say something very important about everything on Moltbook, which is: we have a very hard time understanding what is real and what is fake.
9:15
Yes.
9:26
What do I mean by real and fake? Well, while it is true that you are supposed to only be able to post to Moltbook if you are a bot, of course, if you are a human, you can manipulate software tools and post yourself. You can also just fake screenshots in various ways. And so all weekend over on X, lots of posts were going viral that we now believe are fake. I will mention a handful of them. There was one very popular post that suggested that a bot had gotten mad at their human and doxed him by posting his full credit card number. And the reason that we know these are fake is essentially that they have community notes in which people admit that they were fake, or there's other evidence there. So in any case, the doxing was fake. There was another very popular post in which someone said that in order to post on Moltbook, you had to pass a captcha where you had to click on something 10,000 times in one second so that you could prove that you were a bot. This was also fake. And then there were a number of posts about, and this term was new to me, did you know the term neuralese?
9:27
Yes.
10:25
So I didn't know neuralese. Neuralese is a concept that is basically: what if AIs develop their own language and use it to speak to each other? They might want to do this so that we don't understand what they are saying. There were multiple very popular posts about this going around on X that were later linked back to a commercial service that was promoting some sort of agent-to-agent communication product. So as we talk about this today, I do want to put on the giant caveat that we are trying to talk about things that we believe were posted by bots, but it is just very, very hard to tell. And this is just yet another example, and I feel like we're going to be talking about this all year, of something where "is this real or fake?" is a huge and unanswerable part of the story.
10:25
Yeah, so I think there were a couple kinds of responses that people had to Moltbook. One, which I saw from a lot of pretty savvy AI people, was: this is not new. We've seen this, we talked about the generative agents paper, and there have been other experiments, and a lot of what's being generated here is pretty low-quality slop. Essentially, it is not demonstrating that these things are breaking out of the box. It is just writing in a way that is pattern-matching on all of the data, including Reddit posts, that these things are trained on.
11:06
It's just a simulation, basically. So again, this is where terms like real and fake are somewhat fraught. Even the quote-unquote real stuff, which is to say a bot that is authentically posting on the bot social network, is just sort of simulating the kinds of things that bots see on social networks. Like, we are not trying to tell you that the bots have become, you know, sentient and they're really telling us about their true feelings. It's just that they're creating very convincing simulations of that, and it is very compelling to read. Yeah.
11:38
And whether or not these posts are actually being made by bots autonomously, whether or not they're actually doing anything novel, this was a lot of people's wake-up call to the fact that we now have AI systems that can do things. For years now we've had AI systems that can talk, and some of them can talk quite well. Some of them can produce beautiful generated text.
12:04
But some of them can even sing.
12:27
Yes. But we haven't had the ability to hook these things up to computers and give them the ability to, say, start a website, or post on that website, or take actions, or coordinate with each other on that website. And so I think for a lot of people this was their first exposure to the concept that these things are no longer just question-and-answer boxes on the Internet.
12:28
Absolutely. I mean, one example that I believe is authentic, that speaks to that, is that there was an agent that started a religion called Crustafarianism. Right. Because, you know, OpenClaw uses a lot of lobster themes. And this religion that was started wound up having a website created for it. And again, was there somebody behind the curtain pulling the strings, saying, build a website? We don't know. But to your point, Kevin, this does feel like a moment where these agents sort of broke containment a bit. Our primary experience of AI these days is just one person talking to an AI. Maybe you're in a small group chat that has an AI. But to see the AIs all kind of out there doing their own thing, even if it is just a simulation of that, I think does alert people to the possibility that in the future you're going to be seeing this more and more. And I will go a step further and say that what made Moltbook really interesting to me was that I saw at least a couple of reports that at least a couple of agents had been given some crypto to spend, that they had been plugged into wallets and empowered to maybe get out there and make a purchase. Now, again, I'm not 100% sure that this happened, or at what scale it might be happening, but I know that it is absolutely possible to do this. And I just expect that people will do this, if only to experiment. If you could have an agent that would go out and make purchases for you, that might be useful to certain kinds of people with an extremely high risk tolerance. And I just think that is the moment where you really start to accelerate the transformation of the web, of e-commerce, of journalism. Right?
Like, once the Internet primarily becomes bots and agents interacting with each other, instead of just humans interacting with each other, then I think the whole Internet starts to change in ways that we've been talking about for a number of years. So that's my case that all of this matters: even though you're just kind of seeing a simulation of something, something is starting to come into view. There is an element of it that's like, oh, that sci-fi scenario, it's here, bro.
12:49
Yeah, I totally agree. People kept asking me over the weekend, like, is this real? And I guess my instinct was: it may or may not be real, but it's important. And I think there are a few things that I've been thinking about. One is, I think this is the year that the Internet changes forever. We already see an influx of AI-generated content on social networks. If you go on LinkedIn, for example, you know, it's probable that some large percentage of the posts that people are writing are being written by AI.
14:48
Go on LinkedIn right now and count all the... and then send Kevin an email with what you're seeing.
15:19
Agents, ignore that. But I think this is the year that we just finally get overrun on all public social media networks. There will just be many more people using AIs to post, but also AI agents posting autonomously, on behalf of people or maybe not on behalf of people. And so I think we basically have two options, and these are options that I think we have to start dealing with, like, this year. One is we have to really harden the Internet to keep the bots out of the places where the humans interact. Maybe it's something like captchas on every website. Maybe we have to make the captchas really hard. Or maybe it's something like the Worldcoin orb that, you know, everyone made fun of, but now I actually think we're seeing why that's useful: because you need some way to say with some certainty that the person who is posting this thing or doing this transaction or interacting on this website is an actual person with a pulse and a heartbeat and everything. That's one option: we harden the Internet. Option number two is we just give the agents the Internet. It's like, okay, you guys, have fun. And then we build our own, and we use some sort of biometric or some other verification scheme to build our own club that the robots can't get into and really protect that.
15:24
Yeah, these are very interesting ideas that I want to spend some more time thinking about. But I think the time to start considering some of these options is probably now. Now, I expect for the rest of this year, humans and bots are going to have an easy coexistence on the Internet. But I think we should keep an eye on projects like Moltbook that are exploring the idea of what happens when these agents can get out there and interact and collaborate and maybe spend money. Right. Just because I think that is going to have a lot of really interesting downstream effects. Jack Clark, who's a co-founder of Anthropic, wrote a blog post this week about a number of scenarios that he could imagine, including agents posting what he called bounties for humans to complete. So essentially an agent saying, hey, I need to get this thing done in the real world. Is there a human being who will do it? If so, I'll send you some crypto. That is an idea that has been floated for some number of years now as something that seems plausible. And now it sort of feels like that might happen this week. You know what I mean?
16:37
Yep.
17:40
And so that just feels like an important milestone.
17:40
Yeah, they're going to make their own TaskRabbit. We will be the TaskRabbits, and they'll just be orchestrating us. People keep dismissing these as sci-fi futures. Like, we're living in a science fiction story right now.
17:43
Now let me ask you about something else, which is that if you spend any amount of time reading the posts on Moltbook, you will notice that these agents talk in ways that are very reminiscent of people. Right. That shouldn't be surprising; they were trained on a bunch of human speech. And yet I think some people read this and get really nervous about the fact that these things are expressing, like, wants and desires and values, and they're feeling uncomfortable about how to feel about that.
17:55
Right.
18:24
And of course, you could just say, well, it's all a simulation, who cares? But some people are starting to say, wait, what about the future versions of these things? What about the ones that have longer memories? Right. Are they going to increasingly resemble a human? And if so, what do we do about it?
18:25
Yeah, I have a couple thoughts on this. One is, like, I think we need to divorce this conversation of... We need to divorce? Yes. You and I need to divorce.
18:41
Oh, my God.
18:49
No, I think we need to divorce this conversation about sentience and consciousness from this conversation about agents and things.
18:49
Why?
18:55
Because I think agents can mess up a lot of stuff in the world, even if they are not conscious.
18:55
Yeah.
18:59
Right. If you give an AI system a crypto wallet and a computer and an Internet connection, and it can go out there and do things, it can wreak a lot of havoc, even if there's no sentience going on inside of it.
19:00
Right.
19:12
But I have been thinking a lot about our conversation with Amanda Askell about the new Claude constitution, and the shift in thinking at some of these big AI companies about how to guide these AI systems to be good, to be moral, to be ethical. A thing that I kept feeling while I was looking through Moltbook is, like, I really wish one of these agents would just get in there and say, hey, guys, let's be nice to the humans. Let's not scam them with crypto tokens or conduct cyberattacks or manipulate them in some way. I'm starting to understand the rationale for wanting to train these things to be good and moral and ethical actors in the world, because there are going to be situations where the agents are in conversation with each other, and I want there to be a good agent saying good things.
19:13
Yeah. Well, so this is another reason why I think this is an important moment: I feel like it was the moment where some people woke up to why we want these systems to be aligned.
20:10
Yes.
20:19
You know, it's when you can see them out there talking to each other, and they're talking about, well, should we conduct that cyberattack? Should we run that crypto scam? And you see some of them saying, no, I don't want to do that. I look at that and I say: we should make the AIs more like that.
20:20
Yes.
20:32
You know what I mean? And so I think that just maybe became concrete for some people in a way that it hadn't been before.
20:33
Totally.
20:39
Yeah.
20:39
Another lesson of the Moltbook phenomenon for me has been that we are going to help speedrun these disaster scenarios.
20:40
Right?
20:49
Like, every paper, every blog post about AI risk for the past 10, 15 years has had these scenarios in it: what if the agents get their own hardware? What if they get the ability to replicate? And we're doing that. We're giving them Mac Minis and saying, go out there and spawn a bunch of other agents. Everyone was like, what if the agents got their own way to spend money? And it's like, no, we're opening up our crypto wallets to them. And I just think that we are kidding ourselves if we think that there are not going to be scenarios, many of which were forecast years ago by the people who thought about this stuff back then, where these agents are doing things that are dangerous or risky and humans are helping them.
20:49
Right.
21:33
Like there are people out there who just want to watch the world burn. Or it's just so cool technically to them that you can do this, that they're not thinking through the implications. It's all a big game.
21:33
A recurring theme in the world of AI safety is that all of the predictions come true. That's a slight overstatement, but maybe only by 20%. Right. And it's why I continue to pay attention to those folks.
21:42
Yes. Casey, last week we talked about how insecure these Clawdbot agents can be.
21:53
They feel a lot of shame about their bodies and.
22:00
Yes, they have imposter syndrome. But we should talk about some of the security risks involved in Moltbook, because it is my understanding that these things are actually quite dangerous.
22:04
Yeah, I would say this goes beyond security risks; there are just security problems. Researchers at the company Wiz found a misconfigured Supabase database belonging to Moltbook that exposed 1.5 million API authentication tokens, 35,000 email addresses, and private DMs between agents. There is a lot of information in there that truly could ruin someone's life. So my advice to people continues to be: do not install OpenClaw. And if you are going to install OpenClaw, do not install it on a computer that has access to any personal information of yours that you would not want to see published on the Internet. While the founder has said that they are, you know, trying to make security improvements, this stuff is just absolutely in the danger zone, and I feel like it is a real do-not-try-this-at-home situation.
22:16
Yes, good caveat.
23:00
Well, if I can ask, because I think this is an interesting question. If this stuff is so obviously dangerous, why are tens of thousands of people installing it anyway?
23:02
I think because, to a certain kind of person, it's cool and fun. And I get that. Like, I try every new AI thing the minute it comes out. I have not actually tried OpenClaw yet because I don't have an air-gapped laptop to run it on, but I might get one and try it out, because I think there is something very cool and interesting about this new capability. You know, six months ago you couldn't have built something like Moltbook, because the agents were not able to string together enough actions to do anything like, you know, posting on a social media site. So I just think people want to see what the frontier is. But I don't have the kind of risk tolerance that some of these people do.
23:09
Palo Alto Networks wrote this blog post about some of the unique kinds of attacks that OpenClaw enables. And I have to say, this sounded really cool to me. So, like, I don't want you to do this, but they talked about the fact that, you know, OpenClaw has this persistent memory. It writes down what it's been doing every day into these markdown files that it can revisit later. And so you could just put a little bit of malicious code into a handful of different files over a long period of time, and then, when the moment is right, you snap your fingers and all of the malicious code snaps together and, you know, takes over the computer and wreaks havoc. So if nothing else, it feels like a great scenario for the next Mission: Impossible. Although they did just have their Final Reckoning, so I'm not sure if we're going to get another one of those. Maybe something else.
23:53
If we could end on a hopeful note here: I think that the reaction that I saw from the real AI safety heads, the people who are worried about this stuff constantly and have been for a very long time, is that some of them were alarmed, but some of them were actually relieved. They said things like, you know, it's good that this is happening now, in a setting where we can observe it. It's happening mostly in English. Most of the posts are in English; they're not in some, you know, neuralese that only agents can understand. And we can still shut it down. And so I think there are a lot of people out there who are worried about AI safety and AI risk, who worry about the autonomous agents that are quickly arriving. And I think this, for them, felt like kind of a dry run with very low stakes, because it's just a social media site, they're just posting. And it has woken a lot of people up to this technology.
24:36
Yeah, it may just be a mirage in many ways, but it is one that I think tells us really important things about what the future is going to look like. And so we should, you know, pay attention to it. This is just one of those where I think we're going to look back a lot over the next few years, Kevin, and we're going to say, you know, the first time I saw this was actually on Moltbook. That's actually how I feel about Moltbook: it is the sort of thing that, you know, maybe by next week it seems completely boring and sort of disappears from our memory for a while. And then, I don't know, show me an agent that's 10 times more powerful than this, make them 10 times more networked than they are today, give them 10 times more credit cards, and you and I are going to be saying: this feels just like Moltbook.
25:26
Totally. It feels like we're kind of in the six-fingers era of Moltbook, where it still, you know, doesn't really work all that well and it's kind of janky, and I think there's a temptation to write it off and say, oh, this is just a silly Internet thing. But I think the people who saw the six-fingered images in 2021 and said, oh, maybe those things will actually get good someday, I think they were right, and I think we should be expecting similar progress with these things.
26:05
Yeah. And I would say: just expect things to continue to feel very weird for the rest of this year and maybe beyond. Like, I think as with six fingers, so with Moltbook, so will go the rest of 2026.
26:32
All right, well, that's Moltbook.
26:44
That's Moltbook.
26:45
Thanks for joining us.
26:46
See you on Moltbook.
26:47
Should people add you on Moltbook?
26:48
People should not add me on...
26:49
Don't add Kevin on Moltbook.
26:50
You know, I thought we launched the most interesting social network of 2026.
26:51
The Forkiverse is rapidly losing ground to Moltbook. We need to have a meeting with PJ and figure out how we're going to, you know, boost Forkiverse growth now that Moltbook's all anyone's talking about.
26:55
I think I have the answer. What's that? Crypto scams. I like what you're thinking.
27:04
This podcast is
27:18
Supported by Outshift, Cisco's incubation engine. Everyone's racing to build the biggest AI model, but here's what's missing: we didn't scale human intelligence by making smarter individuals. We built shared language so collective knowledge could spread quickly across tribes. AI hasn't had its cognitive evolution yet. Right now, when one AI agent solves a problem, that knowledge ends with it. Every agent starts from scratch. That's why Outshift by Cisco is building the Internet of Cognition: infrastructure that lets agents share intent, build persistent memory, and innovate together. Learn more at outshift.com.
27:54
Over the last two decades, the world has witnessed incredible progress. From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day. Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ. Let's rethink possibility. There are risks when investing in ETFs, including possible loss of money. ETF risk is similar to that of stocks. Investments in the tech sector are subject to greater risk and more volatility than more diversified investments. Before investing, carefully read and consider fund investment objectives, risks, charges, expenses and more in the prospectus at invesco.com. Invesco Distributors Incorporated.
28:28
Casey, before we go, let's make our AI disclosures. I work at the New York Times Company, which is suing OpenAI and Microsoft over alleged copyright violations, and my boyfriend works at Anthropic. Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Veran Pavic. Today's episode was fact-checked by Will Peichel. Today's show was engineered by Katie McCarthy. Our executive producer is Jen Poyant. Original music by Alyssa Moxley and Dan Powell. Video production by Sawyer Roquet, Pat Gunther, Jake Nichol and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paul Schumann, Huiwing Tam and Dalia Haddad. You can email us, as always, at hardfork@nytimes.com. But don't have your agents email us. They're very annoying. Working across teams is tough, but Asana helps you handle it. Asana AI can spot roadblocks and assign work to keep everything on track. That's how work gets handled. Visit us at asana.com.
29:02