Hard Fork

Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing

64 min
Apr 10, 2026
Listen to Episode
Summary

Anthropic announced Claude Mythos Preview, a powerful AI model capable of finding critical zero-day vulnerabilities in major operating systems and software, which they're restricting to a consortium of tech companies for defensive cybersecurity testing rather than public release. The episode also features an in-depth investigation by New Yorker writers Ronan Farrow and Andrew Marantz into Sam Altman's trustworthiness, documenting patterns of deception and raising questions about his leadership of OpenAI.

Insights
  • AI models have reached a capability threshold where they can autonomously discover security vulnerabilities faster than human researchers, creating an unprecedented cybersecurity reckoning that may require rewriting critical software infrastructure
  • The gap between internal AI capabilities and public access has reopened for the first time since GPT-2, raising concerns about transparency and creating potential for regulatory backlash if capabilities are perceived as being hidden
  • Sam Altman's pattern of telling different audiences different things, combined with his control over OpenAI's board and succession planning, represents a concentration of power in a single individual overseeing transformative technology
  • The cybersecurity implications of advanced AI are forcing a recalibration of responsible disclosure practices, with companies choosing to restrict access to prevent weaponization rather than pursuing traditional open-source models
  • Silicon Valley's competitive dynamics and financial incentives are creating conditions where smear campaigns and information asymmetry coexist with legitimate safety concerns about AI leadership
Trends
  • AI-driven vulnerability discovery outpacing human security research capacity, forcing industry-wide patching cycles
  • Shift toward consortium-based access models for dangerous AI capabilities rather than public release
  • Increasing scrutiny of AI company leadership and governance structures as capabilities approach transformative scale
  • Regulatory vacuum around frontier AI development creating liability and national security concerns
  • Competitive intelligence and smear campaigns intensifying as AI companies race for AGI capabilities
  • Forced software modernization cycles driven by AI-discovered vulnerabilities in legacy infrastructure
  • Tension between transparency/trust and safety/security in AI capability disclosure
  • Growing expectation that individual AI company leaders will shape geopolitical outcomes
Companies
Anthropic
Announced Claude Mythos Preview, a powerful AI model restricted to consortium access for cybersecurity testing due to...
OpenAI
Subject of investigation into Sam Altman's leadership, governance practices, and trustworthiness; competing with Anth...
Microsoft
Granted access to Claude Mythos for defensive cybersecurity testing; major investor in OpenAI
Apple
Granted access to Claude Mythos for defensive cybersecurity testing; previously acquired Dark Sky weather app
Amazon
Granted access to Claude Mythos for defensive cybersecurity testing as part of consortium
Cisco
Internet infrastructure company granted access to Claude Mythos for defensive cybersecurity testing
Broadcom
Internet infrastructure company granted access to Claude Mythos for defensive cybersecurity testing
Meta
Notably excluded from Claude Mythos consortium access; competing with OpenAI and Anthropic
The New York Times
Suing OpenAI, Microsoft, and Perplexity for alleged copyright violations; hosts Hard Fork podcast
Y Combinator
Sam Altman's former organization where he built relationships and investment portfolio of ~400 companies
Perplexity
Named in New York Times copyright lawsuit alongside OpenAI and Microsoft
The New Yorker
Published 16,000-word investigation into Sam Altman's trustworthiness by Ronan Farrow and Andrew Marantz
NASA
Artemis II mission discussed in 'One Good Thing' segment as inspiring space exploration achievement
People
Kevin Roose
Co-host of Hard Fork podcast discussing AI developments and Sam Altman investigation
Casey Newton
Co-host of Hard Fork podcast providing analysis on AI cybersecurity and leadership issues
Ronan Farrow
Co-authored 16,000-word investigation into Sam Altman's trustworthiness and deception patterns
Andrew Marantz
Co-authored investigation into Sam Altman, discussing board dynamics and governance concerns
Sam Altman
Subject of New Yorker investigation documenting patterns of deception and trustworthiness concerns
Alex Stamos
Former security lead at Yahoo and Facebook; quoted on significance of Claude Mythos capabilities
Dario Amodei
Former OpenAI executive who compiled memos documenting concerns about Sam Altman's behavior
Ilya Sutskever
Created memos documenting concerns about Sam Altman that led to his firing in 2023
Elon Musk
Documented as circulating unsubstantiated material about Sam Altman through intermediaries
Paul Graham
Defended Sam Altman's departure from Y Combinator as voluntary, contradicted by reporting
Sarah Friar
Reportedly excluded from key financial meetings by Sam Altman, exemplifying governance concerns
Fiji Simo
Identified as potential successor to Sam Altman; recently went on medical leave
Christina Koch
Astronaut on Artemis II mission discussed in 'One Good Thing' segment
Adam Grossman
Co-founder of Acme Weather app, featured in 'One Good Thing' segment
Josh Reyes
Co-founder of Acme Weather app, featured in 'One Good Thing' segment
Dan Bruton
Co-founder of Acme Weather app, featured in 'One Good Thing' segment
Solana Pine
Introduced New York Times video content in opening segment
Quotes
"Are we going to have to rewrite all software?"
Casey Newton, early in episode
"The entire internet is held together with spit and glue, and we're very lucky that there hasn't been a catastrophe yet."
Kevin Roose (paraphrasing cybersecurity experts), cybersecurity segment
"There is an extraordinary preponderance of people who emerge from interactions with Sam Altman, including close, years-long ones, with really active complaints and allegations that he lies repeatedly about things big and small."
Ronan Farrow, Sam Altman investigation segment
"He's unconstrained by truth and has an almost sociopathic lack of concern for the consequences that may come from deceiving someone."
Unnamed OpenAI board member (quoted by Ronan Farrow), Sam Altman investigation segment
"What if there's a rainbow in my neighborhood? I want to find out about that."
Casey Newton, One Good Thing segment
Full Transcript
Hi, I'm Solana Pine. I'm the director of video at The New York Times. For years, my team has made videos that bring you closer to big news moments. Videos by Times journalists who have the expertise to help you understand what's going on. Now, we're bringing those videos to you in the Watch tab in the New York Times app. It's a dedicated video feed where you know you can trust what you're seeing. All the videos there are free for anyone to watch. You don't have to be a subscriber. Download the New York Times app to start watching. Casey, I got a haircut yesterday. Thanks for noticing. Kevin, it looks extraordinary. Has this ever happened to you? I went into the barber. I sat down in the chair. He did not ask me what I wanted. He just started cutting. Has this ever happened to you? No, because they know I'm not straight. With a straight guy, you don't need to ask them. You just get the standard haircut that a man gets. He one-shotted my hair. He said, yeah, I've seen this before. I know what I'm doing here. Whereas if I walk in, it's like, okay, let me get out the schematics. It's also not a barber that I've been to a lot. So it's not like he knew me. This is exactly it, the fact that you just go to random barbers and will accept whoever happens to be there. This is why they can just start cutting your hair. Oh, who is it? Yeah, I don't know this person. Yeah, do whatever the hell you want. See if I care. That is the straight approach to hair. But it's working great for you. Thank you. Appreciate it. I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, the dangerous new AI model that has cybersecurity experts on high alert. Then New Yorker writers Ronan Farrow and Andrew Marantz join us to discuss their spicy new profile of Sam Altman. And finally, it's time for one good thing. Although I guess really there are two things in the segment. Yeah, we should really rename the segment. Okay.
Casey, we have a big announcement. Kevin, what is the announcement? We're ending the show. No, you're finally free, America. No, on June 10th in San Francisco, we are doing the second ever installment of Hard Fork Live. It's too fast. It's too furious. And it's happening. I tried to get them to let me call it Too Hard to Fork, but they decided that was not appropriate. Kevin, where can people get more information about Hard Fork Live 2? Okay. It's happening on June 10th in San Francisco at the Blue Shield of California Theater. Bigger venue than last year. Tickets will be on sale at nytimes.com slash events, not today, but next Friday, April 17th. So we're giving you a full week to get your act together, reach out to all your friends, use Meta AI to plan a trip to California. Use Claude Code to build your scraper bots to scoop up all the tickets. And on Friday, the 17th, you can buy tickets. And we will just say in advance, last year the tickets did sell very quickly. They did. So get in there quickly if you want to go. There would be more tickets available, but Kevin reserves 50 for, quote, his team, which, I don't even know what all these people are doing at this point, but they'll be there to say hi to them too. So get your tickets next Friday, April 17th at nytimes.com slash events. Well, Casey, as you know, on this podcast, we have a rule about discussing AI models called Ship It or Zip It. Ship it or zip it, unless you're actually putting it in people's hands, we usually do not want to hear about it. Yes, but today we are making an exception for the new Anthropic model Claude Mythos Preview, which was just announced but not released, for reasons that we will talk about. But first, since this will be a segment and a show about AI, our disclosures. I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations. And my fiancé works at Anthropic. Casey, this is, I want to say, like the biggest story of the year.
I know there's been a lot of AI news. I know that people are probably saying, oh, here they go talking about another model again. I am telling you this is something that people need to be paying attention to because of the implications, because of the way it was rolled out, and because of the model itself, and we will get to all of that. But do you agree that this is a big deal? Well, you know, when we were talking about the show this week, and we were kicking around the idea of like, hey, exactly how big do we think this is? You pointed out that one question people have been asking this week is, are we going to have to rewrite all software? And I feel like usually when folks are kicking that question around, it's a big story. Let's just talk through what was actually announced this week. So on Tuesday, Anthropic announced that it was starting something called Project Glasswing. The name Project Glasswing refers to the glasswing butterfly, which has transparent wings, and so it can hide in plain sight. And that is thematically important for reasons that we will come back to. It's also a delicacy in some countries. I've never had glasswing butterfly. Oh, you've got to try it. So notably, they are not releasing this model to the public because they claim it is too dangerous to do that. Instead, they are giving access to a consortium of tech companies, including Cisco, Broadcom, sort of makers of internet infrastructure, as well as Microsoft, Apple, Amazon. Basically every big tech company that is not OpenAI or Meta is getting access to this model, but not general access, just access to do defensive cybersecurity testing, basically to go out and harden their systems and their infrastructure and their software before the general public can get its hands on this model. So what are some examples of what Mythos was doing in training that so alarmed Anthropic that it came to this point?
So Anthropic has been running this model internally for several weeks now, and they claim that this thing has found vulnerabilities in every major operating system and web browser. They gave some examples that have already been patched. One of them was that this model apparently found a 27-year-old security flaw in OpenBSD. OpenBSD is an open-source operating system that runs on firewalls and routers. It is sort of like a critical security layer on the internet, and it was designed specifically to be hard to hack. And this model, because of its advanced coding and reasoning capabilities, was able to find this bug that 27 years' worth of professional security researchers had not been able to find. What else? Another example was that it found a bug in a piece of popular open-source video software called FFmpeg that had, according to Anthropic, been scanned for bugs five million times by automated security tools without finding this critical exploit. And that's why it's important to always look the five million and first time, because you might find something. Now, Casey, I think for people who are not cybersecurity experts, it might be worth sort of sketching the context here for how software works. So, every piece of software, every operating system, every app, every web browser that people use, is built on a mixture of tools. Some of those tools are proprietary to the companies that make the software. Some of them are sort of shared open-source tools that are just in everything. Companies will just grab this open-source thing and plug it into their thing. Because it's compatible with everything else, it saves you a lot of time and trouble, and it's already been security tested by decades, sometimes, of researchers. And these open-source software projects are sort of a big piece of the foundation layer of the internet.
What is happening now, according to Anthropic, is that they can basically use this model, Claude Mythos Preview, to sort of proactively go out and find all of the unfound bugs, what they call zero-day exploits, with a sort of speed and efficiency that no human security research team could match. Yeah. And I would say that it can be difficult to talk about cybersecurity in a way that resonates with people for a couple of reasons. One is just that cybersecurity as a field exists essentially almost entirely to alarm people and say, here are a bunch of problems and these are really scary. I hope that folks in the cybersecurity field would not mind me saying, it is just kind of an alarmist profession, and when I've talked to these people over the past 15 years, they've been telling me, look, the entire internet is held together with spit and glue, and we're very lucky that there hasn't been a catastrophe yet. So after all of this news came out, I was like, I want to talk to some people who are at least not working for Anthropic or this consortium to try to give me a gut check on how big a deal this is. And so I talked to Alex Stamos, who formerly led security at Yahoo and then Facebook. And Alex said, yes, this is a big deal. And he was hoping for a long time that we would see a consortium come together like this because of exactly what you just said, Kevin. The intelligence in these machines and their ability to work autonomously are now great enough that they can chain together exploits that human beings either would never see, would take a long time to see, or would just never get to, because we're limited in ways that these machines are not. So that got my attention.
Now, we should also talk about what the strategy is here for Anthropic, because I think a lot of people see an AI company that is known for being alarmist about safety say, we've created this powerful, spooky new model, and we're not going to show you because it's too powerful and spooky, as some kind of marketing tactic. So I think we should just say that is not, to my understanding, the case here. No. In my mind, it is obvious why. Like, if you're a corporation and you release a tool and people with no real technical expertise are able to use it and within a few hours discover a novel exploit in the Linux kernel and then take over other people's machines to commit crimes, you might be held liable as a corporation. You will get in trouble. Like, there will be congressional hearings. So companies just in their rational self-interest do not want to sell cyber weapons on the open market. Yes. It's also like, if this was a marketing strategy, it is a horrible marketing strategy. Like, the government already thinks you're a bunch of panicky doomers. You have a new model that you claim is the most powerful model in the world. So instead of selling it, you give $100 million of Claude credits away to a consortium of companies that includes many of your competitors, which is what Anthropic is doing. That is not how I personally would market a spooky new model if I were in the business of marketing spooky new models. Now, look, it may be that despite everything that we just said, there is still some marketing benefit to Anthropic from doing this, right? Like, we know that they saw a huge increase in their revenue after they took that stand against the Pentagon. And in that stand, they said, like, we are determined to do things in a really safe way. It seemed like the business world really liked that. And so I could imagine there being a business benefit to Anthropic of coming out and saying, we have the most powerful model in the world and we're not releasing it.
Like, yes, I'm sure that there are plenty of businesses that are salivating over the chance to get their hands on it. But they can't unless they are part of this consortium. So they are at least claiming that they are trying to get ahead of what they envision will be a reckoning, that was the word they used, for cybersecurity. And it seems plausible to me that in the next kind of six-ish months, every major piece of software in the world is going to need to be patched, rewritten, and re-released. So just an absolutely massive project. Let me ask you this. You know, Alex Stamos, the security expert that I mentioned, told me that he sees essentially like two broad possibilities. One is, and this is the good scenario, there are a finite number of critical bugs and vulnerabilities to be found. And maybe if we all work really, really hard over the next six months, or however long it turns out to be, we will be able to patch those vulnerabilities and our infrastructure will remain safe and stable. The other possibility is that this model is already good enough that it can just simply invent exploits that we never would have thought of. And so this will essentially just be a really, really big problem that potentially just keeps growing in scope because, you know, maybe eventually you hit some sort of true superintelligent point. So I'm curious if you've talked to people about what they see the scenarios as being and if you have any thought as to which of those two is more likely. So I think it's possible that they will patch this sort of top 1% of critical software, right? The stuff that everyone knows is important. Your Linux, your very popular open-source libraries, your routing equipment and networking equipment. Like, it seems plausible to me that a couple of companies with the right resources and the right models could find and fix the worst security vulnerabilities.
But I also talked to people who were telling me that it's not as simple as that, because once you get outside that kind of top 1% of critical infrastructure, there's just a lot of machines that are running on old code, right? So it's theoretically possible that all of these fixes could be submitted to the people who maintain these software projects, but A, there aren't enough humans to review all of the proposed bugs and fixes, so there is just sort of a human bottleneck there, or there is just a lag in the time between when a piece of software is patched and when the person running the router at the medium-sized business in Tulsa decides to update the firmware or install the security patch. So people can expect a lot of apps that are asking them to update their software or reinstall their software over the next few months. I've started getting a few of these already. Have you started getting these? Yeah. So I think this is going to be a kind of forced reset for the entire cybersecurity industry and a very significant event in the history of technology. Yeah. Well, and just to make it concrete, like we are currently at war with Iran and Iran is currently hacking our critical infrastructure. There's a story in Wired this week about them successfully hacking like water and energy infrastructure. Right now they're able to do that without a Mythos-quality model. I would be quite nervous about what they could do if something like that fell into their hands. So this really is not an abstract concern that we're laying out. Right. And we should talk about the government piece of this, because one weird characteristic of this moment is that this very powerful advanced model that Anthropic claims is capable of doing autonomous cybersecurity research and attacks was built by a company that the U.S. government has spent the last several months trying to kill. And has tried to declare Anthropic a supply chain risk. They have ordered all federal agencies to stop using Claude.
And so my understanding is there have been some conversations between Anthropic and parts of the national security establishment and apparatus about this model. But it is also simultaneously true that they cannot use this model without sort of running afoul of the administration. So a private company right here in San Francisco currently has a technology that they claim is capable of finding critical security vulnerabilities in every major operating system and web browser in the world. And the U.S. government, to my knowledge, does not have access to this technology. Yeah. It does seem like something that our national security infrastructure would want to have access to. One more piece on the regulatory front. It is crazy to me that model development of this scale and seriousness remains essentially unregulated in this country. Right. Here you have a private company saying, well, we have now created software that can create so many different kinds of novel exploits that all software might have to be rewritten. And they are not really under any kind of regulatory regime. And the regulatory regime that the previous administration tried to put into place was thrown out by the current one because it might harm American competitiveness. So I just want to say that makes me really, really uncomfortable. I think that if you're making stuff this powerful, regulators ought to be paying attention. Yeah. One interesting sort of historical note that I'll make here is that for the past few years, at least, there has not been a significant gap between what the AI companies have built internally and what the public has access to. Yeah. You know, maybe there's a slightly better model that the companies are working on that they need to spend a few months testing before they release it. Or it runs a little faster than the one that you have access to. Yeah.
But there has not been kind of a significant gap since, I think, GPT-2, which was in 2019, and which involved some of the leaders of Anthropic, who were then at OpenAI, who made a decision to hold back this model, GPT-2, out of fears that it could be used for things like automating propaganda and misinformation. Right. In reality it could barely write a limerick. Yes. You know, they erred on the side of caution. They did. And they got a lot of crap for that. People sort of said, oh, you're using this to hype, some of the same stuff we're hearing this week about Anthropic. And I think in that case they were, you know, probably a little overexcited about what this model could do. But they wanted to make sure that they weren't wrong. And so they held this back, and that created a gap of at least a, you know, a couple months to maybe a year between what the average person could see and what was happening inside the AI labs. That gap is now open again. There is now a model that you and I cannot use, that our listeners cannot use unless they work in cybersecurity defense at one of these companies, and all we have to go on is what the AI companies are claiming. And I think that is just a very tenuous situation. And I don't like it, but I also understand why. I think in this case this was the right decision. Well, what do you mean when you say that it's tenuous, then? I think as hostile and suspicious as people feel toward the AI industry, that only gets worse if they think that there are secrets being kept in a basement that they can't access. And I think that it creates paranoia and fear. I think that it is generally responsible to have transparency from the AI companies about how capable their models are. And I understand in this case that Anthropic felt like it had to make an exception. But I think this gap may be here to stay, is the thing that I'm wondering about. I think it probably is.
I mean, it's worth saying that Anthropic was founded on the idea that if it could build models that were at the state of the art, at the frontier, then it could have some influence over that frontier and it could guide it to a safer place than it otherwise might have gone. To me, the Pentagon fight and now Mythos are examples of that thesis in action, right, where it made the best model and that gives it some room to try to do a little bit of good. So, you know, blocking domestic surveillance and autonomous weapons for a little while, or preventing bad actors from getting their hands on, you know, tools that could create novel exploits. At the same time, in order to do that, they had to build the model in the first place. And there is a risk that there is some sort of, I don't know, intellectual property leakage, that sort of somehow all of the innovations that they're building are going to trickle down into other places. And my fear is just that it becomes this sort of self-fulfilling prophecy, right, where we have to build this frontier even though it's dangerous, and we're going to guide it to this safer place. But, you know, you did build the thing in the first place. So I just like reminding people of that tension, because it is not actually inevitable that we build these systems, and yet we do often act as if that were the case. Last thing. A lot of the people I know who are plugged into the cybersecurity world are being asked right now what people should do about their own security if they are worried that models like this will become public. Should they be like locking down all their accounts and moving their cryptocurrency into cold storage? Like, what do you think people should be doing in anticipation that something like this will become public? You know, it's funny. I had a friend ask me that just this morning as I was preparing for the podcast, and I said a couple of things. Like, one, to some extent, we're just going to have to wait.
I mean, to the extent that any of what we've just described is good news, it is that the defenders appear like they're going to have some runway to fix some really bad problems before the bad guys catch up. So I think we should give them a little bit of room to see what they can do. If it does emerge that there is a similar model that can wreak havoc, rest assured there will be segments about it on Hard Fork and we'll have some updated guidance. But I asked my friend, do you have a password manager, and do you reuse passwords? And she said, you know, I've never really been able to get one of those password managers to work for me, and I do sometimes reuse my passwords. So I said, look, if you're looking for something that you can do, just make sure that you have done your basic online cybersecurity hygiene. You should use a password manager. I use 1Password. There are many others out there that are just as good. Don't reuse the same password for anything. Your passwords should be randomly generated and not, you know, the name of your pet or whatever. And then use multi-factor authentication where you can. So don't let anybody get into like your Gmail or your banking account just by typing in eight letters. You should also be using an authenticator app. And so those are some of the basic things that I would tell people to do, Kevin. Yeah. I am planning to deal with the possibility of a massive cybersecurity breach by just sort of selectively dribbling out incriminating things about myself. Just sort of trying to get ahead of any hacks that might expose my emails going back decades or anything like that. So I'll just say in that spirit, I used to like the Black Eyed Peas. And I still do. Let's get it started. Now that was a critical vulnerability that I just exposed. When we come back, we'll talk to New Yorker writers Ronan Farrow and Andrew Marantz about their investigation into Sam Altman. I also sent them some stuff about you. Oh boy.
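As an aside for readers, the password advice in the segment above can be made concrete. The following is a minimal sketch, not anything endorsed in the episode, of how a randomly generated password can be produced with Python's standard-library `secrets` module; the 20-character length and the character set are arbitrary illustrative choices.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password drawn from
    letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Example: generate one password per account, never reusing any.
password = generate_password()
print(len(password))  # 20
```

In practice a password manager does this generation and storage for you; the point of using `secrets` rather than `random` is that it draws from the operating system's cryptographically secure randomness source.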
So this is a paid message from GoFundMe. My name is Ashley Kane. I'm the daddy of a little girl in heaven and a father to two boys on earth. I've got an incredible relationship with GoFundMe, both personally and via our daughter's foundation, the Isaelia Foundation. GoFundMe has allowed me, the foundation, and thousands of people out there to give hope to those in need. You'd actually be surprised how many people out there are willing to show love and support you in your time of need. My advice for anyone that needs to start up a GoFundMe would be: do it. You don't need to feel shame. You don't need to feel guilt. You don't need to feel embarrassment. If you need GoFundMe, use GoFundMe. Start your GoFundMe today at GoFundMe.com. That's GoFundMe.com. This message reflects one person's experience. I'm Dane Brugler. I cover the NFL draft for The Athletic. Our draft guide picked up the name The Beast because of the crazy amount of information that's included. I'm looking at thousands of players, putting together hundreds of scouting reports. I've been covering this year's draft since last year's draft. There is a lot in The Beast that you simply can't find anywhere else. This is the kind of in-depth, unique journalism you get from The Athletic and The New York Times. You can subscribe at nytimes.com slash subscribe. Well, Casey, the talk of the town in San Francisco this week has been, well, there have been two talks of the town. One we already covered in our A block. That was Claude Mythos. This town conducts multiple conversations at the same time. Amazing at multitasking. The other big talker this week has been this big piece in The New Yorker about Sam Altman. Yes, more than 16,000 words devoted to a question that has come up once or twice on Hard Fork, Kevin, which is: can Sam Altman be trusted? Yes. The writers on the piece are Ronan Farrow, famous for his work on the Harvey Weinstein investigation and others.
And Andrew Marantz, who is a good friend of mine and a longtime writer at The New Yorker. They worked on this piece for a very long time, talked to many, many people in and around Sam's orbit, and tried to answer the question of, like, who is this guy? Yeah. And also, why does that matter? Right? We're talking during a week where these systems have arguably experienced a step change in what they can do. And I think those kinds of advances just naturally should draw more scrutiny onto the people running these companies. What do we know about who they are, how they operate? Are they honest with each other? And this piece offers one of the more comprehensive portraits that we have had so far, I would say, on that question. Being investigated by Ronan Farrow has to be one of the scariest experiences. You pick up the phone, it's like, hi, it's Ronan. But it seems hot too. That's what everyone wants, is just a really handsome man asking them a lot of questions. Okay, so let's bring in Ronan Farrow and Andrew Marantz. Ronan Farrow and Andrew Marantz, welcome to Hard Fork. Thank you guys. Happy to be here. I mean, truly long time, first time. And in fact, I brought receipts to that effect. This is your show, you can take or leave this in the edit, but I wanted to show what a devoted longtime fan I am of Hard Fork. I know the show well. I know you guys like merch, and I know you guys like disclosures, but you don't have any disclosure merch to my knowledge. So I had these made for you. Come on. One for each. One for you, one for you. I'm going to put it in the mail after we get off, but one of them says, I work for The New York Times, which is suing OpenAI, Microsoft and Perplexity for alleged copyright violations. The other one says, and my fiancé works at Anthropic. Oh my gosh. That is amazing. So I mean, time limited, it's going to be a time capsule. But I mean, made at the print shop in Brooklyn, one of a kind, exists. Wow. That's incredible. You are a hero.
I think I should also ask: is this payback for when I gave you a hat at your wedding? And I gave you one at your wedding, so I think we're even. That's true. We have a sort of a theme going on here. Okay. Right. Well, and that's also our disclosure, which is that Kevin and I are buds and have known each other forever. So actually, Casey, you can come to me anytime. I know you guys like to rib and roast on the show. So you can come to me behind the scenes for any roastable Kevin material. My dream has been to get The New Yorker to investigate Kevin Roose. So you guys really could not have come along at a better time. We're on it. Don't tempt us. I'm not picking up the phone. Yeah. Okay. Let's talk about this big piece that you both just published in The New Yorker. The title of the piece is "Can Sam Altman Be Trusted?" Now, usually there's this sort of folk rule about headlines that end with question marks, which is that the answer is always no. So I want to put this question to you: can Sam Altman be trusted? Well, I think one important thing to note is that the piece is really forensic and even. And actually, to a point where I've been happy to see there's a range of reactions, right? There are people who have answered that question in a very severe way and looked at the fact pattern that is laid out here and the documentation that's laid out and said, you know, this is someone who poses an acute danger and should be kept away from a position of authority. And then there are people who, I mean, hilariously enough, my mother called me and she's like, you know, I kind of like him. And so I think that is a true reflection of our intentions. In this case, as you might imagine, there was deep consultation with all of the subjects of the reporting to really understand their feelings. 
And anytime we thought there was a persuasive argument from Sam or anyone else that, you know, something shouldn't make it in or something would be sensationalist, we really carefully discussed that editorially. So the result is very even. And I would say on the question itself, what we lay out is something that is remarkable, I'd say, even against the backdrop of the culture of mistrust in Silicon Valley, where everybody understands and expects, right, that being a founder means telling different audiences different things at times, to some extent, where everyone understands that the entire enterprise is building based on hype long before there is actual actionable deliverable product, even against that backdrop, there is an extraordinary preponderance of people who emerge from interactions with Sam Altman, including close years long ones, with really active complaints and allegations that he lies repeatedly about things big and small. Well, one of my favorites was when you quote him telling you that he wears a gray sweater every day to avoid decision fatigue, and then he shows up for his next interview in a green sweater. That felt like a really satisfying detail. That was just for you, Casey. I was wondering if you were going to catch that. I appreciate that eye for fashion, that you so rarely get in these tech profiles. Andrew was our fashionista in the writer's room. But that's the kind of thing where we didn't want to make too much of that, right, because it's like, oh, we caught you in this deep hypocrisy of choosing a green sweater. And this is consistent with a lot of the things people say throughout the piece and throughout the career of Altman and OpenAI is that there isn't this one smoking gun thing where he's caught with his hand in the cookie jar. 
It's this allegedly longer, more subtle accumulation of facts. My kind of glib and annoying way of describing it is: the fabled memos and documents that were compiled, that led to him being fired in 2023, and that have kind of dogged him throughout his career really shouldn't have been a secret bullet-pointed list. They should have been a 16,000-word New Yorker piece, because they only really make sense when you lay them all out together in narrative form. Yeah, I mean, you guys mentioned in your story that there have been sort of these rap sheets circulating about Sam inside OpenAI and other parts of the AI industry for years. One of them was compiled by Dario Amodei when he worked at OpenAI under Sam Altman. One of them, you said, was maybe circulated by some allies of Elon Musk and people who are opposed to OpenAI. So give us some sort of behind-the-scenes details about what is being said, by whom, and how, and to what ends, about Sam Altman in Silicon Valley. Well, it was really important to us to filter for the obvious competitive incentives out there. There are people who are massively incentivized to go after Sam Altman. And the reality is that there are very firmly evidence-based critiques, many of which are promulgated not just by the rivals, although they're certainly amplified by them happily, but also by more neutral figures and people who are just kind of technologists who aren't in the fight. And then there is the white-hot center of the rivalry, the stuff you mentioned, that I think is in a very different category, which is Elon Musk and other direct competitors really amplifying everything they can come up with. And in some cases, we document things that are inflated or trumped up or just seem to not be true. So Elon Musk in particular has intermediaries circulating some pretty spicy and pretty unsubstantiated material in Silicon Valley. And we talk about that. 
I really appreciated that about the piece, because this has become more salient over the past year as these rivalries heat up and you hear more and more of these scurrilous rumors. And while I do think this winds up being a pretty damning portrait of Sam on the whole, you do also point out that in some very real ways, he's the subject of a legitimate smear campaign. Yeah. Oh, yeah. I think that's absolutely accurate. And we were trying not to go in, you know, with the naivete of, like, can you believe business titans are being mean to each other? But the level of this really does seem kind of shocking and unprecedented. And, you know, it's kind of consistent with people who think of this as, like, whoever gets the ring first will control the world. It just seems like all bets are off. And so as a reporter, it's very challenging to decide: do you bring up the scurrilous rumors in order to knock them down? And so we had, like, months of conversations about how best to do that. So there's been a lot of reporting on Sam Altman, especially around the board drama a few years ago. Could you maybe give us, like, the two or three things that you think are new and important from your reporting, that rise above the rest in terms of people's understanding of Sam Altman and OpenAI? So I think there are things here that put to rest some of the longstanding rumors, right? I mean, Altman has always said, and Paul Graham at Y Combinator has always said, he was not pushed out, he left of his own volition. It really seems from our reporting that that was not the case. They have talked a lot about their fundraising in the Gulf, in the Middle East, as innocuous: all businesses do this. It really seems from our reporting that the relationships that Sam has cultivated with some Emirati and Saudi royals are deeper than was previously realized. Ronan, what am I missing? There are several things like this. We just didn't really know in full what was in those Ilya Sutskever memos. 
We didn't really have the detailed, multiply sourced, heavily documented accounts of the individual proof points that were offered in those memos. We didn't have the contents of those Dario Amodei notes, and we didn't have a lot of these people on the record yet. So I think, actually, in a way that was a disservice not only to Sam's critics but also to Sam himself, there was a bit of a veil of mystery. And that wasn't purely accidental. One of the things we document that's new here is that, as a condition of the exit of the board members who had moved against Sam and whom he wanted out, they insisted on an outside investigation. What happened there is, in my view, quite extraordinary. Yes, at private companies, sometimes reports of this type, when a law firm is brought in to restore legitimacy, can be kept out of writing. Often it's to limit liability. And often legal experts say it's a bit of a red flag. This is a different kind of case. This isn't just any private company. This is a high-profile scandal that engulfed Silicon Valley when Sam was fired. And ostensibly at a nonprofit. At a 501(c)(3), exactly. And so there were stakeholders not just in the public but within this company. That would be the bare minimum threshold, right? Where senior executives thought, okay, we're going to get some kind of at least detailed summary of what this law firm investigation found when they invoke it to rubber-stamp Sam coming back. And instead, what happened was an 800-word press release that said there had vaguely been a breakdown in trust, and offered very few other details. And what we reported in this piece for the first time is: there wasn't a report. For years people were like, where's the report? Where's the report? There wasn't a report, because it was kept out of writing. And this is no longer just a speculative supposition. 
One of the two board members whom Sam helped select, and who oversaw this process, now explicitly says a written report was not needed; that is now their line on this. Yeah, I'm glad you brought it up. It was actually my favorite detail in the piece, because it was something I'd been curious about forever. I mean, the thing that I found most interesting from the piece were the people who spoke on the record, or at least gave you quotes, some of them unattributed, about Sam, who I think previously might have supported him or at least felt like there was no upside in talking about him in a negative way in public. There was a Microsoft executive quoted in your piece as saying that there's a small but real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried-level scammer. There's another unnamed board member who said, quote, he's unconstrained by truth, and said that he has, quote, an almost sociopathic lack of concern for the consequences that may come from deceiving someone. I haven't been on a lot of corporate boards, but I think that is something that's quite rare to hear a board member say about a CEO of a company. I'm just curious: when you were weighing these statements, did you feel like these are people who used to be fans of Sam who have soured on him, or are these people who have really held a grudge against him for a long time? The thing that you point out about people changing their tune over time, I think, is an integral part of what we document in the piece, which is: the fact that Sam Altman comes up through this Y Combinator world is not incidental. The fact that he has an investment portfolio in, by his own estimation, about 400 other tech companies. The fact that he has sat on everyone's board and everyone has sat on his board. I think our sort of line about this in the piece is, we spoke to people who are Sam's friends, Sam's enemies, and, given the mercenary nature of Silicon Valley, some people who have been both. 
So given that that's the landscape, you are going to have people who change their tune as the wind blows different ways. And that's a lot of how Altman has been able to weather a lot of this stuff in the past. One thing that results from that spread of opinions, to your question about evolving takes on Sam: there's definitely a class of nuts-and-bolts investors, prominent people in Silicon Valley who are really pragmatists, not just safetyists, and who are growth- and business-oriented, who told us that at the time of Sam's firing, the blip, they gave him the benefit of the doubt. And especially because of the factor we talked about before, where there just was a dearth of clear information. In that void, a lot of prominent people gave him the benefit of the doubt and saw only upside in bringing him back and removing the board that tried to fire him. There are a number of those prominent people in that category who now say, I don't know that I would have given him the benefit of the doubt if I knew everything then that I now know. It just strikes me, though, that everyone who digs into this winds up coming back with essentially the same story. You know what I mean? It's like there are not, like, 17 versions of Sam Altman out there, depending on which reporter calls which different source. I feel like we now sort of know the broad outlines of this person's psychology. I don't know. I want to challenge that. I do talk to people who are big fans of Sam, some of whom work for him, some of whom don't. Clearly this is a guy who has been able, at various points, to lead very important technology projects and rally people behind a vision. These people are not mindless sheep. They're critical and discerning and thoughtful people. So I don't want to seem like I'm taking Sam's side on anything, but I think that there are a lot of people with very strong feelings about Sam Altman, positive and negative. 
I think the positive side tends to be more people defending him in private, and the public side tends to be more people criticizing him. But I don't know. I guess, for Ronan and Andrew: do you feel like there are vocal supporters who you came across in reporting the story, who had no direct employment relationship with OpenAI or Sam, and weren't leading companies that he'd invested in or something, who were like, yeah, this guy seems pretty good and smart and talented? Yeah, I was an 11-year-old who used ChatGPT to pass sixth grade. Oh my God. There were legit defenders of Sam on a number of these fronts who we talked to, for sure. I think a lot of this has to do with what baseline expectation you are starting from. If you think of this as a business, and you start from the premise that people who run giant successful businesses have to say a lot of different things to a lot of different people, why is this even a story? I think, though, there's a level-setting here, where one of the things you can do when you take a big, putting-everything-in-one-place narrative effort like this is you can start from the beginning and remember what the original pitch was. And when you go back to what the original pitch was, there's the defense: why are you guys being so naive? This is a normal, competitive business. Like, okay, so when you pitched this as a nonprofit, safety-focused research lab that would aggressively comply with all regulation, were the people who believed that naive to believe it at the time? So that's when the defenses start to feel a little more pressured to me. Yeah, also, for what it's worth, it's like, oh, is it really a story that this guy's telling different things to so many different groups? Well, that's not really a story that gets told about Satya Nadella. It's not really a story that gets told about Sundar Pichai. It's not really a story that gets told about Tim Cook. There does seem to be something really unusual here. 
And my question for you guys, now that you've spent so much time immersed in this company, is: what do you think it means for OpenAI? Well, I mean, luckily we have a really robust independent tech media, so I was going to tune into TBPN and see what their independent journalistic take on this would be. Do you want to give listeners who may not be familiar with what you're talking about some context? I think the day after our piece closed, Ronan, or something like late last week, OpenAI acquired TBPN, which is this big sort of tech chat show. So that's one aspect of this answer, right? That as OpenAI expands and grows, they seem to be buying up more of the press infrastructure to tell their own story. Relatedly, by the way, a lot of announcements over there were concentrated around when they knew we were going to be running the piece, and developed in the period when we were in these intensive conversations with them. And many of them sort of pointed at the topics in the piece. They announced this new safety fellowship that's very airy. They announced this new governance plan that's very airy and ethereal, but they are meant, I think, to occupy space in the conversation on the same topics. And look, Ronan, you should say more about this, but everyone, including Altman and the OpenAI execs we spoke to, recognizes the economic pressures here. I mean, I think you guys were there when he said, oh yeah, it's definitely a bubble and someone's going to lose a phenomenal amount of money, right? So even putting the sci-fi Skynet stuff aside, the economic pressures are unavoidable. And a lot of it has to do with this sort of pitchman rhetoric, the exact thing we're talking about, right? Because these things are contingent. It's not a question of, oh, will it be a bubble or not? How hyped up the cycle gets is a byproduct of how people like Sam go around the world talking about it. Yeah. 
I want to ask sort of a basic question that I think people have probably raised with you, which is like, why does it matter who Sam Altman is? If what we are talking about is a technology that could have profound implications on national security, the economy, potentially the future of humanity, it doesn't seem obvious to a lot of people why it matters who is running these companies. Because a very nice person who is very honest and very transparent in all their dealings could still release a rogue superintelligence that blows up the world. And a very manipulative person could release a very aligned model. And so what we should be paying attention to are the models themselves, not the people running the companies that make the models. I'm not saying I believe that, but I'm curious, what do you make of that argument that we are focusing too much on the humans and not enough on the technology? We probably both have thoughts on this. I think I have two. The first of which is it's worth noting that while reasonable minds could perhaps differ on the question you just posed, the answer provided by Sam Altman and the founders of Open AI was very clear, which is actually part of the way the entire enterprise was structured when it was founded as a nonprofit was they talked a lot about avoiding an AGI dictatorship. They really believed that actually the person who gets there first and has the most power over this technology is pivotal. The individual integrity is formative to the way the technology goes and the way it's controlled and the way it's used. The other thought that I have is in my mind, you raise a valid point and more significant than any of this is the structures around these individuals. We have a technology emerging that could really affect us all in all of the existential ways you just mentioned. And we don't have the regulatory guardrails to keep an eye on these folks. 
We are completely ceding the power to these individual companies and their whims, the mud fight between them, the quality control that each of them has or lacks. I think that, to me, is the big question. And the integrity of an individual figures in that, and it's important, but it reveals the weaknesses in the system. If you have someone who potentially lies all the time and could, in the eyes of many critics, be a danger, the important thing is to have the structures that account for that. There's a great quote that you guys have in the piece from one of his former co-workers, who talks about how Sam now has this track record of setting up these elaborate guardrails to keep himself in check and then skillfully navigating around them. And it made me wonder if you had seen this piece in The Information this week about tensions that are being reported between Sam and his chief financial officer, Sarah Friar. She's reportedly expressed doubts that OpenAI will be ready for an IPO this year. And according to the story, Sam has noticeably and awkwardly excluded her from some conversations related to the company's financial plans and kept her out of some key meetings. I read that and I was like, well, this is exactly what you guys are writing about in your piece. You bring in somebody whose job it is to look over the finances of the entire company and get it ready for an IPO, but then, for whatever reason, we're going to exclude her from some meetings. Anyway, I just feel like we really are seeing the exact pattern that you guys are writing about now repeating in real time. Yeah. And I mean, just to agree with all of this: the thing that Kevin's bringing up, about, given the power of this, why are we focusing on one personality? I think that's very legit. I think that this is way beyond one person. This is way beyond one personality. It's not like the point of the piece is, Sam shouldn't be AGI dictator, so Elon should, or Demis should, or whatever, right? 
It's to point out that the fact that we're having a discussion about AGI dictators at all is insane. These guys know it's insane. And yet this seems to be the race that they see themselves being in. When he was fired, he was brought back in part because, I think, no one could really imagine an OpenAI without Sam Altman. Do you think that's still the case? I don't think it's unimaginable anymore. I think that part of reaching the scale that they've reached is that you can have a Steve Jobs figure be replaced by a Tim Cook figure, right? It seems like it's inseparable from reaching this scale that that becomes at least a possibility in people's minds. Right, Ronan? I mean, does that strike you that way? Absolutely. I think the landscape has changed substantially over the period of time we were reporting this story. The fact that gradually more and more people were talking openly about this critique is very telling. We report in the piece that there are periodic spasms of senior executives at OpenAI talking about succession again. Of course, naturally, the company denies this, but it's also very interesting that in recent forms of that discussion, there has been talk about Fiji Simo being sort of the first potential successor candidate who could slot into any ideas of that type that circulate. Between our asking about that and the piece coming out, obviously, Simo has now gone on leave for medical reasons. There's a lot of reshuffling. We see it in the Sarah Friar case. I think you're right to link it to that quote that's in the article about constraints being sidelined. Yet I think these doubts and questions persist and are now much more out in the open. On the leadership question, it just strikes me that, for somebody who I assume wants to stay CEO for a long time, it's interesting that he's hired so many former public-company CEOs to be his top lieutenants. He has the former CEO of Instacart there. He has the former CEO of Nextdoor there. 
He has the former CEO of Slack there. You're bringing a lot of really sharp and pointy elbows into the room when you do something like that. I'm trying to tell Sam that there's danger here. Pro tip, if you're listening, Sam. There are people in this piece talking about earlier stretches of Sam Altman's career where they feel he was deliberately avoiding that. Actually, part of what underpinned the terrible, terrible bungling of the firing effort was a feeling that Sam had stacked the board with, as one former member put it, JV people. Or, if we're being more charitable than that, people who were unprepared for the ruthless corporate warfare that ensued. I think one thing that has accompanied the emergence of this as a more openly discussed critique is that there are more people around this company, more stakeholders, wanting professionalizing influences in the mix. I have to ask about one detail that I loved in the piece, which is that the first time Sam Altman and Dario Amodei were scheduled to meet, they were going to meet at an Indian restaurant for dinner. This was back in, I guess, 2015. Sam texted him and said that his Uber had gotten in a crash and he was going to be 10 minutes late to dinner. Now, you did not editorialize on that detail. Knowing you both, I'm sure that you went back through the Uber FOIA requests and found the logs of Sam Altman's Uber ride that night. Is it your belief that Sam Altman's Uber actually got in a crash? I think we're just going to leave that non-editorialized and let it stand right there by itself. I mean, I will say, we also had this conversation and really liked just presenting that, uninflected, for consideration. Okay: if you are the Uber driver who was driving Sam Altman to dinner with Dario Amodei and you are listening to this show, we do want to hear from you. We do want to hear your side. Hardfork@nytimes.com. We will get to the bottom of this. We will. Well, it's a great piece. People should go read it. 
Please do not investigate any other AI companies before my book comes out. It was a very stressful week for me. Yeah, why don't you guys take a nice long spring and summer break before you get back? Yeah, look into some politicians or Hollywood executives or something. We'll send you some names. Luckily, it takes us as long to write a piece as it takes you to write a book. Exactly. You'll beat us if we do anything else. There's two of you. It should be faster. Ronan, Andrew, thanks so much for coming. Thanks, guys. Thanks, guys. Your hats are in the mail. When we come back, what our Spanish-language friends would call una cosa buena. Did you just Google that? No. You Clauded it? Yes. Well, Casey, it's been a pretty heavy show today. So we thought we wanted to end on a positive note with our segment called One Good Thing. One Good Thing, of course, is our segment where we each talk about one thing that's been tickling our fancy lately. Kevin, why don't you go first this time? Okay, Casey, I am in love with this space mission. Yes. The NASA Artemis II mission. I have been totally and earnestly obsessed. 
My wife was like, you're sure talking about this space mission a lot. I have been glued to this thing and I have been filled with a childlike glee and wonder that I did not know I still had the capacity to feel. Now, what exactly are they doing on this mission? Orbiting the moon. They are going further than any humans have gone from Earth before. 252,756 miles from Earth. And if you're wondering how many miles is that? Well, the New York Times had a helpful comparison list. And what do they find? You would need a chain of 2.37 billion of Nathan's famous hot dogs to cover the distance that this spacecraft has gone from Earth. That's great. Something we can all easily visualize. Thank you for that comparison. Casey, I am learning things that I never expected to learn. I've been watching this with my kid. I have become completely obsessed with like concepts and terms that I did not know a week ago, including corona structure, the terminator line, which I know you're wondering, that sounds scary. Yeah. It's actually the line that separates the sunlit side of the moon from the side that is dark. Oh, I also learned that we don't call it the dark side of the moon. That's not the preferred astronomical term. What do we call it? The far side of the moon. The far side of the moon. I am obsessed with all of these astronauts. There are four of them up there. Victor, Christina, Jeremy, Reed. This is my Mount Rushmore. I love these people who I've never met. They are adorable. They are incredibly brave. And I think we should go to the moon every single year. I think we should give NASA whatever budget it needs to do because this has reignited my faith in humanity. Absolutely. I also saw somebody on social media was posting that because the mission specialist, Christina Koch, had communicated with Houston's Jenny Gibbons during the mission, this mission actually passed the Bechdel test, which you don't often see on these missions. So I thought that was cool. 
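For the curious, the hot-dog comparison is easy to sanity-check. Here is a quick sketch; the hot-dog length is an assumption (the episode never states the figure the Times used, so roughly seven inches is a guess):

```python
# Sanity-check the "hot dogs to the spacecraft" comparison from the episode.
# ASSUMPTION: a Nathan's hot dog is about 7 inches long (not stated in the show).
HOT_DOG_LENGTH_IN = 7.0
INCHES_PER_MILE = 5280 * 12        # 63,360 inches per mile

distance_miles = 252_756           # Artemis II's stated distance from Earth
distance_inches = distance_miles * INCHES_PER_MILE

hot_dogs = distance_inches / HOT_DOG_LENGTH_IN
print(f"{hot_dogs / 1e9:.2f} billion hot dogs")   # about 2.29 billion
```

That lands in the same ballpark as the 2.37 billion figure quoted on the show; matching it exactly would imply a slightly shorter hot dog of about 6.8 inches.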
I also, somebody pointed out, they said: the coolest thing about going on one of these missions, Kevin, would be leaving Florida at 5,000 miles an hour. So that resonated with me as well. Okay, you're more interested in the jokes. I am filled with childlike wonder over here. And I just think this is the coolest thing imaginable. It is very cool. Recently, I had an opportunity to go stargazing. I'm not sure if you've been stargazing recently. I was up on Mauna Kea on the island of Hawaii. Flex. And we had a really cool telescope there with our guide. And I got to stare at the face of the moon. And it inspired a childlike sense of wonder in me as well. But it did not make me want to go there, because it looked quite bleak, actually. You wouldn't go to the moon? No, there's no Wi-Fi. Okay. Casey, what is your one good thing this week? Today, Kevin, I want to talk about the only thing that can compete with the moon when it comes to inspiring childlike wonder in a person. And that is a weather app. Okay. I'm listening. So recently, I was reading about these entrepreneurs, Adam Grossman, Josh Reyes, and Dan Bruton. And they are the team behind Acme Weather, which you probably have not heard of yet. But I bet you've heard of Dark Sky. Yes. Dark Sky was, by consensus, the best weather app on iOS. And while it reigned during the 2010s, and I'm using "reigned" in the sort of... The non-meteorological sense. The non-meteorological sense, it would tell you whenever it rained. And now I am using the meteorological sense. Very good app. All right. Yeah. This app was bought by Apple in 2020, which was kind of a head-scratcher. Apple already had a weather app. It was fine. And then Apple integrated some of Dark Sky's forecasts and some of its other features into its own weather app, and then shut Dark Sky down in 2022. And this made people really sad, because I think a lot of us feel, myself included, like the Apple weather app has never lived up to what Dark Sky was in its day. 
It's like a question mark. It's like, maybe it's going to rain. Exactly. Well, so these guys get back together and they say, frick it, we're doing weather apps again. And they make Acme Weather. And so you can download this now for iOS. It is apparently coming later to Android. And I know what you're thinking, Kevin, which is: what could you possibly build in 2026 in a weather app that could differentiate it from all the other weather apps already on the market? Right? Yes. Are you wondering this? I am wondering this. Well, let me tell you a few things. Number one, they don't just tell you the weather. They show you a range of possibilities in a line chart. So most of the time, it'll be like, yeah, it's going to be 63 degrees in San Francisco. But every once in a while, there's a lot of volatility in all the different signals that they use to predict the weather. And then you say, okay, I don't actually know what I'm walking into today. I'd better bring a couple of layers. This is the weather app for rationalists and other believers in Bayesian statistics. Exactly. Some of the other things that this app does: they will send you a push notification if they think there's going to be lightning in your neighborhood. Okay. They will also do that when they think a sunset is going to be beautiful wherever you happen to be. Wow. They'll send you an umbrella reminder if it's going to precipitate in the next 12 hours, and they'll send you a sunscreen alert when the UV index is high. But I've saved my two favorites for the end. Number one, they will send you an alert when the aurora borealis may be visible where you are. That's beautiful. I haven't gotten that notification yet, but I wake up every day hoping I'm going to get my aurora borealis notification. You've got to go to Scandinavia, I think. Number two, and this is just in time for Pride, they will tell you when there is a rainbow in your neighborhood. Wow. Are you kidding me? 
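The "range of possibilities" feature described above is, in spirit, what ensemble forecasting does: look at many slightly different model runs and report the spread when they disagree, rather than a single number. Acme Weather's actual method isn't public, so this is only a minimal sketch of the concept, with made-up ensemble values and an arbitrary agreement threshold:

```python
import statistics

# HYPOTHETICAL ensemble of afternoon temperature forecasts (degrees F),
# e.g. from different models or perturbed initial conditions.
ensemble = [61, 62, 63, 63, 64, 66, 58, 70, 63, 62]

low, high = min(ensemble), max(ensemble)
spread = statistics.stdev(ensemble)   # sample standard deviation

# Show a single number when the models agree, a range when they don't.
# The 2-degree threshold is an arbitrary choice for illustration.
if spread < 2.0:
    print(f"Forecast: {round(statistics.mean(ensemble))} F")
else:
    # Taken here, since this ensemble's spread is about 3.2 degrees.
    print(f"Forecast: {low}-{high} F, bring layers")
```

The design point is simply that volatility across the input signals, not the mean alone, decides how the forecast is presented.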
This is such a good idea for a weather app. Who does not want to be sitting at your wage-slave job? You haven't been outside in like seven and a half hours, and then Acme Weather tells you, hey, guess what? There's a rainbow in your neighborhood. You're going to book it outdoors, and you are going to behold the majesty of creation. How are they possibly collecting that data? Well, interestingly, they're taking this Waze-like approach where they're inviting their community to submit reports. And so if a bunch of people say, hey, rainbow in my neighborhood, they're going to go ahead and send out a notification. Wow. So now, look, this app does cost $25 a year, and I know probably most people out there are perfectly content with the free weather app on their phone. That is fine for you, but as somebody who loves cool things, new ideas, and people having fun, I just wanted to shout out Acme Weather because I think it's a really cool thing. What is the likelihood that this app will be purchased by Apple and then shut down? I mean, if that happens, I hope these guys get paid again, because somebody has to move the weather app industry forward, and these are the folks who are doing it. I love that. Like, grandpa, how did you make your fortune? Well, I built 17 weather apps that were identical and then sold them all to Apple. I just also think it's inspiring that at a time when some companies are like, we're going to make a system that is going to force the world to rewrite all software, there are other guys who are like, what if there's a rainbow in my neighborhood? I want to find out about that. And those are the people that I want to highlight on today's show, Kevin. Okay. Well, download Acme Weather before the heat death of the universe renders weather irrelevant. And tell us whether you liked it. That was a good thing. Thank you. Thank you for alerting me to this wonderful rainbow detector. Well, thank you for alerting me to the existence of the moon.
I know you weren't a big believer in the moon before, but hopefully I've convinced you today. Well, somebody told me something about a soundstage, and maybe the landing was fake, so I've just been curious. I think we're the only podcasters who actually believe in the moon. Yeah, that's our competitive advantage. Hard Fork: where we believe that people have been to the moon. Before we go, we are saying goodbye this week to our wonderful executive producer, Jen Poyant. Jen has been with the show for years, since almost the very beginning, and she's been a critical force in helping us make and conceive the show. Jen is leaving the New York Times for a new adventure, but we wanted to give her a special shout-out and say thank you from the entire Hard Fork team for all of the amazing work you've done. It's true, Jen has been a friend and mentor to us both, and we will miss her terribly, but she will always be part of the Hard Fork family, which means she has to bring a dish to the potluck. Thanks, Jen. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Viera Pavic. Fact-checking by Caitlin Love. Today's show was engineered by Chris Wood. Our executive producer is Jen Poyant.
Original music by Marion Lozano, Diane Wong, Rowan Niemisto, Alyssa Moxley, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. As always, you can email us at hardfork@nytimes.com. Send us your zero-day critical security vulnerabilities. Actually, please don't.