Caveat

Section 702 survives for now.

42 min
Apr 23, 2026
Summary

Section 702 of the FISA Amendments Act received a 10-day emergency reauthorization after a failed House vote on a controversial 5-year extension. The episode also covers Anthropic's release of Mythos, a powerful AI vulnerability detection tool, and explores emerging legal and insurance challenges around AI liability and coverage gaps.

Insights
  • Congressional gridlock on Section 702 reflects deeper partisan divisions on surveillance vs. civil liberties, with majority support for reauthorization but procedural obstacles preventing passage
  • Mythos represents an 'irresistible nuisance' for government—too powerful to ignore for national security, creating leverage for Anthropic to rebuild relationships despite prior Pentagon disputes
  • AI liability frameworks like Tarasoff duty-to-warn are fundamentally misaligned with AI systems' lack of professional licensing, fiduciary relationships, and deterministic threat assessment capabilities
  • Insurance industry is rapidly excluding AI workloads from E&O coverage due to unpredictability, creating a market failure that may require federal backstop similar to flood insurance models
  • The gap between AI's capabilities and legal/regulatory frameworks is widening, forcing ad-hoc solutions (10-day extensions, limited tool rollouts, coverage exclusions) rather than coherent policy
Trends
  • Surveillance reauthorization becoming increasingly partisan, with libertarian Republicans and civil liberties Democrats blocking traditional national security consensus
  • AI safety concerns driving government engagement with AI companies despite prior conflicts, signaling national security prioritization over other policy disputes
  • Insurance industry treating AI as unquantifiable risk, leading to premium increases and explicit exclusions rather than coverage expansion
  • Liability frameworks designed for licensed professionals proving inadequate for general-purpose AI systems, creating legal uncertainty for deployers
  • Federal government considering backstop insurance models for critical but uninsurable technologies, following precedent from flood insurance
  • AI companies using strategic tool releases and lobbying to rebuild government relationships and influence policy outcomes
  • Threat detection capabilities of AI systems outpacing human ability to patch vulnerabilities, creating asymmetric security risks
  • Regulatory fragmentation across states and federal agencies creating compliance complexity for AI deployers and developers
  • Professional licensing frameworks emerging as potential regulatory solution for specialized AI applications (therapist chatbots, medical AI)
  • Chilling effects on AI interaction and user privacy becoming secondary concern to national security and liability risk management
Companies
Anthropic
Developed Mythos AI tool for code vulnerability detection; blacklisted by Pentagon, now rebuilding government relationships
OpenAI
Suspended user account after disturbing ChatGPT interactions without notifying authorities; user later carried out a mass shooting
NSA
Using Mythos internally despite DOD supply chain risk ban, highlighting national security prioritization over contracting disputes
Federal Reserve
Chair Jay Powell participated in banking sector meetings about Mythos AI risks and implications for financial system stability
Better Help
Referenced as example of AI-powered mental health platform that could face Tarasoff liability questions if claiming to provide mental health care
People
Dave Bittner
Co-host of Caveat podcast covering privacy, surveillance law, and policy
Ben Yellen
Provides legal analysis on Section 702, Tarasoff doctrine, AI liability, and insurance implications
Mike Johnson
Led failed effort to pass 5-year Section 702 reauthorization with disputed warrant requirement language
Susie Wiles
Met with Anthropic leadership to discuss Mythos tool and potential government collaboration
Jay Powell
Participated in banking sector meetings about Mythos AI risks despite tensions with administration
Brian Ballard
Hired by Anthropic in March for $130,000 to represent company interests with Trump administration
Quotes
"I think he was anticipating that people would be too lazy to do the research. And if he put language in there that kind of appeared to be a warrant requirement, enough members would be like, fine, whatever, that's a warrant requirement."
Ben Yellen~8:00
"It's kind of an irresistible nuisance. Right? And I think this Mythos preview model is irresistible."
Dave Bittner~18:00
"Is this ban really a ban if it's being used by our spy agency?"
Ben Yellen~19:00
"The threat is real of revealing bugs faster than we can fix them."
Ben Yellen~25:00
"Federal backstop. It's going to be a federal backstop."
Dave Bittner~42:00
Full Transcript
You're listening to the Cyber Wire Network, powered by N2K. Hello, everyone, and welcome to Caveat, N2K Cyber Wire's privacy surveillance law and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yellen, from the University of Maryland's Center for Cyber Health and Hazard Strategies. Hey there, Ben. Hello, Dave. On today's show, Ben has updates on two major stories, Section 702's reauthorization and Anthropic's rollout of Mythos. I got the latest on AI liability and insurance concerns. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Quick question. Have you watched Project Hail Mary yet? Humanity is facing an existential threat and racing to solve it with the clock ticking. For security teams, that probably hits close to home with AI use rapidly spreading. Everyone's using AI, marketing, sales, engineering. Chris the intern, without security even knowing about it. That's where Nudge Security comes in. Nudge finds shadow AI apps, integrations, and agents on day one and helps you enforce policy without blocking productivity. Try it free at nudgesecurity.com slash cyberwire. All right, Ben, we've got some good stories to share this week. You want to kick things off for us? Yeah, going to switch things up a little bit and talk about two stories that are recently in the news and two stories for which we have a lot of updates and have some hot takes on both of them. So I thought we'd start with that. So first, we have Section 702 of the FISA Amendments Act. We've talked about that for the last couple of weeks. It was due to expire last week as we're recording this. There's just a big battle in Congress about whether to reauthorize it in the first place. And if so, should there be reforms like a warrant requirement to query the Section 702 database? Right. 
So President Trump is pushing to have FISA extended for 18 months unchanged, the so-called clean extension of the FISA Amendments Act. And Speaker Mike Johnson, his role was really to shepherd this through the House of Representatives, which was a very difficult task. You have a lot of opposition among libertarian-minded Republicans and then, of course, among like-minded civil liberties Democrats. You also have support among a good portion of the Republican conference and some national security-minded Democrats. It did not go well for Mike Johnson last week. So I think they were kind of negotiating amongst themselves in the Republican conference about how to move forward, and FISA was on the agenda for the floor. I kind of kept checking all day to see when the vote was going to happen. It kept being postponed. And then in the middle of the night, there was an announcement that they had agreed on a deal. Oh, in the middle of the night. Always the best time to make important decisions. Some quality legislating. That's right. So instead of extending FISA for 18 months, as the original bill had proposed to do, this would have extended it for five years, which is quite a difference. And then Johnson insisted that there was a warrant requirement in this bill, which kind of shocked me. I was like, wow, they're really going to do this, the warrant requirement. But I had two false assumptions when I said that. The first was that they had actually come to an agreement. Okay. And the second was that it was actually a warrant requirement, because neither of those things was true. You had assumed that Mike Johnson was saying what was true. Yeah. So when I dug into the warrant requirement language, what I honestly think happened is he was anticipating that people would be too lazy to do the research. And if he put language in there that kind of appeared to be a warrant requirement, enough members would be like, fine, whatever, that's a warrant requirement. Check that box.
Let's pass it. That rascal. Because what it really did, it just restated what the current law is. Okay. It says, like, for U.S. persons for surveillance, you need to get a warrant from the FISA court under traditional FISA search, which, yeah, that is definitely the case. And then it had similar provisions about, you know, if it's non-U.S. persons overseas, no warrant is required under Section 702, which, yes, is the current law. So the word warrant appeared. Appeared several times in the warrant language. What I was looking for is a warrant for querying the database. That was not in there. And members of Congress in both parties quickly realized that this was not actually a warrant requirement, and therefore it wasn't in there. So this was not going to go over well. And that led to some fun procedural mumbo-jumbo. And I'll try and be really quick on this because I know it bores people. But basically, to consider a bill in the House of Representatives, first you have to pass a rule governing debate. These are always party-line votes. You know, the minority party is always like, we want amendments, we want three hours of debate, and the majority party is always like, no, we got to get this done quickly, no amendments. And the majority party always wins, obviously. Okay. The problem is when you have enough members of the majority party who are angry about the process, they are going to tank the vote on proceeding to the bill, the vote on the rules governing the bill. I see. And you'd think, well, maybe there are enough Democrats who support FISA reauthorization. They might bail Republicans out. It's kind of an unwritten rule among the minority party in Congress that you do not, under any circumstances, help the majority party pass a rule vote. Oh. So Democrats are going to sit out that vote and make Republicans come to some of their own consensus. So Johnson put that rule on the floor.
It was debated for an hour while I was sleeping, and presumably most of our listeners were sleeping, unless you're in different parts of the world. Hopefully none of our listeners are sleeping right now. Yeah, they might be. They might be. And that rule, the vote on the rule, ended up failing spectacularly. Okay. And so they went back to the drawing board, and instead of a five-year reauthorization with a fake warrant requirement, we got a 10-day reauthorization. So that would be shorter. That is significantly shorter. 10 whole days. As I said on the Cyber Wire podcast, I said 10 whole days. 10 whole days. Yeah. Yeah. 10 days was an interesting choice. I mean, I think he did it because he could do that through unanimous consent. I don't think any members were going to object to just 10 more days of trying to work out a deal on this. And now the Senate, I think, is going to take the lead in coming up with FISA reauthorization. I still believe in my heart of hearts that there's majority support in both the House and Senate to reauthorize Section 702 with minimal changes. There are loud voices demanding warrant requirements, and they make up a significant portion of each party's voting base, but when push comes to shove, if we were to figure out a procedure where it's just an up-or-down vote on a reauthorization bill, I think it would pass. They just have to kind of figure out what that procedure is going to be. And I think it's probably going to be, it goes through the Senate, they allow a bunch of amendments, all the amendments are subject to a 60-vote threshold, they all fail, and then 60 senators come together to advance the reauthorization of FISA with minimal reforms, and it would go back to the House. And the hope among senators would be that the House would be jammed. They'd be like, you accept our FISA reauthorization bill or you get nothing. And that's worked in the past. It doesn't always work. It didn't work with Homeland Security funding, for example.
But I think that is the play. So long story short, FISA Section 702, as we speak, is authorized for a few more days. I know, as we've talked about, the FISA court reauthorized the Section 702 program through early next year. They're kind of these parallel processes where there's the statute. The statute says that the FISA court has to authorize the program, and the FISA court has authorized the program. So what I think many members of Congress are saying is there's no urgency here. Let's get this right. We know that the FISA court has approved Section 702 as it existed through the beginning of next year, so that should abate any safety concerns or national security concerns, and let's come together and try and come up with some sort of compromise. What the response would be to that, and I think this is what Speaker Johnson and other proponents of Section 702 would say, is there's a lot of uncertainty. Companies might think twice about complying with Section 702 directives if they are unsure about the legality of 702 searches. Right. If it lapses, the companies can say, we don't feel like we have the legal cover to provide you with this. Right. And then that hurts both the legitimacy of the program and the efficacy of the program. So I am kind of just observing from afar and seeing where this goes over the next week. It appears that the Senate is going to take this up. They also have to take up Homeland Security funding and additional funding for ICE and Customs and Border Patrol this week. So that's a crowded agenda. I don't know how it's going to turn out, but it certainly was a bit of a debacle on the House side. And, you know, I think the best prospects for reauthorization would be a short reauthorization, 18 months at most, with minimal reforms, if there's not a warrant requirement. Okay. What else you got? So, yeah, there's my update on 702. I wanted to talk a little bit about what's going on with Anthropic.
So, when we left off talking about them, they had been blacklisted by the administration. When we last left our AI company. Exactly. Supply chain risk, blah, blah, blah. Pentagon's angry. They can't even work with any government contractors. Something has happened since then, which is the release of Mythos. I am an avid listener of the Daily Podcast, so I'm going to just go ahead and assume that you've talked about Mythos a little bit already. Yes. Because, of course, I listen every day. Which everybody should, by the way. As do I. Yeah. So Mythos is the new tool developed by Anthropic that's supposed to be a game changer for cybersecurity. Basically, from what Anthropic has advertised, it has powerful and unprecedented abilities to find security weaknesses in computer code, which sounds awesome until you start to think about what happens if this gets into the hands of our adversaries. So the rollout of this has been extremely careful. I think Anthropic has only released it on a limited basis to what they see as responsible parties. Yeah, the usual suspects. Exactly. And you'd think generally that might include the U.S. federal government, but they have this kind of ongoing battle. So there was a meeting at the White House at the end of last week where the leadership from Anthropic met with very powerful, not the president himself, but very powerful members of the administration. So the chief of staff, Susie Wiles, and some other high-level Trump officials. There wasn't any major breakthrough. They are still conflicted over what happened at the Pentagon when they had a contract and Anthropic set some boundaries and the Pentagon wasn't happy about it.
but I think they wanted to have this productive dialogue, and it's a recognition of the power of Mythos, that it could be a major benefit to the federal government if we were to leverage it and use it to help identify vulnerabilities in our systems and to be a resource to the private sector, but also just the risk of now that the tool exists, it's getting out there and can be used in an offensive way against our companies and perhaps our government. So really, it was just a conversation about how the White House and Anthropic can collaborate. But I think from Anthropic's perspective, it's kind of a way back into the good graces of the government, which they think they need. Yeah. Well, as you and I are recording this, in fact, we just reported on today's CyberWire that a report from Axios says that the NSA is indeed using Mythos internally, despite the DOD's labeling it a supply chain risk. Yeah. One of my colleagues used to refer to certain things as irresistible nuisance. Interesting. Right? And I think this Mythos preview model is irresistible. And I think I would imagine the NSA could make the case that it would be irresponsible for them not to be taking a look at this given its potential. So it's interesting how much weight the ban holds when national security issues are at risk. Right, is this ban really a ban if it's being used by our spy agency? Right, yeah. And then like, if we're letting the NSA use it, does it make sense that random contractors with, you know, the Environmental Protection Agency aren't allowed to use any of Anthropic's tools? It just, it's kind of incongruent. But I think it's just circumstances have brought us here because of the potential of Mythos. From what I've read, and you know more about this than me, like, they think it's going to be really good at exposing security flaws in smaller systems and smaller companies. But like the real kind of advanced stuff, we're still a little unsure about.
Is that your general impression of it? Yeah, I mean, some of the things they pointed out, there was a researcher who was one of the first people to kind of raise the red flag about this. I think he found a bug that had been in Linux for three decades by siccing this AI on the code, the source code. So I think the main concern is that these AI models are able to find these vulnerabilities faster than people can patch them or come up with the solutions to fix them. And so that's the concern. Now, in all of these conversations, there are some highly respected researchers who have put out the view that all this hype about Mythos is just marketing. Right. That it's not really that big a deal and everything that Mythos is claiming to do, they can do in previous models if they just ask the right questions and do the right things. So it's hard to say what is actually going on here. I think you and I talked about how high-level people in the administration summoned the heads of several banks about this. Because there's a lot of concern in the financial sector. I know the head of the Federal Reserve, Jay Powell, who is not exactly on good terms with the rest of the administration. I know he's spoken about this. Treasury Secretary Bessent was there in a meeting as well. So I think there's certainly sufficient concern there. So they're taking it seriously. And I think regardless of whether Mythos is everything that they claim it is, the threat is real of revealing bugs faster than we can fix them. And so that's kind of where we are right now. And like you said, if and when that gets in the hands of our adversaries, if it isn't already, that's a problem. That's a problem. Can I posit an insane conspiracy theory that I don't actually believe in? Sure. What if, and that's how I'm going to couch this, but what if Anthropic developed, hyped up, and released this tool as an implicit threat to the federal government so that it could get back on the inside? Hmm. Hmm. Okay.
I have no support for this whatsoever, no inside knowledge. So in other words, I guess the conspiracy theory would be that, again, the marketing push is such that, again, an irresistible nuisance that the government can't say no to this because it's too powerful and this is how Anthropic gets back in. Or they'd be like, oh, this is such a powerful tool. Would hate to see it get into the hands of our adversaries. Yeah, I don't see that happening. I know, I know, I know. I don't think they're, I'm not placing any traitorous behavior on Anthropic. Yeah, I think the more responsible version of this, again, insane conspiracy theory that I don't actually believe in is just like, let's get it out there that this is critically important and that like the government has to get involved in this in some way. And we can use that as an in to rebuild our good graces with the administration. I mean, it could also be that Anthropic was sort of gracefully giving the government the opportunity to re-engage because I don't think it's unreasonable to say that the ban on Anthropic was kind of part of a snit. Yes. You know, and so not great to have this sort of cutting-edge technology that can be applied to national security be turned on or turned off based on a snit. Mm-hmm. So maybe this is Anthropic trying to allow the government to say, oh, well, we can't, I don't know. And the kicker of all of this is that disclosures filed this week showed that Anthropic spent $130,000 in March to hire Brian Ballard, who is a lobbyist with close ties to the president's team. Okay. Not conspiratorial at all. I'm just saying it's probably actually responsible on Anthropic's part. Sure. Just because this matters so much to their business.
And, you know, if you're going to create a tool that's this powerful, I completely think they're earnest about wanting to have some public collaboration on this. I agree. And like, you know, I'll just reiterate that I think taking one of the leaders in AI off the table for the federal government, that's a choice. Yeah. Right? So I don't know. It's so hard to, I guess so many things that we're dealing with these days aren't rational in the way that we would like them to be. And so we have to come up with alternative, increasingly absurd alternative explanations, which... Yeah. Yeah, maybe... Or just ways around workarounds. Exactly. Yeah. I will say, like, in all seriousness, it is good that they're collaborating and that there is an open line of communication because, yeah, I think Mythos, if it lives up to its hype, could be a tool used for good and evil. So I think it's important that we have this type of collaboration. Yeah, we'll see. All right, we will have links to both of those stories in our show notes. I tell you what, let's take a quick break here, and we will be right back after these messages. Maybe that's an urgent message from your CEO, or maybe it's a deepfake trying to target your business. Doppel is the AI native social engineering defense platform fighting back against impersonation and manipulation. As attackers use AI to make their tactics more sophisticated, Doppel uses it to fight back, from automatically dismantling cross-channel attacks to building team resilience and more. Doppel, outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L dot com. And we are back. I have links to a couple of stories here that are kind of related. This first one is a little more wonky, so I'm hoping that this is something that you can provide some insight on. This is from the folks over at Lawfare, and it's Tarasoff Meets the AI Age. And I guess I should ask you, Professor Yellen... Yes, this is my professor voice. That's right.
What do we need to know about the history of Tarasoff? What was it versus the University of California? The Regents of the University of California. Right. So it is a precedential case from the 1970s related to the duty of therapists to warn about potential dangers that they see in their patients. So obviously in the 1970s case, this had nothing to do with artificial intelligence. It was just a duty imposed on mental health professionals under certain conditions that they have some type of fiduciary obligation to notify law enforcement. That's kind of the basis of the doctrine. It's about a duty to warn or protect even at the expense of the other major value in mental health assistance, which is confidentiality. Right. Right. So that's why I think your first therapist appointment, sometimes they'll say to you, this is like a Tarasoff warning, kind of the version of a Miranda warning. Here are the things that if you say those, I am obligated under the law to report to law enforcement. Right, right. So this case, winding forward to the modern age, OpenAI had a user named Jesse Von Roostelar from British Columbia. OpenAI suspended her ChatGPT account back in June of last year, but they did not notify authorities about what they labeled disturbing interactions with ChatGPT. But a few months later, this woman carried out a mass shooting and she killed nine people, including herself. So the question here is, to what degree is an AI company liable, or responsible, or obligated to warn law enforcement when somebody is interacting with their large language model in a disturbing way? Yeah, so this actually gets quite complicated. I mean, for one, Tarasoff is not a universally well-regarded decision. Oh, really? It hasn't been equally applied across jurisdictions historically. And then there are just some other obstacles that this Lawfare article brings up that I think are absolutely critical.
So for one, it's really hard in the AI context to determine what constitutes a legally actionable or credible threat. It's just much more nebulous. It's hard to know what are the specific signals that indicate to the AI that something is problematic or worthy of that duty to warn. Because sometimes signals, as they say in this article, can be automated, ambiguous, probabilistic. It's not like the one-to-one direct conversation that somebody has with a mental health professional. The other is so much of Tarasoff is about professional judgment. So mental health professionals are uniquely situated to talk to law enforcement because they've had patients, they've been through school, they know how to recognize potentially dangerous behavior. But AI firms, developers, or deployers for that matter, they are not fiduciaries. They don't have any professional relationship with the user. And then how do you even ascribe accountability? So is it the developer of the model? Is it the company that's deploying it? Is it human reviewers? Is it the trust and safety team? It's much easier in the normal Tarasoff context to figure out who is responsible. That becomes much more complicated here. Then something that kind of shares a similarity with traditional Tarasoff cases is if there is that duty to warn, could that incentivize broader monitoring or surveillance, where we become so obsessed with trying to suss out suspicious behavior that we trample on some of the other values that we want to have in our AI interactions, like confidentiality, like privacy? And will there be a chilling effect where people are unlikely to communicate with a chatbot if they think that what they're saying is going to be reported to law enforcement? Now, again, like there are competing values there. Do we even want somebody to be thinking like that as if this were having a chilling effect on a real conversation and not with a chatbot? I don't know. I think that's a value that we still struggle with.
And then standard of care. So there's really no guidance in positive law or statutory law about what the standard of care should be. Should it be negligence, which would be really difficult to prove? Like you'd have to go through discovery and look through the development process of the AI tool and figure out when the developer was negligent and exactly what that negligence was. Or is it something like strict liability where because you're putting a product onto the market, there's an expectation that if that product is defective in some way and leads to really bad outcomes, then that developer is going to be responsible. So these are all unanswered questions. I think we're going to hopefully get some resolution either through legislators, probably state legislatures, or through our court system to figure out how does Tarasoff apply in this context, if at all. So I have some questions. Yeah, yeah, yeah. So Tarasoff, does Tarasoff only apply to counselors, to mental health professionals or medical professionals? Yes, it only applies to licensed professionals. Okay. So it's a duty that basically comes with your license because it's about your own professional trained opinion on whether what the person is telling you is dangerous. So if I were having a conversation with my plumber... He has no obligation to report. Right. But I was thinking like even like a life coach, probably not. I don't think so. I mean, because it's about the professional experience, which can be proved, which the evidence of that can be in the license, right? Right. So I think an unlicensed life coach is unlikely to have that obligation under Tarasoff. So going to the AIs then, what I wonder is a general purpose AI, the ChatGPTs of the world, is one thing, general purpose. But what if I have an AI therapist? If I'm selling an AI therapist, you know, like the Better Helps of the world, because those are out there.
Is it more likely that they would be obligated, or more likely that someone could have a case against them that would stick, because they're claiming to provide mental health care? I think it's possible. I still think we're a long way from that because of the nature of Tarasoff itself. So basically there are four factors that are considered. It's licensing, which we've talked about. That's not going to apply until we start licensing. Who the hell knows? I mean, maybe we do start licensing these therapist-specific chatbots. Specialized clinical expertise. I think that's a little bit more under question. We can get back to that. A therapist-patient relationship, and then foreseeability grounded in professional judgment. I think with those last three, you could make a strained argument that that would apply to an AI chatbot, right? They develop expertise because they've been trained on presumably millions of interactions. A therapist really could have a certain type of relationship with that patient. And there would be a degree of foreseeability, especially like if we're talking about an LLM, they run on foreseeability, right? I mean, they're predictive models. So I think you could make an argument. Where this falls short, I think, is I don't think our legal system is ready to recognize a chatbot as having actual clinical expertise. I think it can fake having expertise, but it's trained on material written by people who do have specialized expertise. And then obviously the licensing thing is going to be a huge burden until we get to a point where there are licensed chatbot therapists, which maybe is closer than it appears at the moment. So yeah, so the chatbots are therapy enthusiasts, not professionals, right? Right. And I think, you know, they should be considered as such. And I think states could take the lead in some type of duty to warn, to both have static and dynamic warnings. Right. This therapy is for entertainment purposes only. Right.
It's like, this is not a licensed therapist. You know, it could be a version of the disclaimer we give on our show. Like, if you want to talk to an actual therapist, call an actual therapist. Right. I think there are ways to remind people that this is an artificial intelligence tool. It has limited use. It should not stand in place of an actual human being licensed therapist, even if it's undergone things like human review. Because we know that AI can go off in all different types of unexpected directions. If there's some slight flaw in the algorithm that hasn't yet been discovered, things could really go awry. Yeah, well, and they have. Yeah. Yeah. All right. Well, just related to this, I just have a link to another story that I think aligns here. And this is from the folks over at CSO Online. And they were chronicling how a lot of insurance companies are starting to exempt AI workloads from errors and omissions coverage, saying that the outputs are too unpredictable to write policies around. So your AI, the stuff that you're using your AIs for may not be insured. And so that's important. Yeah, I mean, from an actuarial perspective, it makes sense. Like, if you don't trust the AI system, you don't want to be liable for risks arising from that system. So if I'm an insurance adjuster and I see that somebody's house is in a floodplain and that their flood management plan is to hope and pray that a hurricane never comes. Right, in a bucket. Yeah, exactly. Yeah, here's a tiny bucket. Let's hope this works. Right. That is obviously going to affect the type of coverage that I'm willing to supply. And I think the same thing is true here. If AI outputs are unreliable, then I think the insurance policies have to reflect that in one way or another. So there are a couple of ways that companies can do this. One is just raising premiums generally as a recognition that companies are using these tools and they pose a greater risk. 
And with greater risk comes a greater potential cost to the company. So they've got to raise premiums. Or adding explicit exclusions for AI-related stuff. Yeah. So disruptions, data breaches, bad decisions produced by agents. This is a very new development. Like, I don't think we know exactly how this is going to play out, but it's certainly very interesting. Yeah. I'm telling you, Ben, federal backstop. It's going to be a federal backstop. I know. I know. I mean, this is the type of thing where we really could use a federal policy. Yeah. One of your hobby horses for a long time has been how the flood insurance model should be a model for cybersecurity: how, when we find something that's completely uninsurable, the government has a duty to get involved and insure it in its own way. This is that. This is that. So your hobby horse is still very relevant. Yeah, I mean, look, I get it, because here's where we find ourselves, right? This is the cutting edge. Some argue the economy is being kept afloat by the spending on these things. So from the federal government's point of view, they don't want to poo-poo this. They don't want to do anything that's going to slow it down. And if you can't get insurance for using AI, and at the same time you have to use AI to be competitive, where do we find ourselves? You know who uses AI, by the way? Insurance companies. And how can they not? I mean, talk about a force multiplier. Like, I'm not a professional, but I think the insurance industry would be a great use case for artificial intelligence. The assessment of risk, obviously with a level of human review, seems like something AI would be very useful for. You have this house or this piece of tangible property. Here's where it's located. Here's all the risk assessments we've done in the past. Train on those and have the AI spit out a number. 
Yeah, well, I hear they're already doing surveys where they use satellite photography to examine your roof to decide whether or not they want to renew your homeowner's policy or insist that you get a new roof before they re-up you. That's why I'm putting a tarp on my roof. I don't want anybody looking at the quality of my roof. Right. It'd be a camouflage tarp. It'll be like one of those government places that gets blurred on Google Maps. Yeah, exactly. And then you start to get even more suspicious. It's like, why is that building blurred? Right, right. Yeah. But yeah, I mean, it's a risk that insurance companies have to reckon with like any other risk. The thing is, I think a lot of firms are unaware of this development. So they might not know that they are exposing themselves to either increased premiums or coverage gaps by using AI tools. And so this is a new development that's going to need to be publicized, and it's something that I think companies need to be paying attention to. Yeah, and I think a lot of folks could end up finding out the hard way. Absolutely. Yeah, something bad happens, you assume you're covered, and then it turns out that when you were renewing your policy, you just signed the DocuSign and didn't read it carefully, and there was a tiny little new line in there about AI tools. Yeah. I think that's going to be a rude awakening. Tom Waits said, the large print giveth and the small print taketh away. Now, can you say that in Tom Waits' voice? The large print giveth and the small print taketh away. That was pretty good. Thank you very much. That was pretty good. It was half Kermit, half someone who came in with a really bad cold. But I dig it. I dig it. I've seen people dub Tom Waits' music over footage of Cookie Monster, and it just matches up perfectly. Oh, that's brilliant. Yes. Go look for it on YouTube. That's incredible. Yeah, it works. All right. We will have links to both of those stories in our show notes. 
And again, we would love to hear from you. If there's something you'd like us to consider for the show, please do email us. It's caveat at n2k.com. And that is our show brought to you by N2K CyberWire. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to caveat at n2k.com. This episode is produced by Liz Stokes. Our executive producer is Jennifer Iben. The show is mixed by Trey Hester. Peter Kilpie is our publisher. I'm Dave Bittner. And I'm Ben Yellen. Thanks for listening. Thank you.