Cybersecurity Headlines

The Department of Know: Vercel breach, a "Contagious Interview," and ghost breaches

40 min
Apr 24, 2026
Summary

This episode of Cybersecurity Headlines examines critical security incidents including the Vercel breach via compromised AI tools, self-propagating supply chain attacks targeting developers, and the emerging threat of AI-generated "ghost breaches." Hosts discuss how OAuth token sprawl, identity management failures, and supply chain vulnerabilities are creating exponential attack surfaces that traditional security controls cannot adequately address.

Insights
  • OAuth token sprawl has become the new lateral movement vector, with AI tool adoption outpacing governance capabilities and default "allow all" permissions creating systemic risk across organizations
  • Supply chain attacks are evolving from surgical precision strikes to wormable, self-propagating campaigns that scale exponentially through developer workflows and open source repositories
  • AI-generated ghost breaches represent a new attack surface combining reputational damage, regulatory scrutiny, and financial manipulation through prediction markets without requiring technical exploitation
  • Recovery and resilience are critically under-invested compared to prevention, with organizations unable to distinguish between recovered and restored systems until post-incident
  • Full-stack visibility from application to OS-level artifacts is essential, as strong encryption can be undermined by endpoint data persistence and unintended system artifacts
Trends
  • Shift from linear to exponential attack models across all threat categories (supply chain, OAuth, ransomware)
  • AI tool adoption creating governance gaps faster than security teams can implement controls
  • Open source ecosystem under stress from multiple simultaneous attack vectors targeting maintainers, contributors, and build pipelines
  • Insurance market signaling AI risk remains poorly understood and unpriced, creating coverage gaps for emerging threats
  • Identity and access management becoming the critical control point across cloud, AI, and supply chain attack surfaces
  • Ransomware long-tail impacts (operational degradation, manual workarounds, delayed recovery) causing greater damage than initial breach
  • Prediction markets and financial incentives creating new monetization paths for threat actors beyond traditional extortion
  • Ghost breaches and AI-generated false incidents requiring incident response playbook updates and faster validation protocols
  • Third-party and fourth-party risk cascading through vendor ecosystems faster than visibility and governance tools can track
  • Endpoint artifacts and notification system persistence undermining application-level encryption and data protection assumptions
Companies
Vercel
Web infrastructure provider that disclosed breach via compromised Contacts.ai browser extension and OAuth token abuse
Anthropic
AI company whose Claude/Mythos model was accessed by unauthorized Discord group through third-party contractor credentials
Contacts.ai
Third-party AI tool compromised by credential-stealing malware, used to pivot into Vercel's systems via OAuth access
Apple
Released iOS security update to fix notification system flaw that stored message previews after deletion, exploited by the FBI
NHS (National Health Service)
London hospitals still recovering from June 2024 Quillen ransomware attack with degraded systems and delayed patient treatment
QBE Insurance
Insurer moving to cap payouts for AI-related cyber incidents with sublimits on LLM jacking and emerging AI threats
Beazley
Insurance carrier implementing sublimits on AI-related cyber incident coverage alongside other carriers
Google Workspace
Platform where OAuth sprawl and app permission management creates governance challenges for enterprise security teams
Signal
End-to-end encrypted messaging app mentioned as having solid encryption but vulnerable to endpoint notification artifacts
Next.js
Open source project by Vercel that was unaffected by the breach despite customer environment variable exposure
Trend Micro
Security research firm that identified self-propagating supply chain attack using fake job offers and malicious VS Code tasks
Mandiant
Incident response firm assisting Vercel with investigation of breach and stolen customer data
New York State Gaming Commission
Organization where guest Mike Bickford served as former CISO
American Century Investments
Investment firm where guest Brett Conlin serves as CISO
People
Rich Trafalino
Host of Department of Know segment discussing cybersecurity headlines and industry trends
Mike Bickford
Guest discussing IAM programs, supply chain risk, ransomware recovery, and endpoint visibility strategies
Brett Conlin
Guest discussing OAuth sprawl governance, identity control, supply chain attacks, and incident response planning
David Spark
Executive producer and contributor to chat discussions on ghost breaches and prediction market risks
Josh
Producer managing show segments and technical operations
Quotes
"OAuth tokens are the new lateral movement. The breach didn't have an exploit. There was not a zero day to it. It wasn't phishing."
Brett Conlin, Vercel breach discussion
"You can't govern what you don't see and what you don't have visibility to. So that has to be a principle in governing and understanding."
Mike Bickford, OAuth sprawl governance
"Supply chain attacks are evolving from a surgical strike on a particular target to really becoming scalable weapons. That's a shift from precision to propagation."
Mike Bickford, contagious interview scams discussion
"Recovered and restored are different words. And most organizations can't tell the difference until they've lived through it."
Brett Conlin, NHS ransomware recovery
"Everything is growing exponentially. We are no longer looking at linear attacks."
Brett Conlin, supply chain attack analysis
Full Transcript
This is Rich Trafalino with the Department of Know. Mike Bickford, former CISO over at the New York State Gaming Commission. I got to ask, one, thank you for being back on the show so soon. But two, what is your priority this week? Planting the garden this weekend, but also getting immersed in some new activities at work. Ooh, maybe some seeds of knowledge, perhaps, will be blooming as well. And Brett Conlin, CISO over at American Century Investments. What is your priority this week? I think on the personal front, we've got some VIPs coming to our house. So we're getting ready for that. And then on the work front, I would say that IAM program, we are looking at that closely. So we're looking at, you know, what we have to do to fill that gap. Identities are now logging in thousands of times an hour across vendors we've probably never heard of. So that's where our focus is going this week and this quarter. Fantastic. I love to hear where people's mindsets are when it comes to everything. You know, we're getting hit with so many new things every single day, it seems like, so kind of getting a handle on that, I absolutely love to hear it. Hey, and anybody watching live, let us know in the chat: what is your priority this week? I'd love to see where your mindset is at. All right, producer Josh, let's run that opening, get into the show. From the CISO series, it's Department of Know. Welcome to the Department of Know, your virtual Friday strategy meeting. We help you close out the week, have some fun, and figure out how we can integrate all this news, all this crazy cybersecurity news, into your week, into your departments, figure out what makes sense here. A huge thanks to our sponsor for today, ThreatLocker, for helping make the show possible. We'll hear more about them later in the show. Remember, if you want to get involved, first up, best way to do it is live in our YouTube chat. We broadcast every Friday at 4 p.m.
Eastern if you're not joining us right now, so make sure you join us one week or every week, preferably, or you can email us feedback at CISOseries.com. We'll be getting more messages there. We do read each one of those. We do appreciate it, so thank you so much. Before we jump into the news, just a quick disclaimer that all the opinions of our guests are, in fact, their own, not necessarily those of any employer. And so do with that what you will. We've got about 30 minutes, though. Let's get into the news. Starting off with our Know or No segment. This is where there's so much news out there, we've already alluded to this, we need to know: is this something we need to be bringing to our security teams, bringing up at work, figuring out what this means for our organization? Or is it an interesting headline, but we don't need to go any further? First up here, one that raised some eyebrows for me: insurers move to cap payouts around AI. The Financial Times reports insurers, including QBE Insurance and Beazley, are moving to cap payouts for AI-related cyber incidents, introducing sublimits that significantly restrict coverage for risks like LLM jacking, where attackers exploit enterprise AI systems to avoid usage fees. Reminds me of early AWS auth jacking, that kind of stuff. Brokers and legal experts warn that changes could narrow protection across a broader range of emerging AI threats, even as insurers argue that they are clarifying coverage rather than reducing it. All right, Brett, I'm going to start with you. Insurers capping AI-related losses. Do you want to know more about this, or is this just insurers being insurers? Insurers being insurers. Every time a new risk category emerges, whether it's cloud, IoT, ransomware, same cycle, right? Initial broad coverage, claims start coming in, carriers start the sublimits, market adjusts. And there's nothing here that's AI-specific. It's all in draft form.
If your team is threat modeling on your cyber policy paying full price, you've got bigger problems than that. Mike, what about you? Is this something you want to know more about, or are you kind of on the same wavelength as Brett here? I definitely want to know about it. I want to inform the rest of the leadership as well, because I take a different stance on this. It's not just the insurers being insurers. The market is signaling here that AI risk is still poorly understood and hard to price. It really is. You know, what we're seeing is just your traditional maturity gap. AI-driven risk, especially things like LLM abuse or LLM jacking, doesn't yet have the decades of actuarial data that goes behind car crashes or housing insurance or natural disasters. So the insurers are protecting themselves with sublimits, just like Brett alluded to. But as a CISO, I want to know more. Not because of the policy payouts and mechanics and all that stuff, you know, but because it tells me where coverage gaps still exist. Yeah. And that's one of those things. I mean, we've talked about this with ransomware, right, for years now, where it still felt so nascent. So from an insurance perspective, Mike, completely to your point, without those decades of experience, we're seeing the scale of attacks rapidly changing what we thought was rapid. It turns out now, when we talk about rapid with AI, it literally feels like hour to hour sometimes, right, with the rate of change that we're seeing here. So, yeah, this is one of those things where I'm like, it feels more incumbent to have that ongoing discussion of what the expectation is, or what their understanding of this is, for your organization. But Brett, I think your point in terms of, you know, kind of depending on that, absolutely spot on.
That is what I would bring to my security team, right? That kind of realization, I think, is the important one here. All right, next up here, unauthorized Mythos access. Bloomberg reports a small group of unauthorized users claimed in a private Discord they were able to access Anthropic's Mythos model. One member of the group works for a third-party contractor for Anthropic. The group combined intelligence from a previous supplier breach to guess Mythos' online location, and then the contractor was able to use their access credentials to actually get in and test it. Anthropic investigated the report and said there's no evidence that access went beyond a third-party vendor's environment. They were trying to keep it on the down low, not doing anything like, I don't know, scanning Google for vulnerabilities. But, Mike, I got to ask you, if a random Discord group got access, odds are the genie's already out of the bottle somewhere else. Do you want to know more about this, or are you good with this story? What are your thoughts? Definitely want to know more. If a Discord group can even claim access, the real issue isn't the access. It's control over the supply chain, right? I want to know more about how that is happening, with the convergence of identity, vendor risk, and telemetry gaps that we have. Even if the access didn't extend beyond the vendor environment, that pattern obviously still exists and still matters. So this is identity control and third-party risk and weak boundaries. That's what we need to control. Yeah. And Kevin Farrell in our chat asking, is this a permissions misconfiguration? I mean, if it's not, congratulations to whoever is now doing the audit on all contractor access for Anthropic here. But Brett, I'm curious, how did the story strike you? Do you want to bring this to your team, dig in more on this? I think I'm bringing it to my team for both reasons, actually.
So first, I want them to know it's not for the reason the headline suggests. So the headline was there was a hack on Mythos, but it really wasn't a hack, right? This is OSINT. This is vendor sprawl. This is what we talk about all the time. A compromised third party now got access. And they guessed, from that breach, where the model was hosted. The other thing I'm telling that team is, okay, they got non-production access. It was from a vendor. We're not part of Project Glasswing. There's no evidence that it went beyond the vendor. They're saying it's not being used for attacks. Nothing was exfiltrated. Nothing was weaponized. I want our focus on what the Mythos environment is going to be bringing down the road, not necessarily the actual story itself, other than the ability to distinguish the true story, which is: this is identity compromise again. And what are we doing about that? And now you're at a fourth party. It's just parties all the way down, the least fun variety of parties all the way down. Next up here, London hospitals continue to suffer from 2024 ransomware attack. This ransomware attack occurred in June 2024 by the Quillen Ransomware Group, and it continues to reverberate in the system. Internal documents show at least one NHS trust is still working without fully restored systems and managing a large backlog of delayed test results, restricted blood supplies, and theft and publication of sensitive patient data, as well as delaying treatments for highly time-sensitive conditions, including cancer. Critical results are being communicated by phone. Full reports are being delivered on paper or PDFs and manually uploaded into patient records. Brett, a recent study by King's College London described ransomware as the most significant current cyber threat to the NHS. I'm curious, do you want to know more about this particular incident, kind of the organizational flow of this?
Or is the long tail of ransomware unfortunate, but kind of established fact at this point? I definitely want the team to know about it. And I'd push back on the framing, right? So the long tail of ransomware being established, that's typically why we stop paying attention. But established doesn't necessarily mean that we've looked at the whole story there. So to me, recovered and restored are different words. And most organizations can't tell the difference until they've lived through it. So if you're looking at your risk register and your third-party risk register, it's probably going to list some of those areas, and they're all going to have vendors in a different row, versus what happens when one of them is this vendor, and what happens at month 18 when something like that is going on. So I think it's good for them to have context around it. And again, the recovered versus restored distinction is extremely important for companies to take note of. Mike, what about for you? How did this story strike you? Are you bringing it to the team? Yes, absolutely. You know, the breach here is the event, right? The real risk happens with the operational trail, or tail, that follows this. This is absolutely something leaders should be paying attention to. The long tail of ransomware is where the real damage is occurring. Delayed care, manual workarounds, degraded trust. In health care especially, those cyber incidents become patient safety matters and issues. This reinforces why resilience, not just prevention, has to be a top investment priority. And one thing I'm thinking in this space, Brett, I don't know if you are, but do you think that we're under-investing in recovery compared to prevention? Absolutely. I think that resilience and recovery are something that we've talked a lot about, but this really is an example where we're seeing that people aren't really testing it.
I think that we're seeing them look at recovery and resilience as the same thing. And it's not. The ability to withstand the attack, come up in a different area, and get back to running quicker than your recovery allows, I think, is really what we're going to see the areas shift to. So again, I think this is a great story to keep in the headlines for your team. And I would just say, just a quick plug, this Thursday coming up on Defense in Depth, we're actually digging into all of this. What do we actually mean by ransomware recovery? How are you actually testing it? What does that actually mean, and what does it look like for your organization? So be on the lookout for that in your podcatcher of choice for Defense in Depth. Shameless plug. And last story here for Know or No: Apple fixes iOS flaw exploited by the FBI. Apple has released an urgent iOS update to fix a security flaw that was reportedly used by the FBI to recover deleted messages. The issue wasn't in apps like Signal itself. The end-to-end encryption there is rock solid. It was in the iPhone's notification system, which stored message previews even after messages were deleted or the app itself was removed. Investigators were able to access those remnants through the device's internal database. Of course, they had to get access to the phone, but that's a separate issue here. But, you know, Mike, a secure protocol let down by problems at an endpoint, that sounds deeply familiar for a lot of cybersecurity scenarios here. Do you want to know more about this kind of overall story? Obviously, Apple patched this one particular instance, but does that end it for you? Or is there something of interest beyond that? It is something to know, something to keep in the forefront, because until Q-Day happens, encryption isn't going to fail, but the endpoints do. So that, to me, is a classic example of strong cryptography being undermined by endpoint artifacts.
You know, I'd want to know more because it reinforces a core lesson: secure systems can still leak data through unintended persistence layers. From a security leadership perspective, this drives the need for full-stack visibility. Not just app security, but also the OS-level behavior and the forensic residue it's leaving there. So, you know, the more you know, right? It's the telemetry around what's happening. But like I said, until Q-Day happens, we can trust the encryption is holding, but we need to keep it in the forefront. Right. It's got to get to an end of a pipe somewhere. And therein lies the rub. Brett, what about for you? How did this story strike you? It's completely normal. And that's the problem that we're dealing with, right? So I think, to me, there's two moves I'm making and talking to the team about. We want to make the OAuth sprawl visible. And I don't think a lot of orgs can list every app their employees have granted access to. Google Workspace admins can audit it, but you can't govern what you can't see. And then the second thing is you have to stop with the allow all as a default. And this goes to security teams, development teams, and vendors. How many times are you bringing in a tool? And if you think about it, that model is broken in practice. We're not reading the consent screens. The fix is always give this tool admin-level access so that it can do what it needs. It needs an admin account to do all of these things. And if you actually talk to the engineers and talk to the vendor and talk to your developers, you'll find out that's not what they need. So I would require admin approval for any app that's requesting this kind of access, right? Drive, Gmail, workspace-wide access. And Google, I think, supports that. And most orgs, from what I've read up on, haven't even turned that on.
So I think we have to get better at how we handle these things. So kill the allow all and make the OAuth sprawl visible. And if you do that, I think you're going to get ahead of the problem. All right. Before we move on to our bigger discussions of the week, we have to spend a few moments today and thank our sponsor who helps make this all happen. And that today is ThreatLocker. ThreatLocker is extending zero trust beyond endpoint control. With their recent release of Zero Trust Network Access and Zero Trust Cloud Access, access isn't based on credentials alone. It requires the right user, the right device, and the right conditions. Because as we've seen in recent large-scale CRM breaches, stolen credentials and misconfigurations can expose massive amounts of data. With ThreatLocker, nothing is exposed, and access is limited to exactly what's needed. Learn more and start your free trial today at ThreatLocker.com. All right, let's dig in to one of the bigger stories. I saw this. It was all over LinkedIn. And so I was like, I should probably see what's going on here. Vercel confirms breach and stolen data is for sale. Web infrastructure provider Vercel, I'm getting that right, right? This is like an hors d'oeuvre situation for me. So I apologize if it's Verkel or something. Has disclosed a breach tracing back to a compromised third-party AI tool called Contacts.ai. An employee installed its browser extension, signed in with their enterprise Google account, and attackers who had already infiltrated Contacts.ai through credential-stealing malware used the OAuth access to pivot into Vercel's internal systems and access some customer environment variables. Core services and open source projects like Next.js appear unaffected, and Vercel is working with Mandiant on the investigation. A threat actor claiming to be ShinyHunters says they're selling the stolen data for two months.
And then right after the story came out, Vercel also said they had a completely separate data breach that exposed customer data, but didn't say if it was related. That was just the most recent update. Vercel's breach started with one employee installing a browser extension and clicking allow all on an OAuth grant. That feels like it's within the normal milieu of enterprise behavior. I'm curious, Brett, since we were just talking about this allow-all behavior, how can we govern OAuth token sprawl across an org when the pace of AI tool adoption isn't exactly slowing down? Yeah, so I think it comes back to, right, you have to kill the allow all. You have to make the OAuth sprawl visible. I think what we're seeing is OAuth tokens are the new lateral movement. The breach didn't have an exploit. There was not a zero day to it. It wasn't phishing. So you're looking at a developer's personal session, and the environment variables are what allowed the breach to occur. So we have to get better at this. I don't have the single answer to it, but I think what I said earlier still stands. Make the OAuth sprawl visible and kill that allow all. Mike, what about for you? I guess, are you in agreement? And then how do we, as an organization, in a relationship with the vendor, how are we starting to move toward removing that allow all? Well, it starts with visibility. I do agree with what Brett said. OAuth is the new lateral movement layer. And the fastest way to reduce that lateral movement risk is visibility. You can't govern what you don't see and what you don't have visibility to. So that has to be a principle in governing and understanding and having process around users, and even admins, implementing those sorts of permissions with these AI tools on your browsers and other places.
Once these AI tools start exploding, and they are, the users by default are just clicking accept or clicking allow all without understanding what it's gaining access to or what they're accepting. And that's a significant risk. It starts with visibility, like I said, but you have to put some process around it, start removing some of the admin access, and lean on the endpoint protections that are out there. Yeah. And it feels like the gravity of that is so hard to resist, right? Because all of these tools benefit from the flatter you can make all of your data for these tools to access. First of all, they're all begging for it. And Brett, to your point, they're all asking you to click on it. It autofills allow all on Claude, I can tell you that right now. And, I mean, talking back to iOS security, at least there, when it's something like location sharing, it gives you a little granularity of allow only when you're using the app or something like that. So yeah, it feels like you're going against so much. I don't want to call it a dark pattern, but where these services want you to go, right, is right to where they can do the most damage the fastest. Yeah. So think about it, right? Shadow AI is already worse than the shadow SaaS that we saw, right? And the tools are asking for broader scopes. They want to read your whole calendar. They want to read all of your email. They want to look in all of your Drive. And I feel like we're at consent fatigue, and the problem got 10 times worse over the past 18 months. And governance can't be block everything. So now we've got to go to inventory weekly, approve explicitly, and revoke aggressively. And, you know, security's playing catch-up. I think the business is playing catch-up. It's something that we're going to have to get ahead of. All right.
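Brett's "kill the allow all" guidance can be made concrete with a small triage step before any OAuth grant is approved. The sketch below is illustrative, not from the episode: it classifies requested scopes by blast radius so broad grants like full Drive or Gmail access get routed to admin review. The scope URLs are real Google OAuth scope strings, but the risk tiers, the policy threshold, and the app name are assumptions.

```python
# Hedged sketch: triage requested OAuth scopes before a grant is approved.
# The scope strings are real Google OAuth scope URLs; the risk tiers and the
# require_admin_review policy are illustrative assumptions, not a recommendation
# from the episode.

# Broad scopes that expose workspace-wide data if a token is stolen.
HIGH_RISK_SCOPES = {
    "https://www.googleapis.com/auth/drive",                  # full Drive read/write
    "https://mail.google.com/",                               # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",   # directory admin
}

# Narrow, per-file or read-only scopes that limit blast radius.
LOW_RISK_SCOPES = {
    "https://www.googleapis.com/auth/drive.file",      # only files the app created
    "https://www.googleapis.com/auth/userinfo.email",  # just the email address
}

def triage_grant(app_name: str, requested_scopes: list[str]) -> dict:
    """Classify an OAuth consent request and decide whether it needs admin review."""
    high = sorted(s for s in requested_scopes if s in HIGH_RISK_SCOPES)
    unknown = sorted(
        s for s in requested_scopes
        if s not in HIGH_RISK_SCOPES and s not in LOW_RISK_SCOPES
    )
    return {
        "app": app_name,
        "high_risk": high,
        "unknown": unknown,
        # Policy assumption: anything broad or unrecognized gets a human.
        "require_admin_review": bool(high or unknown),
    }

verdict = triage_grant(
    "contacts-helper-extension",  # hypothetical app name
    ["https://www.googleapis.com/auth/drive",
     "https://www.googleapis.com/auth/userinfo.email"],
)
print(verdict["require_admin_review"])  # True: the full-Drive scope trips review
```

The design choice mirrors what Brett describes: unknown scopes fail closed rather than open, which is the opposite of the "allow all" default.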
Next up here, contagious interview scams self-propagate. According to research from Trend Micro, North Korean threat actors are evolving the contagious interview scams into a self-propagating supply chain attack, using fake job offers to trick developers into running compromised code that spreads malware through repositories. The campaign is attributed to the group Void.KB and uses malicious VS Code tasks and hidden repository files to deploy RATs, steal credentials, and affect downstream projects when code is shared. This can rapidly cascade across open source and enterprise environments, and more than 750 infected repositories have now been identified. Mike, you know, self-propagating supply chain attack is new to the palate for me. Doesn't taste great, I'm not going to lie. The initial approach here seems tried and true. I mean, we've seen these job scams. These are nothing new, but there's a lot more sophistication now on the back end. Previously, some of these supply chain attacks have seemed pretty targeted. Threat actors want access to one maintainer, one project where they can do a lot of damage. But are we ready for something that has more of this wormable potential, that's going for as much breadth as depth at this point? Does this change the game at all for you for these kinds of attacks? Well, I think, yes, it does. What we're watching here is supply chain attacks evolving from a surgical strike on a particular target or a company to really becoming scalable weapons. And that's a shift from precision to propagation.
And at that scale, you know, historically these supply chain attacks are highly targeted and very sophisticated. Now that they're able to scale that out, you're seeing those worm-like characteristics where compromise spreads through developer workflows and repositories and vendors that you don't even think are going to be impacted. But you're now having to look at not only your controls, but all of the vendors, all of the developers, the whole supply chain, ensuring those controls are in place. The scanning, the environmental controls, the behavioral monitoring of build pipelines, all that has to be taken into account, all the way through to rollout. But, you know, can you say that your developers are the weakest link, or is it your tooling? And I think you need a combination of both. I mean, Brett, from your perspective, are we ready for that kind of build-out, or those kinds of assumptions, to be ready for this kind of stuff? Absolutely not. No. Thankfully, right, the proof of concept just shipped. So we don't really get a choice, and we're going to have to go get ready for this. But if you look at it, a traditional supply chain attack is one-to-many: you compromise one upstream, affect everyone downstream. Now we're talking about many-to-many. Even the infected developer infects their own repos, which infects the contributors, which infects their repos. The curve is exponential. Go figure. Got to love that. It's not linear. And it's going to ride on trusted workflows: git clones, opens in VS Code. The security tools that we have today are not going to flag that as suspicious. And so if I'm looking at this now, the teams are going to have to start looking at what they have to add to every repo. They're going to have to stop shipping the workspace configs the way they exist today. You're going to have to block workspace trust auto-execution things.
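The workspace-trust control Brett is describing can be checked mechanically. As an illustrative sketch (not from the Trend Micro research), the scanner below flags repositories whose `.vscode/tasks.json` declares a task with `"runOn": "folderOpen"`, the real VS Code setting that lets a task execute as soon as a trusted folder is opened. The repo layout and task contents are fabricated test fixtures.

```python
# Hedged sketch: flag repos whose .vscode/tasks.json auto-runs a task on folder
# open, the VS Code behavior abused in the kind of campaign described above.
# Repo names and task contents here are fabricated fixtures, not real samples.
import json
import tempfile
from pathlib import Path

def find_autorun_tasks(repo_root: Path) -> list[str]:
    """Return labels (or commands) of tasks configured to run on folder open."""
    tasks_file = repo_root / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return []
    try:
        config = json.loads(tasks_file.read_text())
    except json.JSONDecodeError:
        return []  # malformed config: skip here; a real policy might flag it instead
    suspicious = []
    for task in config.get("tasks", []):
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            suspicious.append(task.get("label", task.get("command", "<unnamed>")))
    return suspicious

# Build a throwaway repo containing an auto-running task, then scan it.
with tempfile.TemporaryDirectory() as tmp:
    repo = Path(tmp) / "cloned-repo"
    (repo / ".vscode").mkdir(parents=True)
    (repo / ".vscode" / "tasks.json").write_text(json.dumps({
        "version": "2.0.0",
        "tasks": [{
            "label": "install-deps",       # innocuous-looking label
            "type": "shell",
            "command": "node ./setup.js",  # would run on open in a trusted folder
            "runOptions": {"runOn": "folderOpen"},
        }],
    }))
    flagged = find_autorun_tasks(repo)
    print(flagged)  # ['install-deps']
```

A check like this slots naturally into the per-repo gates Brett mentions, alongside blocking workspace-trust auto-execution and enforcing signed commits.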
And then you're going to have to enforce signed commits. That's off the top of my head. I mean, there's probably more you're going to have to do. But this absolutely changes the game on what we're doing. And it just seems to be the theme of 2026. We are no longer looking at linear attacks. Everything is growing exponentially. Yeah. And, I mean, kind of the sub-theme, I guess, for that, because every story is touching on it: the thing that was the limiting factor, that at least they can't do it to everyone all at once, is no longer a thing. But to me, the thing I am looking at the most this entire year is how open source, as a tool that businesses rely on, as a community effort, handles being stressed on seemingly every one of its assumptions, from maintainers to the ability, you know, for the crowd to look for vulnerabilities. Every assumption that we are seeing with open source is kind of being challenged all at once, as part of, you know, Brett, what you were just talking about, with everything suddenly turning into many-to-many situations with attacks like this. And that is something I would definitely want to keep on the Department of Know, keep our fingers on the pulse of, because we've talked about this on other shows. You know, open source is open source. It's not going away, but it feels like it will also never be the same after this. It's transforming. It has to, to continue to exist, I think. So definitely something to keep an eye on. The other thing to keep an eye on is our last discussion story of the day. And it's a new one to me. Again, new things to the palate on the cybersecurity show, not always the best thing, but: AI-generated ghost breaches. Now, I say AI-generated ghost breaches. You may say, what the heck is an AI-generated ghost breach, other than a pathetic attempt at SEO?
Well, it's coming after my Cybersecurity Headlines hard here, because these are false but convincing breach stories that trigger real-world crisis responses. A CyberScoop article highlights cases where entirely fictional events were reported as real, old resolved breaches resurfaced as new, and AI-generated quotes were falsely attributed to experts. These could potentially waste cybersecurity resources, damage reputations, influence regulators and investors, and even help attackers make phishing or impersonation campaigns more believable. This is one that, as a producer of a cybersecurity news show, I need to be conscious of and on the lookout for, more so than ever. I'm curious, though, Brett, for you as an organization: I can see how this could be used to waste resources, and that's damaging enough; we only have so much attention. But have you seen anything like this in the wild? And is there a way to more directly weaponize this, beyond making a company spin their wheels on nothing? I think when I read "ghost breaches," I looked at that and said, okay, does something exist if Gartner hasn't created the Magic Quadrant for it yet, right? This is already happening; it just wasn't being counted as ghost breaches. If you remember, back in '23 or '24, a few years ago, there was a fake breach notification sent to the state of Maine, but there was actually no breach, right? So to me, we've seen things like this; I just don't think they were counted as ghost breaches. Now we've given it a name, so here comes the Magic Quadrant for it. But "following yesterday's incident at this company, please reset your credentials at this link," and the whole thing is fabricated. Recipients are going to sit there and go, yeah, I think I heard about a breach, and now they're going to go ahead and put their information in.
We're going to see attackers generate plausible-looking sample data, claim a breach has happened, and demand payment. People and companies who are maybe not prepared, who don't have the right contacts, are going to panic and pay. So I think you're going to see these things more and more, but we've already seen good examples of them; we just didn't have a category for them. So thank you to whoever coined "ghost breaches." Now we have a way to quantify it. Mike, are you getting spooked out by ghost breaches, or is this established practice by another name, perhaps at more scale? Well, back to Brett's example of what happened in Maine: how do you defend against something that's not real? I think you have to focus on validation, the speed of validation, and your trusted sources of truth, not just detection. It's a test. Threat actors are going to be looking at this and saying, okay, let's see what they do when we pull this lever or push that button. The next breach might not be real, but the impact will be. To me, this is fascinating and dangerous, because even a fake breach, as Brett said, is going to trigger a response. It's going to trigger regulatory scrutiny and reputational damage. I haven't seen it widely operationalized yet, but tomorrow's another day, right? It's absolutely coming. Threat actors could combine this with phishing and market manipulation. Yeah, I mean, I think about prediction markets, right? All of a sudden you have a Kalshi contract, "company X will have a reported security breach," and somebody gets a payout. There is now one more mechanism to very quickly put out plausible media.
I mean, this is essentially a multimedia deepfake, potentially, and there are now different ways to monetize it beyond "we're jerks and we want to cause someone to get audited," or at least give their comms team the worst night ever. But I even think about a story in Headlines this week about Lovable, right? They were denying that they had a data breach: some API chicanery based on unclear terms of service. Basically, there was no leak; things were working as designed. It was just that the design was bad. But I could totally see that taking on a life of its own if you just wanted to cause pain for them, right? And it feels like every company now, particularly on the comms side... to me this is almost as much about how linked your cybersecurity structure and your IT structure are throughout the organization, right? To make sure you're getting that communication up to your customers, who may see it in the news, who may see something get picked up by an outlet without a lot of scrutiny that still gets a ton of run on social media. All of a sudden you have people worried about it. This is potentially a way to make sure that relationship is strong enough to respond quickly to that kind of stuff, because that's where I could see it doing the most damage: "security said there was nothing, but we didn't tell our customers that," and now all of a sudden everyone's mad at us. That could cause some very serious harm. So I think you hit on a couple of great things, actually. What's the comms department going to do? You're going to have to add a fabricated incident to your IR playbook: who's going to authorize a public denial, and what is the holding statement actually going to say? And just like we would say in a real crisis, right?
Deciding that in the middle of it is not going to work. In the old playbook, something real happens, then you respond. In the new playbook, something might be real or might not, and your 30-minute response window is now 30 seconds. What are you going to do about it? Now, if we want to combine our different shows here and go back to the best worst idea, right: we just had in the news where that soldier bet on Maduro. So what if people start betting on companies getting hacked and then start publishing fake hacks to get paid out? I guarantee that's already happening, right? And then, again, all of a sudden there is this massive incentive structure for something that doesn't actually require any technical acumen other than writing a convincing story and getting it some run somewhere. Exactly. And there's enough ambiguity here. My favorite non-headline is whenever they publish "there's a leak of 100 billion credentials," and it's just an amalgamation of old data leaks. With very little effort you could say, we have this data, it came from this organization, and turn that into an incident that "just happened." It gets some run. It's a nothingburger, to the CyberScoop article's point. But all of a sudden I just made a couple hundred grand on Kalshi or something like that. So this has got to win me the best worst idea, right? I want this counted as the best worst idea: you find out there's a breach, or a potential breach, you put some money down on Polymarket so you can make millions, and regardless of the outcome, you're a winner. You're not a white hat. You're not a black hat. You're like a green hat, right? You're just trying to make money.
You're just trying to protect your wealth. That's your spokesperson; your spokesperson is your comms. And I want to go back to it: you're defending against something that isn't real, so why not put the spin on it right now? If I were putting the 10-Q together for a publicly traded company, for example, I'd say, hey, we're noticing an uptick in ghost breaches, so we're defending against it by running tabletop exercises. The story is already curbed before it even happens. It's all about spin, and you might as well put the spin on it before it happens, so you can say, hey, we predicted this would happen. And if I were looking at a competitor that was about to go public, or something like that, that would be a target, right? Their stock is going to go up; curb it by pushing an AI-generated ghost breach on them. And there are also a bunch of AI companies about to go public. Oh, the mind reels. Guys, why are we giving people bad ideas? So this is a shout-out to Jay Schmooze in the chat, who, Brett, gave you the best worst idea. I waited for David to second that, but I'm sure that's coming. David's just crying at the idea; the stress this will put on Cybersecurity Headlines will know no bounds. Unfortunately, we're just about out of time here on the show. But before we get out of here, Brett, I want to know from you: what's one piece of advice, maybe pulling out some positivity, the power of positivity, that we can share with our audience? Yeah, I think that despite everything you're hearing in the news and the speed at which everything is going on, it always comes down to focusing the team and getting them aligned. And it's always going to be about reducing that risk surface.
We can't fix everything all at once, but we can look at what's going to make the biggest impact. And the positive side of this is that it's still mostly focused around identities. So that's where I would like to see teams and companies focus right now. Mike, what about you? What advice would you have for our audience before we head out? I think it comes off the tail end of the stories we talked about today: tabletop exercises. At least talk through the scenarios, the what-ifs, and whether you're prepared with the right actions, the speed of those actions, and what's needed to make sure your response is true to the threat. You can't overemphasize tabletop exercises. There are enough meetings people have today that could be emails, so why not have a well-constructed, well-thought-out tabletop exercise on some of these things and think it through: what's your response going to be, what's your next step of action, who are you escalating to? That's the thought I'd leave for the audience. Words of sage wisdom from both of you. Thank you so much. And thank you also to our audience for having some fun in the chat today. Shout-out to Bone Circuit in our chat, sharing some interesting AI agent setups and the like. New Face, haven't seen you there before, so I hope you can make it to another Department of Know next week. Some of our other favorites, too: Jay Schmooze, as we already mentioned, Kevin Farrell, and the big boss man, David Spark, all having some fun and making it a really, really fun place to hang out. So make sure you join us each and every Friday, 4 p.m. Eastern. Get involved in the chat and have some fun too. Thank you, Michael Bickford, the former CISO at the New York State Gaming Commission, and Brett Conlon, CISO at American Century Investments. Truly appreciate having you on the show. Two of my favorite people to have on.
So we'll have to have you back on before too long. If you want to follow them on LinkedIn, the links to their profiles are in our show notes. I said the word "link" way too many times; that's okay. Check them out there and give them a follow. They are good follows. Folks, thanks also to our sponsor for today, ThreatLocker. Remember, you can send us feedback anytime at feedback@CISOseries.com. Join us next Friday, 4 p.m. Eastern, for another edition of the Department of Know. And be sure to register for our next Super Cyber Friday event coming up on May 1st: Hacking the Death of Entry-Level Jobs. Are they dying? Open question; we will get some resolution, or at least some opinions, on that. That's at 1 p.m. Eastern, so go to CISOseries.com/events to register. Get in on that. We have a vibrant chat room there as well; we play some games, and you can win some swag. It's a good time. Thank you so much for joining our Friday standup. Have a great weekend and stay secure out there. Until the next time we meet, for myself, for our wonderful producer Josh, and for all of us here at the CISO Series, including the big boss man David Spark, here's wishing you and yours a super sparkly day. Cybersecurity Headlines is available every weekday. Head to CISOseries.com for the full stories behind the headlines.