You're listening to the Cyber Wire Network, powered by N2K. No, it's not your imagination. Risk and regulation really are ramping up, and these days customers expect proof of security before they'll even do business. That's where Vanta comes in. Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're getting ready for a SOC 2 or managing an enterprise governance, risk, and compliance program, Vanta helps keep you secure and keeps your deals moving. Companies like Ramp and Riders spend 82% less time on audits with Vanta. That means less time chasing paperwork and more time focused on growth. For me, it comes down to this. Over 10,000 companies, from startups to large enterprises, trust Vanta to help prove their security. Get started at vanta.com slash cyber.
Mythos leaks. The DOD preps a more aggressive cyber strategy. A former FBI cyber official urges homicide charges for hospital ransomware deaths. LotusWiper targets the Venezuelan energy and utilities sector. Over 1,300 SharePoint servers remain unpatched against a spoofing vulnerability. The Harvester APT group deploys a new Linux version of its GoGraw backdoor. A new Lotus Lite backdoor targets India's banking sector. The Mirai botnet exploits discontinued routers. Our guest is Brian Vecchi, field CTO at Varonis, discussing how organizations can safely adopt AI and autonomous agents. And a satirical startup sells cleanroom clones.
It's Wednesday, April 22nd, 2026. I'm Dave Bittner, and this is your CyberWire Intel Briefing. Thanks for joining us. It is great, as always, to have you with us.
In news that should shock almost no one, a small group of unauthorized users gained access to Anthropic's unreleased Mythos AI model, despite the company's efforts to restrict it to vetted partners because of its potential cybersecurity risks. According to a person familiar with the situation and materials reviewed by Bloomberg News, the users accessed the model through a third-party contractor environment and online investigative techniques, including scanning unsecured resources like GitHub. Anthropic says Mythos can identify and exploit vulnerabilities across major operating systems and web browsers, which is why it's being distributed only through its limited Project Glasswing testing program. The company stated it's investigating the reported access and has no evidence that its core systems were affected. While the group reportedly used Mythos for benign experiments rather than cyberattacks, the incident highlights how difficult it can be to contain powerful AI tools and raises concerns about whether other unauthorized parties may also have access.
Mozilla says Anthropic's Claude Mythos preview identified 271 vulnerabilities in Firefox, though only three received CVE designations in Firefox 150, suggesting most were lower-severity issues. Mozilla noted the bugs were within the reach of elite human researchers, not entirely novel flaw classes. Palo Alto Networks reported the model performed roughly a year's pen-testing work in under three weeks, highlighting growing enterprise risk from advanced AI-driven security tooling.
The Defense Department is preparing a new cyber strategy aimed at aligning military cyber operations with the Trump administration's more aggressive approach to digital adversaries.
Officials say the plan will integrate cyber capabilities across all warfighting domains, strengthen operations below the threshold of armed conflict, and advance the Cyber Command 2.0 effort to modernize cyber forces. The strategy builds on the White House blueprint calling for expanded offensive and defensive cyber actions to impose costs on adversaries and improve coordination with industry. Senior Defense Department officials told lawmakers a roughly $1.5 trillion budget request prioritizes expanded cyber forces and digital warfare capabilities to counter increasingly disruptive nation-state threats. The proposal includes $20.5 billion for cyberspace operations, supports the Cyber Command 2.0 restructuring effort, and funds zero-trust architecture and infrastructure protection. Officials said cyber is now central to military modernization and deterrence, alongside $58.5 billion for AI and command and control initiatives, while workforce shortages and organizational coordination remain ongoing challenges.
A former FBI Cyber Division official urged the Justice Department to consider felony homicide charges when ransomware attacks on hospitals contribute to patient deaths, arguing penalties should match the severity of harm. Cynthia Kaiser also called for possible terrorism designations for groups that repeatedly target health care providers and urged Congress to restore funding for state and local cybersecurity programs facing cuts. Lawmakers and experts warned that reduced support for the Cybersecurity and Infrastructure Security Agency could weaken ransomware defenses, citing workforce losses and the suspension of its pre-ransomware notification program, which previously warned thousands of organizations of imminent attacks and helped prevent billions in damages. Witnesses said continued funding, information-sharing authorities, and defensive investments remain critical, despite some progress against ransomware threats in recent years.
Researchers at Kaspersky warned that a previously undocumented wiper malware called LotusWiper has targeted the energy and utility sector in Venezuela in a destructive campaign likely intended to permanently disable systems. The attack used two batch scripts to weaken defenses, coordinate execution across networks, and retrieve the final payload, which deletes restore points, overwrites physical drives, and systematically erases files. The absence of ransom demands suggests a targeted, non-financial motive. Kaspersky reported no attribution but noted the activity coincided with regional geopolitical tensions in late 2025 and early this year. The execution chain relied on legacy Windows features and network-based triggers, indicating prior access and familiarity with the victim environment before deployment.
More than 1,300 internet-exposed Microsoft SharePoint servers remain unpatched against a spoofing vulnerability previously exploited as a zero-day and still used in active attacks. The flaw affects SharePoint Enterprise Server 2016, SharePoint Server 2019, and SharePoint Server Subscription Edition, and allows unauthenticated attackers to conduct network spoofing through improper input validation without user interaction. Successful exploitation could expose sensitive information and enable data modification, though not disrupt availability. Microsoft released patches this month, but Shadowserver reported limited remediation progress.
CISA added the vulnerability to its known exploited vulnerabilities catalog and ordered federal civilian agencies to apply fixes within two weeks, warning the issue poses a significant risk to government networks and is a common attack vector.
Researchers report that the Harvester advanced persistent threat group has deployed a new Linux version of its GoGraw backdoor that uses the Microsoft Graph API and Outlook mailboxes as covert command and control infrastructure to evade detection. Symantec linked the malware to earlier Windows campaigns based on shared code and identical errors, indicating expanding cross-platform tooling. The backdoor uses social engineering with disguised document files for delivery, persistence via systemd auto-start entries, and encrypted email-based tasking and data exfiltration. Initial samples were submitted from India and Afghanistan, consistent with Harvester's historical focus on South Asia. Analysts observed no confirmed victims but assessed the campaign as targeted espionage activity, leveraging legitimate cloud services to bypass perimeter defenses and maintain stealth.
Researchers at Acronis identified a new Lotus Lite backdoor variant targeting India's banking sector, delivered through DLL sideloading using a legitimate Microsoft-signed executable. The malware communicates with a dynamic DNS command and control server over HTTPS and supports remote shell access, file operations, and session control, indicating espionage activity rather than financial crime. Code similarities confirm continuity with earlier Lotus Lite builds. Analysts assess with moderate confidence that the activity is linked to Mustang Panda, noting a shift from earlier delivery methods and a geographic pivot from U.S. government targets to India's financial sector.
The Mirai botnet is actively exploiting a command injection flaw in discontinued D-Link routers, according to Akamai. The vulnerability allows attackers to execute malicious commands through crafted POST requests, enabling payload delivery via shell scripts with typical Mirai features such as XOR encoding and hard-coded infrastructure. The affected devices no longer receive updates, and D-Link has advised retiring them. Researchers also observed targeting of TP-Link and ZTE routers, highlighting continued widespread reuse of Mirai source code in opportunistic botnet campaigns.
Coming up after the break, my conversation with Brian Vecchi from Varonis. We're discussing how organizations can safely adopt AI and autonomous agents, and a satirical startup sells cleanroom clones. Stick around.
sales, engineering, Chris the intern, without security even knowing about it. That's where Nudge Security comes in. Nudge finds shadow AI apps, integrations, and agents on day one and helps you enforce policy without blocking productivity. Try it free at nudgesecurity.com slash cyberwire.
Maybe that's an urgent message from your CEO. Or maybe it's a deepfake trying to target your business. Doppel is the AI-native social engineering defense platform fighting back against impersonation and manipulation. As attackers use AI to make their tactics more sophisticated, Doppel uses it to fight back, from automatically dismantling cross-channel attacks to building team resilience and more. Doppel, outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L dot com.
Brian Vecchi is field CTO at Varonis. I caught up with him at RSAC 2026 for this sponsored Industry Voices discussion about how organizations can safely adopt AI and autonomous agents.
Before we dig into our topics here, how's the week been for you so far? An absolute blur. I don't know how many hundreds, I think it's in the hundreds of customer meetings we have this week. Wow. This is my 15th or 16th or 14th, I don't know, something like that, RSAC. Yeah. They're always a blur. This one seems to be even more of a blur than the ones previously. That's both exciting and draining. I can concur. Well, AI is the hot topic. What's top of mind for you when it comes to AI's intersection with cybersecurity? What's really interesting is we're kind of in a new phase of AI. Like we started talking about AI and data security and how data security and AI security are so closely intertwined like three or four years ago. And what's interesting is the pace of change has accelerated. Now things change not in six decades or six years, but in six days. Things happen extremely quickly. The number of conversations that focus, you know, I talk to security people. You know, the chief information security officers and the architects and the AI security architects. And they're all in a position where their organizations, I don't want to use the word businesses because they're not all businesses. There's nonprofits, there's government organizations. Everybody has a mission, though. Something they want to accomplish. And their leaders, not the security leaders, the business leaders, the organizational leaders, they want to move more quickly than the security and governance capabilities can possibly keep up with. Three years ago, it was conversations around how do we enable our workforce to use AI tools like Microsoft Copilot or ChatGPT. Those questions and those issues are still relevant, but those are problems from two or three years ago. The problems today are I'm now deploying thousands of agents. These agents are autonomous and non-deterministic, which is a fancy way of saying I don't know what they're going to do. I'm going to define an agent to try to automate a business process, but these things, and I try to be careful about anthropomorphizing them too much, they don't make decisions, but they are non-deterministic in that you give them an input like a prompt or an interaction with another agent or a system or a workflow, and you can't predict what the output will be, which makes them fundamentally different from the user and application security that we've seen in the past. It's kind of an innovation gap, where the need for governance and security controls, and governance is a problematic word, governance is intent. Governance is not a control. It's not a capability. It's intent. I would like things to be governed. But the governance and security needed to enable that pace of change just hasn't kept up. The other thing that's really interesting is that AI tools like ChatGPT and Microsoft Copilot and Claude, which have become part of our daily lives at this point, they're extremely smart. They're extraordinarily capable because they've had the internet to learn from. Organizations are looking at that and saying, why can't we do that? Why can't I use AI? Not to replace my knowledge workers, but to make them dramatically more productive. And there's a really interesting answer to why not, why they can't. And it's because the public models, the AI tools that we have come to rely and depend on, have access to the internet. Organizations can't build their own models or leverage agents because the AI tools that they're using are only leveraging 3% of their data.
So they're not as smart and they're not as capable. Let me ask the question, why not? Why can't they use all of their data? There's three reasons. One, organizations struggle to secure their data. If I unleash an agent without any kind of governance over access to my data, I'm introducing risk. Like, I don't know what's going to happen. Agents can get phished like human beings. They can be co-opted. So I need data security. That's a huge problem. Organizations struggle with data security. I know that because I've been working in data security for 16 years. The second thing is AI security. How are these agents and these models and these code libraries and the tools that people are using, are they properly secured? Because if the answer is no, guess what? Somebody's going to put a big hand up and stop your AI usage. The third thing is your adversaries have access to all these same tools. Anthropic released a report in November talking about an almost completely automated AI hacking campaign. 80 to 90% of the work of reconnaissance, identifying targets, and then penetration, lateral movement, privilege escalation, all of the things that an attacker would normally do, were handled by AI agents, broken up into discrete parts. You put that together, the adversaries have these powerful tools as well, and they're innovating faster than security teams and organizations can possibly keep up. You put all of that together, there's an innovation gap from a security and a governance perspective, and organizations are unable to deploy these tools to keep up. It's stopping them from outpacing their competitors. It's stopping them from leveraging the benefits of what could be extraordinarily powerful tools. I may have just said two dozen different things, and I hope all of that makes a little bit of sense. Those are the conversations that we're having this week. I know it's a lot to take in, and it's no wonder folks feel like their heads are spinning. I want to loop back to something you talked about, this kind of tension between the powers that be at the organization versus the security team. And the leadership, the business leaders, saying we must go, go, go, full speed ahead with AI. To what degree do you think that is fear that their competitors are throwing caution to the wind with this and they're full speed ahead? So even if there are risks that we don't know about, we can't be left behind. I think that's a big part of it. But the problem is as soon as you hit a major roadblock, you suddenly have to stop. It's like one step forward, two steps back. I've been telling an AI security story for a few years now, but I tell it over and over again because every time I tell it, people are like, wow, that sounds crazy. One of the big financial services companies was piloting an AI tool, in this case Microsoft Copilot, which is incredibly powerful. I use it every day. I'm building agents using Copilot Studio now. But they were piloting Copilot, and one of the things that big banks do if they want to measure the ROI of a new tool, especially one that's supposed to make people productive, you know who they give it to? The traders. The guys on the trading floor. I used to work at UBS. And UBS had the biggest trading floor in the world in Stamford, Connecticut. Okay. And I would go there because I was in architecture and we were like level three or level four support. There are two things that were really interesting about the traders as users at a bank.
One, if they have a problem, it was like 90 seconds and we had a human being, a body, at the desk. Because if they can't work, the bank doesn't make money. The flip side of it is, if you can make them more productive, the bank makes more money. So if you ever go to one of these trading floors, you see these people have nine monitors. Like they have the latest devices. They get all the best support for good reason. So you want to test a productivity tool, you give it to one of your traders or a group of them, and you say, well, does it help? Does it make them more productive? Because if it does, the bank will make more money. And they gave them, in this case, Microsoft Copilot, which is an incredibly powerful tool. It's a large language model, but it also has access, if you prompt Microsoft Copilot, to all your emails, all of your files, everything that you collaborate on with your team and other people in the company. It's incredibly powerful. You can ask. It's a search engine, but it is also a content generation tool. Like it's what we think of as a very powerful AI tool. And one of these traders asked what I think is a really interesting question. What stocks do our employees invest in? Listen, the bank's got a couple hundred thousand employees. These are smart people. We hire smart people. What do they invest in? Maybe that'll tell me something. Maybe when they get paid or their bonus schedules might inform, I don't know, something about the market. And this trader had been using ChatGPT, so what he expected was, like we would all expect, you know, a few paragraphs or a few sentences, a summary of some data or some analysis. Somebody's done this report somewhere, right? Or maybe Copilot's smart enough to figure it out and do this for me, because I've seen some pretty impressive output from some of these AI tools. So when he asked what stocks do our employees invest in, instead of a couple of paragraphs of text, he got a big table. And in this table were names and social security numbers and account numbers and positions of employee 401ks. That is the reaction I get every single time I tell this story. I was really afraid you were going there. Go on. And what's interesting is it could have been a hallucination. It could have been, you know, because sometimes these tools are designed to give you an output that looks real or looks useful. But it didn't matter, because they immediately had to shut it off. I learned of this story from their vice president of modern workforce. It was her job to deploy these tools and measure their value. And she said, this is a privacy nightmare. Like, we could get sued out of business. I told this story to architects that really understand how these large language models and these AI assistants work. And they kind of pushed back. They challenged me a little bit. They said, Copilot doesn't punch holes through access controls or systems. It doesn't give you access to information that you don't have access to. It's not going to punch a hole through your employee retirement plan system and tell you. That's not how it works. And I said, I know. Because in this case, somebody on their compensation team had created a spreadsheet of employee 401k information. And she had saved it to a team site, which is exactly what she was supposed to do. And she clicked the share button, which is exactly what you're supposed to do. Frictionless collaboration. We work from anywhere, from any device. We share with each other.
And she didn't share it with everybody. She just shared it with people on her team. It was a distribution list or a group or something. The problem is inside that group was an entity in Microsoft 365 called everyone except external users, which is a fancy way of saying that when she clicked that link to share it with what she thought was a small number of people, she opened it up to everybody. Not her fault. The group that she was sharing with had just been misconfigured. It could have happened years ago. Who knows? What was interesting, when we talk about risk, because we're at RSAC, cybersecurity is about risk. It's about risk management, right? I work for a software company. We sell software to help others manage and mitigate risk to get to security outcomes. Risk, when you break it down, is a factor of two things. What's the impact of something happening, right? The impact of loss. And also, what's the likelihood of it happening? That spreadsheet existed. It was in a team site somewhere. It was open to everybody. But up until you gave someone Copilot in order to find it and see it, somebody would have had to go looking for it, kind of digging through this Byzantine morass of SharePoint sites and files. It never happened, until you gave someone the greatest information retrieval tool in the history of mankind. Right. That's what Copilot does best. That's what AI does best. AI leverages data in ways that are both faster and more scalable than we've ever seen. But it means, and this is what I say, that one of the things that stops companies from leveraging these tools is the fact that they struggle to secure their data. And now they struggle to secure the tools and the models and the agents and the code libraries specifically. And then the bad guys are using these tools as well. Imagine that one of the users at this bank got phished. It has nothing to do with AI until the identity that they have now taken over has access to Copilot or has access to agents. Or what happens when I phish an agent? Like these tools directly access data. The era of applications, even web applications, having interfaces with specific controls is going away. Five years from now, you're not going to log into a web browser to access this application, then another page for that application. You're just going to ask an agent to do something, and it's going to directly access all this data, which means the underlying data security, the security of the data itself, the infrastructure, and the agents, is really all that matters. Everything else is just noise. So companies want to move quickly. They want to outpace the competition, or they don't want to be outpaced. But, I mean, we're at a security conference. The security people realize there are new risks, or old risks at a completely new order of magnitude. And the companies that succeed are the ones that, from a security perspective, and I don't even like using the word governance because governance is just an intent, actually secure things and get to outcomes. Outcomes are what matter. Can you measure risk and reduce it and prove that you did it? Can you minimize how long it takes to detect and respond to a threat, even if that threat is completely agentic and completely automated? Not everybody can. The ones that can are going to win. So let's bring it home together then. I mean, what is your guidance for the folks who are anxious about this?
What sort of advice are you giving the folks you interact with on ways to head forward safely? I help people articulate to themselves and to their peers what successful outcomes are. And what I mean by that is, here at RSAC, you go down to the expo floor, there are dozens and dozens and dozens and dozens of tools. Most of them do one thing, if they do it well, and that's provide visibility and discovery. There are a lot of tools out there that are very good at showing you things that maybe you didn't know, which sounds great. Because if you ask security leaders what keeps them up at night or what their primary goal is, they'll say things like, it's the unknown unknowns. It's not the problems that I know about. I can fix those. It's the problems that I don't know about that are going to kill me. But where I try to help leaders think is, okay, let's look at the chessboard. You do discovery, that's move number one. What are moves number two, three, and four? Once you've done discovery, you need to address findings, and if you can't go from observability to remediation, or from finding to fixing, you're going to be back in the same place you were last year. You're not going to move. You're just going to have a bunch of findings that you don't have the people to address, even if you want to. And your adversaries are going to move more quickly than you. So that's step two. And then step three is you can't just discover problems. You need to monitor all of the behavior, and you need to do it usefully. Everybody's drowning in noise. Alert fatigue is a real thing, findings fatigue is a real thing. But in security, context is everything. Because I don't want to just log everything. I want to log things with context, like what data is sensitive and how it's being used and by who and what, because it's all the non-human identities, the NHIs, that we're worried about these days, the agents. What are they actually doing? And agents don't just interact with data, they interact with other agents and they interact with code libraries and MCP servers. There are all these things that you need to monitor effectively. But if you've got all of the right context, you've got context of identity, you've got context of behavior, you've got context of access, you've got context of the underlying data, all that context means you can minimize how long it takes to detect and respond to an issue. It doesn't necessarily need to be a threat. That 401k example, that wasn't an insider threat. That wasn't an outside attack. It was somebody just trying to do their job. That's what organizations really want to be able to address. And then step four, like move four on the chessboard, can you prove you did it? Everybody's got a boss. Can you say, here's what we were trying to do, here's what we accomplished, here's how we measured success, and here's what we're going to do next? If you meet a CISO, and you probably meet as many as I do. A few. Yeah, you've met a few. And you tell them, listen, I'm going to help you measure risk and reduce it, detect and respond to threats quickly, and prove that you did it. That CISO is going to be a hero. You're going to do an amazing, amazing job. So, listen, it's self-serving. I work for a security vendor. We make software. We help other people solve these problems. But however they're doing it, if they're not thinking in terms of outcomes, they're always going to be behind.
All right, Brian, thanks so much for taking the time for us. I appreciate it. I really appreciate the time. This has been great. Thank you. All right. That's Brian Vecchi, field CTO at Varonis.
And finally, a new website, malice.sh, offers, for a modest fee and with a straight face, to liberate software from its licenses by using AI to recreate functionally identical versions without the legal baggage. It is both satire and, inconveniently, a real business that actually delivers cleanroom-style rewrites, inspired by the classic IBM BIOS cloning playbook, now automated at machine speed. Its creators say the point was to make the threat tangible, not theoretical, and the joke lands because it works. The project highlights a growing tension in open source. AI can now reproduce software faster than communities can maintain it, raising awkward questions about attribution, ethics, and sustainability. Critics warn these rewrites strip away the invisible infrastructure of open source: maintenance, security fixes, and shared stewardship. In that sense, malice.sh is less a prank than a proof of concept, and possibly a preview.
And that's The Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire at n2k.com. N2K's lead producer is Liz Stokes. We're mixed by Trey Hester, with original music and sound design by Elliot Peltzman. Our contributing host is Maria Varmazis. Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow. Bye.