ShadowTalk: Powered by ReliaQuest

Did ShinyHunters Compromise Vercel? Every CISO's Cloud Security Visibility Problem

26 min
Apr 22, 2026
Summary

The episode examines the Vercel breach attributed to ShinyHunters, where an employee's compromised third-party AI tool led to unauthorized access via OAuth integration. The hosts discuss why SaaS platforms have become primary targets for attackers and outline three critical defense layers—OAuth governance, token-level controls, and identity provider policies—that organizations must implement to close visibility gaps in cloud security.

Insights
  • 89% of organizations breached via SaaS believed they had adequate visibility, but lacked detection logic on existing logs—the problem is correlation and alerting, not data collection
  • SaaS compromise is economically superior to enterprise network attacks for threat actors: one tool breach exposes hundreds of downstream victims versus one environment per network compromise
  • OAuth refresh tokens bypass traditional conditional access policies designed for interactive sign-ons; detection must shift to token anomaly detection and app behavior monitoring post-compromise
  • The 34-minute breakout window is compressed further by AI-assisted reconnaissance; detection thresholds tuned on historical dwell times will miss modern attacks with accelerated enumeration and lateral movement
  • OAuth governance is a security decision, not an IT hygiene task; centralized admin approval workflows and pre-vetted app allowlists are foundational controls that require policy, not new technology
Trends
  • Third-party SaaS integrations becoming primary attack surface as threat actors exploit trusted OAuth relationships over direct infrastructure compromise
  • Supply chain risk shifting from vendor infrastructure to employee-authorized integrations
  • Shadow IT acceleration through AI productivity tools outpacing security governance
  • AI-assisted attack acceleration compressing reconnaissance and lateral movement timelines from hours/days to minutes, requiring real-time correlation and lower detection thresholds
  • False confidence in security posture driven by mature logging in flagship platforms (Microsoft 365) masking blind spots in secondary SaaS applications (Salesforce, Workday)
  • OAuth governance emerging as formal risk category requiring documented policies with admin approval workflows, scope-based gating, and periodic recertification rather than prohibition
  • Shift from dwell-time-based detection to velocity- and anomaly-based detection for API call volumes, consent events, and data access patterns across heterogeneous SaaS estates
  • Downstream victim exposure scaling: single SaaS tool compromise affecting 700+ organizations (SalesLoft/Drift example), creating cascading breach notifications and regulatory exposure
  • Identity provider conditional access policies requiring segmentation by access type: OAuth grants, token exchanges, and admin authentication each requiring distinct policy layers
Companies
Vercel
Cloud PaaS platform for frontend developers; compromised via OAuth integration with Context.ai, exposing employee credentials, API keys, and source code
Context.ai
AI workflow productivity tool with Google Workspace integration; compromised via Lumma Stealer infostealer, used as pivot point into Vercel
ShinyHunters
Threat actor group claiming responsibility for Vercel breach; known for targeting SaaS platforms via credential abuse and OAuth compromise
SalesLoft
SaaS sales engagement platform; compromised via OAuth integration, exposing 700+ downstream organizations
Drift
Conversational marketing platform; compromised via OAuth integration, exposing 700+ downstream organizations alongside SalesLoft
Anodot
SaaS outage monitoring application; compromised by ShinyHunters, exposing sensitive data from a dozen or more downstream organizations
Snowflake
Cloud data platform; targeted by ShinyHunters via credential abuse attacks
Google Workspace
Enterprise email and productivity suite; OAuth grant controls and conditional access policies discussed as primary defenses
Microsoft Entra
Identity provider platform; conditional access policies and app governance controls discussed for OAuth and token-based access
Salesforce
CRM platform cited as example of SaaS application with visibility and detection gaps compared to Microsoft 365
Workday
Human capital management platform cited as example of SaaS application with uneven logging and detection coverage
Rockstar
Entertainment company exposed via Anodot compromise; example of downstream victim in SaaS supply chain attack
ReliaQuest
Cybersecurity company providing GrayMatter agentic AI security operations platform; podcast host and sponsor
People
Brandon Serrato
Co-host discussing defense strategies, conditional access policies, and leadership recommendations for OAuth governance
John Diljan
Co-host analyzing Vercel breach mechanics, ShinyHunters tactics, and visibility gaps in SaaS security monitoring
Quotes
"89% of organizations that suffered a SaaS breach last year believe they had appropriate visibility at the time. And most of them had logs, but what they didn't have was alerting on what mattered."
Brandon Serrato (Opening)
"Hitting one SaaS tool gets you hundreds of victims, right? Hitting one enterprise network is going to get you one environment. If you're a financially motivated group, that's not even a decision anymore, right? You go where that multiplier is."
Brandon Serrato (Mid-episode)
"You can't detect what you cannot see. And right now, most organizations have a blind spot specifically at the SaaS and cloud integration layer."
John Diljan (Closing segment)
"If your defenses stop at the systems your team deployed, ShinyHunters already knows exactly where to start."
Brandon Serrato (Conclusion)
"OAuth governance is not a question of whether logging is enabled or conditional access policies exist. It's whether your coverage actually reaches the places attackers are exploiting now."
Brandon Serrato (Final takeaway)
Full Transcript
89% of organizations that suffered a SaaS breach last year believe they had appropriate visibility at the time. And most of them had logs, but what they didn't have was alerting on what mattered. Well, today we're digging into the breach that put that gap on display and what it actually takes to close it. Welcome to Shadow Talk, a cybersecurity podcast powered by ReliaQuest, the leader in agentic AI security operations. I'm Brandon Serrato, director of GrayMatter Operations. And I'm John Diljan, threat intelligence analyst. And today we're talking about why your cloud integrations are probably less secure than you think. So John, the Vercel story just broke on Sunday and it's all over the news. It's the same story we keep seeing with these third-party and supply chain risk incidents. Yeah, it really is, Brandon. And what strikes me about this one is that Vercel didn't get compromised in the traditional sense of phishing emails or vulnerability exploitation. Their infrastructure wasn't exploited for initial access, but rather through a third party. An employee signed up for a third-party AI productivity tool using their enterprise account, granted it broad permissions, and that tool then got compromised. And suddenly, Vercel's internal data is on sale for $2 million on the dark web. And really, that high-level attack pattern of threat actors breaching widely used tools and achieving third-party access is the standard playbook for groups like ShinyHunters. Compromise one integration, and hundreds of downstream tenants are exposed. And the number from the SalesLoft/Drift breach, 700 organizations exposed through that one tool, is what should make every security leader pause and ask: if a SaaS tool my team authorized gets hit tomorrow, would I even know? Yeah, and the tough answer to that question is that most don't. But the really wild thing here is that the logs exist for most organizations. 
The evidence is there, but the question is whether the detections are built around it. Yeah, and that gap between having some logging and having detections ready to fire on the right behavior before damage is done is where most breaches go unnoticed. And that's exactly what we're getting into today. So, John, let's make sure our listeners understand exactly what happened here. Walk us through the Vercel incident to kick us off. Yeah, absolutely. So let me set things up for our listeners. To start, if you don't know, Vercel is a cloud platform as a service, or PaaS, designed for front-end developers to build, deploy, and host websites and applications. And what makes a compromise like this particularly concerning is that the primary risk here isn't limited to the platform, or Vercel, itself. It's the potential for downstream impact on third parties that are connected either directly or indirectly to Vercel. So the Vercel compromise likely started with a Lumma Stealer infection on a Context.ai employee's machine. For some background, Context.ai is an AI workflow productivity tool, and one of its features is deep integration with Google Workspace on behalf of its users, including Vercel. So the infostealer harvested a Context.ai employee's corporate Google credentials, which then gave the attackers access to Context.ai's internal environment. And from there, the threat actors leveraged an OAuth integration between Context.ai and Vercel to pivot into Vercel's infrastructure. A really important note here is that this was a legitimate trust relationship that was originally authorized by a Vercel employee. At no point did the attackers need to touch Vercel directly. So the attacker accessed customer environment variables inside Vercel's platform. 
And separately, a threat actor then appeared on Breach Forums claiming to sell stolen data, including Vercel employee credentials, API keys, NPM and GitHub tokens, and source code. That threat actor claimed affiliation with the group ShinyHunters and asked for a $2 million payment for the data. However, it's important to note that individuals affiliated with ShinyHunters have publicly denied involvement in the attack. With that being said, the tactics are still consistent with what we've seen ShinyHunters do in previously documented attacks. For example, the recent compromise of Anodot, a SaaS application that helps monitor for outages. Yeah, and John, just to restate, the real concern isn't just what happened to Vercel, it's what happens next. A compromise like this doesn't just stop at that vendor. It cascades down to their customers. And that's the same pattern we saw with that Anodot example you just mentioned. Yeah, that's exactly right. And to bring it back to that Anodot compromise, threat actors were able to steal sensitive data downstream from at least a dozen other organizations. And this even reached big-name organizations like Rockstar, for example. And to that point, the playbook we're outlining here is really becoming bread and butter for groups like ShinyHunters. So Brandon, I'll pass it over to you. Can you explain why cloud applications have become such a major target? Yeah, John. So really the economics have flipped, and they flipped a long time ago. Hitting one SaaS tool gets you hundreds of victims, right? Hitting one enterprise network is going to get you one environment. If you're a financially motivated group, that's not even a decision anymore, right? You go where that multiplier is. 
And really the first driver causing all this is that reach these threat actors have. If you breach a widely used platform, you inherit every organization that's connected to it. Steal the data, extort downstream victims, repeat. It's really a distribution channel, not just a target. Yeah. And as we mentioned, this is the ShinyHunters playbook. For example, credential abuse at Snowflake, OAuth compromise at SalesLoft and Drift, and now OAuth supply chain at Vercel. There's different mechanics each time, but it's the same group, the same thesis. Get into a tool that a lot of companies trust and steal downstream data. Exactly. And so now, on that second point, data exfiltration from a widely used platform is just easier than a full network compromise. And when I say easier, what I really mean is that this model is more efficient to execute, right? Deploying ransomware on an enterprise is a dozen-stage attack chain. I've got to get initial access, move laterally, maybe steal more credentials, maybe unhook an EDR, destroy backups, et cetera. Every stage is a place defenders can catch the adversary. Hitting a cloud tool collapses most of that: log in with stolen credentials, pull the data, and the adversary is done. The same outcome at a fraction of the work and a fraction of the risk is really what's appealing here. And so that's the attacker side, right? On the defender side, two gaps explain why this keeps landing. The first part is logging. And not in the way most people are thinking, it's not that these logs don't exist, right? It's that coverage is uneven. And that unevenness is what's creating false confidence. 
A security team has, say, their Microsoft 365 logs feeding into detection, alerts are tuned, content is well built. That source is essentially flawless. It's humming. But when senior leadership asks about visibility related to these SaaS applications, the answer is, yeah, of course we have the logging, it's all there. And the problem with that is that the likes of a Salesforce tenant or the Workday instance, the 30 other SaaS applications and tools the business is actually running on, those sources are either not connected, partially connected, or connected without any detection logic layered on top. One strong source is carrying the whole picture, and the assumption is that the rest of the coverage looks the same. In reality, it doesn't. The second part here is conditional access. And this is where most people, I'd say, are oversimplifying it. It's not one policy. In a chain like this in particular, there are at least three distinct layers where a scoped access policy can break the attack: first, when the OAuth grant is made; second, when that stolen token is later used; and third, when the attacker pivots into internal systems. Each one lives in a different admin console, often configured at various points in time for various different reasons. And even mature programs rarely review them as one connected defense. And attackers know that. So you can have the tools, the policies, and the logs and still have a gap an attacker walks straight through. When we come back: what actually closes that gap? And is the answer more tools? Shadow Talk is brought to you by ReliaQuest, the global leader in AI cybersecurity. ReliaQuest helps enterprise cybersecurity teams contain threats in minutes with its agentic AI security operations platform, GrayMatter. ReliaQuest makes security possible for the most trusted enterprise brands in the world. Learn more at ReliaQuest.com. 
All righty, and we're back. So before the break: tools, policies, logs, and still a gap. We had mentioned that this still isn't enough. So, John, I'm curious, can you break down what the actual visibility problem is here? I feel like we've been talking around it, so let's get into the nitty gritty. Yes, certainly, and picking back up on your conversation before the break. So the issue here isn't really whether the logging exists. It's whether the right data is getting to the right place and whether anyone's built detections on top of it. And we see this across our customer base, right? So thinking about that gold standard of a log source, usually Microsoft 365 comes to mind. It's usually the most mature, right? Visibility is generally strong, detections are live, and response windows are tight. It's not always complete, but it's definitely the closest most organizations get. Then you start looking at tools like Salesforce, Workday, and the rest of the SaaS estate, again, the platforms that ShinyHunters is actively targeting, and that's where the visibility drops off, which puts everyone involved in a tough spot. We've built the detections for these exact attack patterns. The content is tuned and the response actions are ready. But without the visibility available, none of it has anything to fire on. Neither we nor our customers can operationalize what can't be seen. And this isn't us just making a case for ourselves; external research backs this up too. And I really want our listeners to pay attention here, you know, put down what you're working on for a moment and really listen to these next few stats, because they are eye-opening for the problem we're trying to outline. So 75% of organizations had a SaaS-related security incident last year. 
But the number that should really stop every security leader is that 89% of organizations that suffered a SaaS breach believed they had the appropriate visibility at the time. So they thought they were covered, but they weren't. And only 43% of organizations are doing continuous or near real-time SaaS monitoring. The rest are running periodic audits, which really means that you find out about the breach on a schedule, not when it's actually happening. Yeah. Which is a problem, obviously, right? When the average breakout time, again, we talk about it almost every episode, but that window between initial access and an attacker actually moving laterally, is 34 minutes. Yeah, that's a great point. And, you know, to be specific there, that's the window where detection and response is most critical. Exfiltration itself can happen in minutes. But really, once that data is leaving, you're in containment mode. The damage has already been done. So that real detection window is that breakout, when the attacker is still pivoting from one system to the next. And that's exactly where this falls apart, because it ties back to the main point you made earlier. Those controls and logs live in different admin consoles, right? Salesforce is in one place, Workday is in another, Microsoft 365 in a third. That's a heterogeneous enterprise, and it's not wrong. It's just how modern companies are built. But if you've got 34 minutes to stop lateral movement across all of them, you can't be logging into each console to piece the story together. Signals have to be correlated as they come in, across sources, in real time. Otherwise, that breakout happens and you're stuck writing the post-mortem. All right, Brandon, passing it over to you. We mentioned that cloud visibility wasn't the only challenge for defenders here. Can you walk us through what other defenses can be put in place? Yeah, John. 
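The cross-console correlation described here, tying suspicious events from separate SaaS sources to one identity inside a single breakout window, can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the event schema (`identity`, `source`, `ts`) is a made-up normalized format.

```python
from datetime import datetime, timedelta

def correlate_breakout(events, window_minutes=34):
    """Flag identities with suspicious events from two or more
    different sources falling inside one breakout window."""
    by_identity = {}
    for e in events:
        by_identity.setdefault(e["identity"], []).append(e)

    alerts = []
    for identity, evts in by_identity.items():
        sources = {e["source"] for e in evts}
        if len(sources) < 2:  # single-console activity: likely noise
            continue
        evts.sort(key=lambda e: e["ts"])
        span = evts[-1]["ts"] - evts[0]["ts"]
        if span <= timedelta(minutes=window_minutes):
            alerts.append({"identity": identity,
                           "sources": sorted(sources),
                           "span_minutes": span.total_seconds() / 60})
    return alerts
```

The point of the sketch is the shape of the problem: correlation keyed on identity and time across heterogeneous sources has to happen as events arrive, not during a console-by-console review after the fact.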
So I look at this and see three glaring recommendations that come directly from the Vercel breach, because there were three distinct layers where a properly configured policy could have broken that chain. The first layer, and it's the highest-leverage one, is the OAuth grant itself. Google Workspace admins can restrict which third-party apps users are allowed to authorize. A trusted app list, scope-based restrictions, and admin approval required before any new grant goes through are critical. In an environment running that policy, a third-party app can't receive a refresh token without admin sign-off. So no approval, no refresh token, and that chain never even starts. That's a native platform control, no additional tooling required. And this is the layer that breaks the attack before any credentials are ever at risk in the first place. Most orgs know that control exists, but the honest reason it's often not in place is that convenience wins out over security. Every new OAuth grant becomes an IT ticket, production teams push back, and over time that control gets loosened and, sadly, sometimes turned off entirely. Now, the second layer is where things get technically nuanced, and I want to be precise here. Traditional conditional access was built for human sign-ins, right? Device trust, location checks, MFA, et cetera. But when an attacker is using a refresh token that Context.ai had been storing, they're not doing an interactive sign-on anymore. They're making backend token exchanges directly with Google. And most conditional access policies don't apply to that path the way they would to, say, an interactive logon with a user signing in from a new laptop. But what does apply is a different class of control. Scoped restrictions limit what the app can do, even if it gets compromised. 
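The grant-time gating described here, an allowlist plus scope-based admin review, reduces to a small policy decision function. This is an illustrative sketch only; the app IDs and scope strings below are made-up placeholders, not real Google Workspace identifiers.

```python
# Hypothetical pre-vetted apps and sensitive scopes (placeholders).
ALLOWLIST = {"approved-crm-sync", "approved-calendar-tool"}
SENSITIVE_SCOPES = {"mail.read", "drive.read", "directory.read"}

def evaluate_grant(app_id, requested_scopes):
    """Return 'allow', or 'needs_admin_review' for a new OAuth grant.
    No approval means no refresh token is ever issued."""
    if app_id not in ALLOWLIST:
        return "needs_admin_review"  # unknown app: admin sign-off required
    if SENSITIVE_SCOPES & set(requested_scopes):
        return "needs_admin_review"  # scope-based gating, even for vetted apps
    return "allow"
```

Note how the scope check fires regardless of the allowlist: a vetted app asking for mailbox or drive access still goes through review, which is the "scope-based gating" control discussed later in the episode.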
Say a token, in this case from Context.ai, only gives the attacker the scopes the app originally had, no full account control, et cetera. App access policies let admins disable the app across the tenant in one action. And anomaly detection on app behavior, say an app suddenly reading thousands of emails from a new ASN, can trigger and, in this case, revoke the actual token itself. So the lesson here is that once the refresh token is out the door, you're not in the conditional access policy world anymore. You're in a detect-and-revoke world. The third layer I'm going to talk about is Vercel's own identity provider, that single sign-on layer. When the attacker pivoted from the compromised Google account into Vercel's internal systems, that pivot required an actual authentication event. And that's where full conditional access comes back into play. A conditional access policy at the IDP level, say Okta, Entra, wherever single sign-on lives, could have required a managed device for admin console access, step-up MFA before reaching customer data, or an IP allowlist on internal dashboards. Any of those breaks this pivot, even with a fully compromised Google account. This is also where corporate versus BYOD matters most. If your admin consoles require a managed, compliant device, an attacker on their own infrastructure with valid credentials never gets in, no matter how good the credentials they've obtained are. So let's close this out with what organizations can actually do with everything we've just covered. And I want to start on the leadership side. Then, John, we'll have you bring it home from the practitioner level. 
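The detect-and-revoke pattern, flagging an app that suddenly reads thousands of emails from a previously unseen ASN, can be illustrated with a minimal check. The threshold, the per-ASN bucketing, and the idea of a stored ASN baseline are all illustrative assumptions; an actual revocation would go through your identity provider's admin API, which is not shown here.

```python
def flag_for_revocation(baseline_asns, activity, read_threshold=1000):
    """baseline_asns: ASNs this app has historically used.
    activity: list of {'asn': str, 'reads': int} usage buckets.
    Returns ASNs whose token use looks anomalous enough to revoke."""
    to_revoke = []
    for bucket in activity:
        is_new_asn = bucket["asn"] not in baseline_asns
        if is_new_asn and bucket["reads"] >= read_threshold:
            to_revoke.append(bucket["asn"])  # high volume from new network
    return to_revoke
```

The design point: once the refresh token is stolen, interactive sign-in checks never fire again, so the only remaining signal is the app's behavior, volume, origin network, and data touched, post-compromise.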
For security leaders, the headline from this episode is a question you need to answer, not eventually, but right now: who controls which third-party applications are authorized against your enterprise data? Because if the answer is individual employees, you have the same exposure that led to the Vercel breach. Every OAuth grant to an enterprise account creates a new trust relationship: refresh tokens, persistent API access, data flowing to a third party you didn't vet. That's not a decision to leave with end users. It's a security decision, and it needs to be governed centrally. The second priority is treating OAuth governance as a formal category of risk. It's not an IT hygiene task anymore. Employees are connecting AI productivity tools to enterprise accounts faster than organizations can even track. And prohibition here is not the answer; that's just how you create shadow IT at scale. What you need is a documented OAuth governance policy with four specific controls. An admin approval workflow, so no third-party app can receive an enterprise account grant without review. An allowlist of pre-vetted apps for common workflows. Scope-based gating, so sensitive permissions like mailbox read, drive access, or directory access always trigger admin review regardless of the app. And periodic recertification, so stale or abandoned grants get revoked. That policy doesn't require new technology. It requires a decision. So the morning action for the leaders listening: open your identity provider's app governance settings, Google Workspace, Microsoft Entra, or whatever platform fronts your enterprise accounts, and pull the list of third-party OAuth applications currently authorized. Then ask one question: how many of those grants were issued directly by end users without admin review? Every one of those is an uncontrolled trust relationship with your data. That's your starting point. Those are some great recommendations, Brandon, for the security leaders. 
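The periodic-recertification control from that four-point policy can be sketched as a stale-grant sweep over an exported grant inventory. The 90-day window and the field names (`app`, `last_used`) are illustrative assumptions; the export format will differ per identity provider.

```python
from datetime import datetime, timedelta

def stale_grants(grants, now, max_age_days=90):
    """grants: list of {'app': str, 'last_used': datetime}.
    Returns app names whose grant hasn't been exercised within the
    recertification window and should be queued for revocation."""
    cutoff = now - timedelta(days=max_age_days)
    return [g["app"] for g in grants if g["last_used"] < cutoff]
```

Run on a schedule, a sweep like this turns "periodic recertification" from a policy sentence into a recurring revocation queue; the review of what stays approved remains a human decision.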
Let me bring it home for the practitioners here. So really the core issue is this: you can't detect what you cannot see. And right now, most organizations have a blind spot specifically at the SaaS and cloud integration layer. The telemetry that would have surfaced this type of compromise, things like API call volumes, consent events, and data access attempts, is technically available in most environments. But the question is whether it's operationalized. And often it's not turned on, it's not retained long enough, or it's scattered across admin consoles that don't talk to each other. Again, collection isn't the issue, correlation is. So it's not a tooling failure, it's a coverage failure. And really, that's exactly what that 75% SaaS incident rate is telling us. There's one more piece worth flagging here, especially as attackers start running parts of this playbook with AI assistance. Enumeration and lateral movement are compressing. The reconnaissance phase that used to take hours or days now happens in minutes, because the attacker has an AI model enumerating resources in parallel and deciding what to pull next. So what this really means is, if your detection rules were tuned on dwell-time or velocity thresholds from incidents a few years ago, they may not fire on what the current threat actually looks like today. So it's worth revisiting detection thresholds on three things specifically. First, how fast unique resources are being enumerated inside a single session. Second, how quickly failed access attempts start converting to successful ones, which is a signature of automated probing. And finally, how much variety in data types shows up in a short time window. These are all signals AI-accelerated attacks leave behind, and they don't look like the attacks our rules were originally written against. 
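Those three signals, enumeration rate, failed-to-success conversions, and data-type variety, can be computed from session telemetry roughly as below. The event shape and field names are illustrative assumptions, and the thresholds you alert on would need tuning against your own baselines.

```python
def session_signals(events):
    """events: time-ordered list of dicts with keys 't' (seconds),
    'resource', 'outcome' ('ok' or 'denied'), and 'data_type'.
    Returns the three AI-acceleration signals for one session."""
    duration = max(1.0, events[-1]["t"] - events[0]["t"])
    # 1. how fast unique resources are enumerated in this session
    enum_rate = len({e["resource"] for e in events}) / duration
    # 2. denied attempts immediately followed by a success (probing signature)
    conversions = sum(1 for a, b in zip(events, events[1:])
                      if a["outcome"] == "denied" and b["outcome"] == "ok")
    # 3. variety of data types touched in the window
    variety = len({e["data_type"] for e in events})
    return {"enum_rate": enum_rate, "conversions": conversions,
            "variety": variety}
```

A human analyst browsing a CRM rarely produces a high enumeration rate, denied-then-allowed sequences, and broad data-type variety in the same short session; automated enumeration tends to produce all three at once.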
So your morning action for practitioners, what you can do tomorrow morning: go to your highest-value SaaS platforms' admin consoles, again, tools like Salesforce, Workday, or any platform that hosts sensitive business data. You need to confirm that the audit trail is active, retained, and being surfaced somewhere your team actively reviews. Your SIEM, your SOAR, your data lake, your monitoring platform, it really doesn't matter where it lands, as long as it lands somewhere connected to detections. Then check whether you have active alerts built around new OAuth grants, access attempts from unfamiliar sources, and anomalous data retrievals. If any of those are a no, then you have your starting point. Awesome. I love it, John. So really, the bottom line from everything we covered today: SaaS security isn't a question of whether logging is enabled or conditional access policies exist. It's whether your coverage actually reaches the places attackers are exploiting now. The third-party apps, the OAuth grants, the integrations nobody formally signed off on. If your defenses stop at the systems your team deployed, ShinyHunters already knows exactly where to start. That's all for this week. Thanks again for listening to another episode of Shadow Talk. If you'd like to get in touch, we're just an email away; you can send your questions over to shadowtalk@reliaquest.com. Please don't forget to subscribe so you can get next week's episode delivered direct to your podcasting platform of choice. And we'd really appreciate it if you could rate and review Shadow Talk wherever you listen; it'll make a huge difference and help us reach new listeners. We'll be back next week with another episode of Shadow Talk. Thank you.