What the Hack?

Episode 238: The Phone Call that Broke the Bank

41 min
Feb 10, 2026
Summary

This episode examines the September 2023 MGM Resorts cyber attack and subsequent Caesars Entertainment breach, analyzing how human risk and social engineering—not sophisticated hacking—enabled attackers to compromise major casino operations. Guest Charlotte Jupp from OutThink discusses how organizations can manage human cyber risk through education, policy collaboration, and behavioral monitoring rather than surveillance.

Insights
  • 60-90% of breaches originate through human elements, making social engineering and credential compromise the primary attack vector rather than technical exploits
  • The MGM attack succeeded through a simple phone call to the help desk requesting a password reset, demonstrating that security failures cascade from ordinary human moments rather than catastrophic technical failures
  • Organizations must balance security policies with operational reality—employees circumvent overly restrictive policies to do their jobs, creating unintended security gaps
  • AI amplifies social engineering effectiveness by enabling attackers to craft highly personalized pretexts using publicly available information from LinkedIn and other sources
  • Security culture requires two-way dialogue between security teams and employees rather than top-down policy enforcement to identify and fix systemic vulnerabilities
Trends
  • Ransomware attacks increasingly target operational technology and identity systems rather than data vaults, causing business interruption losses exceeding direct ransom demands
  • AI-powered social engineering and deepfake voice/video attacks are making help desk and password reset procedures vulnerable despite being routine security processes
  • Public data aggregation (LinkedIn, people search sites, genealogical databases) enables threat actors to build detailed targeting profiles without direct reconnaissance
  • Regulatory and financial consequences of breaches are expanding: SEC disclosures, FBI involvement, FTC investigations, and lawsuits now follow major incidents
  • Organizations are investing tens of millions in post-breach security improvements, but these investments receive minimal public communication due to reputational concerns
  • SIM swapping and phone-based identity compromise are evolving attack techniques that precede and enable larger enterprise breaches
  • Employees using unauthorized AI tools (ChatGPT vs. corporate Copilot) create uncontrolled data exposure risks that traditional security tools struggle to address
  • Zero-trust architecture alone is insufficient: attackers are increasingly skilled at answering verification questions and defeating authentication challenges
Topics
  • Human Risk Management in Cybersecurity
  • Social Engineering and Pretexting Attacks
  • Help Desk Security and Password Reset Vulnerabilities
  • AI-Enabled Threat Actor Capabilities
  • Ransomware vs. Business Interruption Costs
  • Identity and Access Management Failures
  • Security Awareness Training Effectiveness
  • Lateral Movement and Privilege Escalation
  • Data Exfiltration and Unauthorized Tool Usage
  • CISO Risk Prioritization and Dashboards
  • Regulatory Compliance and SEC Disclosures
  • Personal Data Removal and Privacy Protection
  • SIM Swapping and Phone-Based Attacks
  • Encryption Policy Enforcement vs. Usability
  • Behavioral Monitoring Without Surveillance
Companies
MGM Resorts
Primary case study: suffered $100M loss from September 2023 cyber attack initiated via help desk phone call; refused ...
Caesars Entertainment
Secondary case study: breached days after MGM; paid $15M ransom to restore systems with minimal customer-facing disru...
OutThink
AI-driven cybersecurity company focused on human risk management; Charlotte Jupp is VP of Customer Success
LinkedIn
Discussed as source of publicly available employee information (roles, connections, company details) used for threat ...
OpenAI
ChatGPT mentioned as unauthorized AI tool employees use to process sensitive company data outside approved corporate ...
Microsoft
Copilot referenced as corporate-licensed AI alternative to ChatGPT with better data security and licensing controls
People
Beau Friedlander
Host of What the Hack podcast; frames cybersecurity discussion and conducts interview with Charlotte Jupp
Charlotte Jupp
VP of Customer Success at OutThink; primary guest discussing human risk management, security culture, and behavioral ...
Chris Tarbell
Former FBI agent famous for busting Silk Road; cited as expert on SIM swapping and social engineering attack techniques
Chris Krebs
CBS News cybersecurity expert quoted on American company vulnerability and need for muscular approach to cyber defense
Quotes
"between 60% and 90% of breaches start through the human element. It's unfortunately a poor person who usually lets the attacker in to your network"
Charlotte Jupp
"If our security systems only work when we all behave perfectly, is that really security?"
Beau Friedlander
"The initial access came from a phone call that looked from the help desk perspective like a routine support request"
Beau Friedlander
"People aren't doing this maliciously. They have to share this data with customers somehow and give us the tools to do that"
Charlotte Jupp
"We've been in a half decade or so now of very disruptive and in some cases destructive ransomware and cyber criminal attacks that we need a more muscular approach"
Chris Krebs
Full Transcript
In September 2023, one of the most tightly controlled environments on Earth stopped working. It wasn't a power outage. We're not talking about a power plant either. It wasn't a fire. It was a cyber attack. Now, what's so secure, right? What's the most secure place on Earth? If it's not a bank? Las Vegas is still under a cyber attack. This is day six, and here is everything you need to know and are not being told. Guests were locked out of their rooms. Hotel key cards failed. Slot machines went dark. Systems have still not recovered at MGM properties in Las Vegas, but the hotels are busier than ever. If you're anything like me, the first thing I think about when a casino is attacked is Ocean's 11. But it was the usual suspects. It was a cyber attack. We've seen other large scale attacks. Tonight, Riviera Beach, Florida is the latest city to pay hackers who took over its computer system. A warning tonight from federal investigators that hospitals are being targeted by hackers launching ransomware attacks. It's been more than three weeks since MGM Resorts was hit with that massive cyber attack. Now as the company works to return to normal, it sent more information out today explaining what happened and detailing the steps it's taking to get back to normal. So we know this is the new normal. What should we be worried about? Well, we should be worried about our data being out there because not just our data, my data, your data, everyone's data, because everything can be used. I'm Beau Friedlander, and this is What the Hack, the show that asks, in a world where your data is everywhere, how do you stay safe online? You see the cases all the time. I think the big MGM was a few years ago in Las Vegas. This is Charlotte Jupp. I had a friend who was staying at the time. He couldn't watch the television in his room because it's all connected to the internet. Charlotte knows a lot about this story because she had a friend who was involved in it. She knows a lot about cyber. 
But let's stick with the story. He couldn't get in the door because it's key cards. They couldn't take payment. And the ramifications and implications are so big in terms of what it can do to your business that everyone can become a target now. Charlotte Jupp is the VP of Customer Success at OutThink, an AI-driven cybersecurity company focused on human risk management. And we're going to be talking about human risk today. So with that in mind, what is cyber risk management and human cyber risk management in plain English? Great questions. Cyber risk management would be, from a business perspective, understanding where you have different cyber risks within your business and taking different steps to try and reduce those, with the overall aim being to stop a breach from happening. Between 60% and 90% of breaches start through the human element. It's unfortunately a poor person who usually lets the attacker in to your network, whether that's your own private account, being an individual, whether it's being into a business. It would be someone perhaps clicking on a phishing link or giving away credentials, falling for something like a deepfake scam. She works with global organizations and CISOs to understand how real people interact with real security systems, not pie-in-the-sky hypothetical perfect things, but what we actually encounter in real life, and how to reduce risk without blaming employees. This is all about creating a stronger stack. What's a stack? A stack, think of it as like Swiss cheese. One slice of Swiss cheese, you can see right through at least parts of it, right? Put another piece of cheese in front of it and turn it, and you will be able to see through less of it. And if you keep stacking those slices of cheese, you will eventually have something you can't see through. That's the goal. 
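The Swiss cheese idea can be made concrete with a little arithmetic: if each layer of the stack independently misses some fraction of attacks, the chance an attack slips through every layer is the product of those miss rates. Here is a minimal sketch; the layer names and numbers are made up purely for illustration.

```python
# "Swiss cheese" defense-in-depth as probabilities. Assumes each
# control fails independently with some miss rate; stacking controls
# multiplies those miss rates together. Numbers are illustrative only.

from math import prod

def breach_probability(layer_miss_rates):
    """Probability an attack slips through every layer,
    assuming the layers fail independently."""
    return prod(layer_miss_rates)

layers = {
    "email filtering": 0.20,     # misses 20% of phishing
    "awareness training": 0.40,  # user still clicks 40% of the time
    "MFA challenge": 0.10,       # bypassed 10% of the time
    "endpoint detection": 0.15,  # payload runs undetected 15% of the time
}

p = breach_probability(layers.values())
print(f"Chance an attack gets through all layers: {p:.4f}")  # 0.0012
```

Each layer alone is full of holes, but four mediocre layers together let through roughly one attack in a thousand, which is the whole point of the stack.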
Now, the goal is a noble one and it's important, but the fact is the way that most compromises still occur, not most, well, most actually, is through a fallible human being. So that's why we talk about human risk today and what it means. And so it's really important to understand what the susceptibilities are for you as a person, what you're susceptible to, and trying to take steps to mitigate those susceptibilities. MGM isn't just a hotel operator. It's a casino company. Casinos are a very particular kind of business, right? They're not just about hospitality. A ton of places do that. They're about continuous, regulated financial transactions. They're 24-hour banks. Every slot machine pull, tap, button push. Every table bet, every loyalty card swipe, every comp is tracked in real-time. Identity, access, money are all tightly linked. And they're all supposed to work without interruption. And by the way, those cameras that are everywhere, they're not just to see if you're cheating. It's also, you gotta think that there's some facial recognition happening there. Now, you gotta think doesn't mean you gotta think, but I gotta think. That's what I think. I asked how long is this going to be? This will be one day, it could be three weeks. I don't even want to pay with my card right now. I'm scared that they're going to hack all of our information. When those systems go offline, it's not just an inconvenience for guests. It creates immediate operational, regulatory, and financial problems. They operate under strict gaming regulations, and they rely on precise accounting to prove that games are fair, payouts are accurate, and money is moving the way it's supposed to. You know, which is why I talked about Ocean's 11 earlier. But there are other ways to hurt a casino than breaking into a vault. And the MGM compromise is a great example because they lost an estimated $100 million as a result of their incident, largely as a result of business interruption. 
What this incident exposed wasn't just how a breach can happen, but how fragile even highly controlled environments can be when identity and access systems fail, as they will. I think there are certain targets who perhaps would be targeted first. Charlotte Jupp. Where it's easier access to some of the data that you might be wanting to, like the VPs. The attackers didn't start by going after MGM's CEO, right? That would be a waste of time or a lot of time invested for questionable results. They didn't try to guess a master password or break into a single high value account right away. Instead, they went after something far more accessible. Anyone could be a target because you can start knowing that this person is connected to this person or this person is connected to this person. You can see that on LinkedIn. And if you want to make that one jump to try and get through someone to get to someone else, compromising their account and using that account to then maybe get to the more sensitive data or the more sensitive person that you want access to. A group of teenagers hacking casinos. This group, they didn't start with casinos. Chris Tarbell is a former FBI agent famous for busting the Silk Road Dark Web Exchange. According to CNN, they cut their teeth on something called SIM swapping, which we've talked a lot about on this show. And that's when you try to duplicate someone's phone, which requires social engineering a person working at a cell phone store who has great power, access to all your records. And so they can convince that person that you lost your cell phone and I need a new SIM to put in my phone. And then I have complete control of my target's phone. I have a duplicate copy of it. You know, a lot of social engineering comes down to pressure and putting pressure on people. You know, oh, I really need to get this done. It's Friday afternoon. I got to get it to get out of here. I'm trying to get home. And then they set their sights on something bigger. 
So it sounds like they found maybe a number of victims through LinkedIn. They looked on LinkedIn and said, hey, this guy works at MGM. He's on the technical side, which makes sense because, you know, if I'm looking to hack into a network, I'm going to go after someone technical. I want to use the name and password and two-factor authentication of someone who has, you know, power within the system. Okay. Within minutes of the call, the real employee received a notification that their password had been reset. By the time they reported it to IT, the attackers had already gotten in. Is this human risk a new idea? We all know it's not. We know human beings are the weakest link. So why isn't traditional security awareness training enough anymore when it comes to human risk? I mean, it used to be that you could sit through some modules and be like, okay, I get it. I understand what a phishing email looks like. Thank you. I understand the threats out there. Thank you. No, no, please. I'm good. I don't want to watch another video. But we're not there anymore, are we? These are becoming increasingly sophisticated. There's a lot of information about everybody available on the internet these days. So it's very easy for someone who is a hacker or a threat actor to send you something that is really targeted to entice you to click. We've been doing security awareness training for 20 years, but the people factor is still the main way in. Once they had access to a legitimate employee account, they could begin moving laterally, using that account to learn more about the network at MGM, request additional access, and impersonate other users. Now, this works better when you're at a bigger place. People aren't going out of their way to cause risks to their organizations. They're under pressure at work. The goals that you're delivering on tend to be day-to-day, not cybersecurity goals. You'll think about your business goals. 
And if there's a cybersecurity policy in place that might slow you down, you might look for ways to go around it. This kind of movement is exactly what modern enterprise systems are designed to allow, because they need to, because there's so many people using them. Employees are expected to collaborate. They have to share tools, right? Access multiple systems to figure things out, to make sure. I don't know. They all have to look at who's ordering what and where it's going. It makes the day-to-day work at a place like this possible. And it's also what allows attackers to quietly escalate privileges once they're inside, because there's so many people in there. In other words, the breach at MGM didn't hinge on a single catastrophic failure. It happened as a result of a systemic failure. And that systemic failure is believing that human risk is not the most important factor at a large organization. When ransomware did enter the picture, you know, it was already way too late. The initial access came from a phone call that looked from the help desk perspective like a routine support request. And with AI, that routine request is going to look and sound a heck of a lot more believable. It's going to be harder to see through, hear through, whatever. So that's the real starting point of the MGM attack, right? Not a dramatic break-in, but a moment where trust was granted because the system requires that to work, even if it's a zero trust environment, because zero trust just means that we're going to ask some questions, and hackers are increasingly good at answering them. Listen, we all know this is the new normal. If you listen to this podcast, you know it's the new normal. And nothing that we just described is unusual. And nobody deserves to be pilloried or pointed at as having been especially this or that in terms of cybersecurity. Because there is no this or that in terms of cybersecurity when it comes to human risk. Help desks exist everywhere. 
Password resets happen every day without incident. People use LinkedIn because they're supposed to. If this, you know, exploit worked at MGM, it works anywhere. Losing big this morning, Caesars, the casino giant, confirming it's fallen victim to a massive cyber attack. On September 15th, just days after the MGM compromise, Caesars confirmed it had suffered a major cyber attack exposing sensitive customer data. With hackers breaching their firewalls, they say the digital crooks hijacked critical data, including Social Security and driver's license numbers, for customers who signed up for its loyalty program. Unlike MGM, Caesars made a different call. They called the criminals. According to Bloomberg, Caesars Entertainment was forced to pay about $15 million in ransom to restore its systems. Caesars actually chose to pay a ransom. The Caesars disruption? Surprise, surprise, was far less visible to customers. There were no widespread outages. Does that make it right? I don't know. We just gave a lot of incentive to the criminals. MGM refused to pay. Does that make them right? They lost $100 million. I mean, on paper, it makes more sense to pay to play, but I don't think it solves a problem. By the time a company is choosing between those two options, the damage is already done. In fact, the damage was already done before anything happened. The damage is just out there waiting to find a place to land. The breach has already happened, okay? Access has already been lost. CBS News cybersecurity expert Chris Krebs says American companies are too vulnerable to these kinds of attacks. We've been in a half decade or so now of very disruptive and in some cases destructive ransomware and cyber criminal attacks that we need a more muscular approach. There's no good options here. We just have to understand that our information is part of the cybersecurity stack now. And so we're picking between bad options. And so what are the good options? 
The good options aren't made in a vacuum. They're made in an existing system, and they get made by a small group of people inside a company whose job it is to understand their own risk, prioritize the things that matter, and decide what the organization can live with and what it can't, risk-wise. The person at a company responsible for all this is a chief information security officer, and their job is to keep the company safe. Head of security is another way of putting it. But head of security could also include physical security. And a CISO is really just about keeping cybersecurity incidents low or non-existent. There's no such thing as non-existent in the land of cybersecurity. CISOs tend to have lots of risk across their business, and you do not have the time, the money, or the staff to be able to remediate every risk. So it's helping you understand what are your most business critical risks that you do want to do something about, so that your company's not on the front page of the newspaper, as best as you can try to stop that. And that's across all of security, not just human risk. So then in human risk, it would be surfacing that to the CISO. Tell me about these dashboards, typically. What do they look like? And for people who don't know, and they're listening, and they want to know why they're listening, what are they looking at? Is it like the deck of the Star Trek ship where you see all these lights flashing, or is it simpler than that? In some cases, it could be like that, but in a lot of cases, it can be simpler than that. The whole point is to be able to communicate a clear and concise picture and help give advice as to, well, what are the remediation steps? Show the risk and say, well, so what? What do we do? What comes next? That could be thinking about human risk. As I mentioned, it's combining data that you're getting back from your people about their knowledge, their completion, their training, their phishing stats. 
And you could have individual dashboards for every campaign that you run, which will give you just siloed data that talks about what you've learned from that particular phishing campaign or training campaign. But then the real power is when you start to bring that data together, and that kind of consolidation allows you to identify pockets of higher risk. So bringing that data together, perhaps with information about the person behind it. So do they have access to sensitive data? Are they an IT admin with privileged access? Perhaps they work on a device that isn't up to the latest patching standard or isn't up to the latest configurations. And so helping you surface data that relates to an end user, but also applies to wider security decisions that you might need to make. So would you prioritize certain devices for patching as a result of, you know, these people being high risk, and therefore put them in a top tier? Could it be like the example I gave earlier, that you identify where an additional security tool might be a good way to minimize the risk, like something like a password manager? But passwords aren't the only risk. So here's the uncomfortable question. If humans are the main way in, if people are the biggest risk, doesn't that mean the obvious solution is to monitor them more closely? Big Brother style. How do you measure human cyber risk without engaging in surveillance and crossing that line? 
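The consolidation Charlotte describes, pulling phishing results, training completion, and access data together into per-person risk tiers, can be sketched roughly like this. The field names, weights, and thresholds below are invented for illustration; they are not OutThink's actual model.

```python
# Hypothetical sketch of combining human-risk signals into one score
# per user. Weights and tier cutoffs are illustrative assumptions.

def risk_score(user):
    score = 0
    score += 10 * user.get("phishing_clicks", 0)  # simulation clicks
    if not user.get("training_completed", False):
        score += 15                               # skipped training
    if user.get("privileged_access", False):
        score += 25                               # IT admin, finance, etc.
    if not user.get("device_patched", True):
        score += 20                               # stale device
    return score

def tier(score):
    return "high" if score >= 40 else "medium" if score >= 20 else "low"

users = [
    {"name": "alice", "phishing_clicks": 3, "training_completed": False,
     "privileged_access": True, "device_patched": False},
    {"name": "bob", "phishing_clicks": 0, "training_completed": True,
     "privileged_access": False, "device_patched": True},
]

for u in users:
    s = risk_score(u)
    print(u["name"], s, tier(s))  # alice lands in the "high" tier
```

A real dashboard would feed from live systems, but the idea is the same: no single signal flags a person, the combination does, and the tiers drive decisions like patching priority or rolling out a password manager.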
Yeah, that's another great question. So I think a lot of it shouldn't be seen as surveillance, necessarily. One part is education. It's actually having that two-way conversation rather than pushing policy down the chain onto employees. It's listening, saying, well, if you aren't going to follow the policy, tell us why and help us understand why, and we can think about whether we can change the policy if it's super restrictive. Now, you can't always do that. Obviously, policies are there to protect organizations, but equally, understand whether we can do the policy in a better way. So as an example, working with customers, we found out one customer was requiring that all of the data that left their organization going to customers had to be encrypted by an encryption tool that they used. However, through running training and learning, and allowing people to give their thoughts as part of that training process, they found a vast number of their employees telling them, well, actually, we don't encrypt. They were putting their hand up and saying, we don't use the tool that you're telling us to use to send data to customers because the customer can't decrypt it. So obviously it's worthless to us, and we need to send the data to the customers. This is our job. So we just don't encrypt it like you tell us to. And by doing that, it exposed something that they never really realized before: that a lot of the business just weren't using this tool. And the reason they weren't using it is it wasn't fit for purpose. And so then they could change their approach of how they were sharing that confidential information. So they were still making sure they had an encryption policy, but not using a tool that was worthless to the employees, because ultimately they still had to do their job. They're not doing this maliciously. They have to share this data with customers somehow, and give us the tools to do that. 
And so it's exposing, I think, things like that, making it more into a conversation and not something where we're here to watch you and push policy on you that you can't follow. If you give people the opportunity to feed into that policy, I think it builds better culture overall, where you're all in it together and you're all working towards that same goal. Now, the other thing that's true is protecting people from outside threats is important. And the best example that I can think of at the moment, there's a million things that could go wrong. An employee is working on something, they're using intellectual property, or they're using something that the company does not want the world to see. And maybe the company has gotten everyone a seat on an AI LLM, and they're using that LLM, and they don't like the answer. They cut and paste the prompt and text it to themselves and try it on a different LLM that they use on their own. Now, that different LLM that they're using on their own, is it in incognito mode? Is it allowing the AI to train on the data? Is your phone safe? Does a hacker have access to your phone number because it's on a people search site? How do you start to solve for the idiosyncrasies of human involvement when it comes to cyber risk? Yeah, I mean, that's a great question, and it's challenging. And exactly as you say, people aren't necessarily doing something malicious there. They're just thinking, oh, I usually use ChatGPT, work is making me use Copilot. Perhaps let me stick the same question into my ChatGPT because I know I'll get a better output. And that's something we need to constantly make users aware of. It's not just the fact that you help educate them to understand first that the tools that you're using within your workspace, perhaps, are locked down. Your company is paying for licenses. 
It could be that you pay for a license at home for certain tools, but your company will be paying for a license, which means that your data is more secure if you're putting it into that. It's not going to be shared, and it's not going to be used for modeling in the same way. One way that you can look at educating people is through that adaptive security. So understanding, monitoring if people are using tools outside of those which are approved by the business. So like the example I gave, perhaps the business has a Copilot license, but you're actually using ChatGPT. There are tools within the organization that will allow the company to see that you've potentially accessed ChatGPT, that you're putting data into and running queries through that product, and nudging people in that instance, maybe through a little notification that pops up. That could be through Teams, it could be through Slack, saying, oh, do you know you're using this? It's flagging the behavior and reminding people that perhaps that's not secure. So trying to influence and jump in at the point that action is being taken should have much more of an impact. It engages that person. They're doing something at that point in time. It makes them stop and think if they're doing the right thing in that moment. And if you have that learning in the moment of actually making the behavior, it usually is more impactful on you at that point in time. Well, I mean, it could be like getting your hand slapped as you're reaching into a cookie jar. But so are you saying, for our listeners out there, I think she just said that CISOs might have visibility into the fact that you did share something that has company information in it to an outside LLM. Talk about that. Yeah, and that's interesting, I think, sometimes. So obviously it depends on where you work and on your organization. But there are lots of security tools that sit there to protect you and to protect your company. 
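The in-the-moment nudge described here could be sketched as a simple pass over web-proxy logs: spot visits to AI tools outside the approved list and queue a friendly Teams- or Slack-style reminder. The host names, approved list, and message wording below are assumptions for illustration, not any specific product's behavior.

```python
# Hypothetical sketch: nudge users who visit unapproved AI tools,
# based on (user, host) entries from a web proxy log. Domains and
# message text are invented for illustration.

APPROVED_AI_HOSTS = {"copilot.example-corp.com"}  # assumed corporate tool
WATCHED_AI_HOSTS = {"chat.openai.com", "chatgpt.com"}

def nudges_for(proxy_log):
    """proxy_log: iterable of (user, host) visits. Returns reminder
    messages for visits to watched hosts that are not approved."""
    messages = []
    for user, host in proxy_log:
        if host in WATCHED_AI_HOSTS and host not in APPROVED_AI_HOSTS:
            messages.append(
                f"@{user}: it looks like you're using {host}. Company "
                f"data should stay in the licensed tool. Is that the "
                f"right choice here?"
            )
    return messages

log = [("dana", "chatgpt.com"), ("dana", "copilot.example-corp.com")]
for msg in nudges_for(log):
    print(msg)  # one nudge, for the ChatGPT visit only
```

The design point is that the nudge fires at the moment of the behavior, which is what makes it educational rather than punitive.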
That could be to protect, obviously, against people getting in who shouldn't be getting in, but it's also making sure data doesn't exit an organization, whether, again, intentionally or unintentionally. Another example could be you accidentally put two different email addresses from completely different organizations in an email that you're writing, and there is technology that can sit there that can warn you after you've pressed send, but before the email actually goes out, to say, hang on, did you actually mean to mix up these domain names in the emails that you're sending? And similarly, there's web proxy tools that will be monitoring what web traffic is going on, what you are browsing on the internet, and alerts can be configured. So in certain cases, websites can be blocked. You might work in a company where certain websites are blocked and you can't access them on your work device. There are others that might not be blocked, but where alerts can be triggered to give warnings that certain types of activity are occurring. Well, so, you know, short of being omniscient and watching every single thing that every single person does, how do you distinguish between risky behavior and normal human behavior? I mean, how do we start to separate the not great from the fine? And I think that would be combinations. So you're not necessarily looking at one action in a silo. You could, but normally it'd be looking for those patterns, and those could come from many different types of behaviors. Let me take a selection: you could be someone who is clicking a lot on phishing simulation links. Phishing simulations are sent out by organizations to try and train you and help educate you in the techniques that threat actors are using to get you to click. It could be you are someone who clicks a lot. 
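The pre-send email check described above, warning when one message mixes recipient domains, can be sketched in a few lines. This is an illustrative toy under the assumption that mixed external domains are worth a warning, not how any specific mail-security product implements it.

```python
# Hypothetical sketch of a pre-send warning when one outgoing email
# mixes recipient domains, which can signal a mis-addressed message.

def mixed_domain_warning(recipients):
    """Return a warning string if the recipients span more than one
    domain, else None. Domains are compared case-insensitively."""
    domains = {addr.rsplit("@", 1)[1].lower() for addr in recipients}
    if len(domains) > 1:
        return ("Hang on: this email goes to several different "
                "organizations (" + ", ".join(sorted(domains)) + "). "
                "Did you actually mean to mix these domains?")
    return None

print(mixed_domain_warning(["a@acme.com", "b@globex.com"]))  # warns
print(mixed_domain_warning(["a@acme.com", "c@acme.com"]))    # None
```

A production tool would also allowlist common pairings (e.g. a customer you regularly CC), but the core check is just comparing the domain set of the recipient list.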
You could also see from your training that you do, the security awareness training that you're offered, perhaps you don't complete it, or you do complete it, but you click through to get to the end as quickly as possible. So you're someone whose engagement was low. So you haven't necessarily taken the time to educate yourself when it's offered, and then you make the mistake of clicking in phishing simulations. Additionally, then, if perhaps certain behaviors such as using LLMs which aren't authorized, or maybe you are offered a password manager but you choose not to use it, so you can see that that usage isn't happening, you're someone who perhaps is exposing yourself to more risk. And maybe, worst case, you are someone with privileged credentials to a really business-critical system or application that you're using, or maybe you are in the finance team and have access to lots of sensitive data. The after effects, the shockwaves after an attack, they don't just, there's not just one and done, right? They just keep coming for a while. And they become part of your world. You know, Caesars said that they had taken care of it. Don't worry, we took care of it. But they couldn't guarantee it. They said they couldn't guarantee that all the data that was potentially exposed was completely not exposed. MGM spent days getting back to normal. Days. I mean, remember, it was in the news for quite a while. Both incidents had a consequence, which is SEC disclosures became mandatory at that point. And that included FBI involvement right away. State regulators started asking way more questions. The FTC wanted information about data security practices from MGM and probably everyone else. And lawsuits followed, of course. And there was cyber insurance, but there's no way it paid for everything, at least with MGM. It's doubtful. Maybe they insured a lot. I don't know. I guess, you know, casinos know how to gamble. So, but so do insurance companies. So I'm not sure about that. 
But after the dust settled, MGM announced they were going to spend tens of millions of dollars on new security investments and changes to access and authentication. I don't know if they did anything about, you know, protecting personal information. I hope they did. And none of that made headlines, because why would it? If you're like, hey everybody, this is how we're protecting ourselves, that's just not how it works.

Now, we're not talking about a one-off failure, right? It was a stress test that showed how modern companies behave once identity systems fail. And if you want to look at it through that lens, and I encourage you to, everything about our lives today is a stress test. A stress test for how we stay safe.

Humans who don't work in cybersecurity aren't meant to be thinking about this every single day of the week. We as cyber teams do, and we have to help and protect those people, make sure they can go about their day-to-day and function. So how do we stop those mistakes from having bigger impacts, if we can?

All right. So if you zoom out from MGM and Caesars for a second, there's a bigger pattern here. Neither of these attacks started with some secret internal system. They started with information that was already out in the world. Names, roles at the company, relationships, who they knew, family, a lot of context. Things that weren't classified, things that were in fact online for free. You could Google it. Things that were just there. The same dynamics that made MGM vulnerable don't stop at the edge of a company. They show up in our individual lives. They show up where we work. They show up everywhere. So how easy is it for someone else to build a picture of you, who you are, what you do, who you're connected to, without ever interacting with you directly?
If you go and look for me online, and this has been an open challenge to anyone listening to the show, you're not going to find out very much. Not even where I lived in 1999. I'm not there. And if you really want to figure it out, you're going to have to do some digging on those genealogical sites, and you're still going to have a hard time, because it's all been wiped out. It's part of my own cybersecurity protocol to keep myself safe, so that I'm harder to target.

We've been doing shows recently on the scam compounds in Southeast Asia. Those are the ones you see as a text on your phone saying, are we playing tennis today? Or, hey, I wondered if you were around. Or, are we having dinner? Do you answer? I don't answer. And I'm not mean to them either, because those are people who don't have a choice. They're human trafficked and they're in scam compounds. So I just don't answer. You do you. That's what I do.

It starts with thinking about your privacy and what data is available about you online. That could even mean thinking about the types of social media where you might overshare, you know, what you had for breakfast, where you had dinner. Simple things like LinkedIn. Most people will have their professional profiles on LinkedIn, their last however many jobs, what their role encompassed, maybe even just one sentence about what they did in each role. You offer up that information, and a threat actor can start to see what type of data you might have access to, what type of tools you might have access to. If they can see you're in finance, you might have logins to the payment systems. They'd expect you to have certain access, and they can also learn information about the company that you work for.
So your company will probably have a LinkedIn page, which will promote things to make the company interesting, attract people to work there, and obviously talk about the initiatives you're running. Pulling together all of this data, which is now really easy for a hacker with AI to do, it's not like they're manually sitting there looking at every person, means they can make a very targeted approach. In most cases in business that would be through your email, but like you said, if they know you're on a help desk... we're seeing a huge proliferation now of attacks where people phone the help desk and try to get passwords reset. Think about the VIP in the business: you phone up pretending to be the VIP, maybe deepfaking their voice to that help desk, get the password reset, and then you've got the keys to the kingdom, perhaps.

AI does facilitate creating the story that gets someone in the door now. And AI pretexting, which is just social engineering pretexting, is doing your research, taking all of that research, putting it into an LLM, and, I'm not giving instructions, don't do this, asking the LLM: what's the best way to get so-and-so at such-and-such a company to tell me this? That's the real risk we should be worrying about. Not that hackers are getting smarter. They aren't. But AI is making it seem like that.

The risk is that we've built systems where ordinary human moments, an off-guard moment working a help desk, a password reset, a public profile with a little too much information, can cascade into massive problems. It's not Ocean's Eleven, I've said that. It's not an attack on Fort Knox. It's probably harder to hit than Fort Knox. Maybe not, there's more artillery at Fort Knox. But it's a phone call away.
And frankly, a phone call made possible by information we freely share online, with companies we do business with and more. If our security systems only work when we all behave perfectly, is that really security?

Charlotte Jupp, thank you so much for joining us. Charlotte Jupp is the VP of Customer Success at OutThink, and we're really grateful you could join us this week.

Thank you very much. Really enjoyed speaking with you.

Okay, it's time for the Tinfoil Swan, our paranoid takeaway to keep you safe on and offline. Now, if you're listening to this and you're like, oh, there goes Bo again, trying to sell me DeleteMe, think again, because I'm not. Everything you need to do, you can do yourself. And if you don't believe me, go on Reddit and look at some of the comments people have about personal information removal companies. That said, I don't care how you do it, but you need to do it. If your information is out there online and you're concerned about cybersecurity, you're sitting there every day with a layer of stress you don't need, and it's one you can deal with yourself. Not easily, but you can actually go to the DeleteMe site and use the DIY pages to figure out how to remove yourself from all of the different data brokers. Or you can use a personal information removal service that hits 750 places and does custom removals and everything else. I know this is starting to sound like an ad, so I'm going to shut up. But do it. Because it's a layer of the cybersecurity puzzle that's very easy to solve for and very important. And that's it. That's your Tinfoil Swan. I hope you're not mad at me for talking so much about our core competency, but there you go. That's it. Thanks for listening. See you next week.

What the Hack is produced by Bo Friedlander, that's me, and Andrew Steven, who also edits the show. What the Hack is brought to you by DeleteMe.
DeleteMe makes it quick, easy, and safe to remove your personal data online, and was recently named the number one pick by The New York Times' Wirecutter for personal information removal. You can learn more about DeleteMe if you go to joindeleteme.com slash WTH. That's joindeleteme.com slash WTH. And if you sign up on that landing page, you will get a 20% discount. I kid you not, a 20% discount. So yes, call it a pitch, but it's worth it. Thanks for listening.