Cybersecurity Today Weekend: Deepfakes, the Death of Truth, and Verifying AI in the Enterprise
70 min
Apr 25, 2026

Summary
Host Jim Love interviews Rob Gross, co-founder of Cifro, about the erosion of truth in the digital age, deepfakes, and AI security risks in enterprises. The discussion explores how AI systems are fundamentally changing cybersecurity paradigms, requiring verification layers and cultural shifts rather than traditional rule-based security approaches.
Insights
- Traditional rule-based cybersecurity ('tick box culture') is obsolete for AI systems due to their inherent unpredictability and non-deterministic behavior
- Enterprise AI adoption is outpacing security controls; companies enable AI tools without understanding actual usage patterns or security implications
- The biggest cybersecurity threat isn't technical vulnerabilities but the erosion of trust and truth in society, making credential theft and social engineering more effective
- AI verification and monitoring must be built from the foundation; retrofitting security onto deployed systems is significantly harder and less effective
- Company culture and employee trust are critical to security; punitive approaches to mistakes drive risky behavior underground rather than preventing incidents
Trends
- Agentic AI systems rapidly proliferating with minimal security guardrails; enterprises deploying autonomous agents without identity verification or access controls
- Deepfake and synthetic media technology reaching photorealism; traditional visual/audio verification becoming unreliable for content authentication
- CISOs increasingly responsible for enterprise AI governance; security teams forced to secure systems already deployed by business units
- Shadow AI adoption accelerating; employees using personal accounts for Claude, ChatGPT, and other LLMs outside corporate controls
- AI security becoming a table-stakes differentiator; vendors claiming AI security solutions but lacking genuine verification capabilities
- Model Context Protocol (MCP) and similar connection layers creating new attack surfaces; security community recognizing these as connection layers, not security layers
- Shift from preventive to detective security models; enterprises moving toward real-time monitoring and verification rather than access denial
- Generational skill atrophy risk; developers and professionals over-relying on AI tools, losing foundational technical competencies
- Identity verification for AI systems emerging as a critical control; treating AI agents like contractors requiring background checks and credentials
- Optimism about AI productivity gains balanced against realistic security risks; forward-thinking leaders seeking innovation without recklessness
Topics
- Deepfakes and synthetic media authentication
- AI verification and monitoring layers
- Agentic AI security and autonomous agents
- Enterprise AI governance and CISO responsibilities
- Shadow IT and unauthorized AI tool adoption
- Model Context Protocol (MCP) security
- Trust and verification in AI systems
- Credential theft and identity-based attacks
- AI-powered social engineering and jailbreaking
- Rule-based vs. risk-based security approaches
- Employee security training and incident reporting culture
- AI skill development and technical competency maintenance
- Startup security practices and rapid deployment risks
- Identity management for AI applications
- Truth verification in digital content
Companies
Cifro
Rob Gross's stealth-stage startup building an AI verification layer for enterprises to monitor and understand AI usage
FakeSpot
Rob Gross's previous company, acquired by Mozilla; built AI models to detect fake reviews and fraudulent sellers on e-commerce platforms
Mozilla
Acquired FakeSpot; integrated its fake review detection technology into browser extensions
Amazon
Major e-commerce platform discussed as a source of fake review and counterfeit problems; praised for recent improvements in cleaning up reviews, counterfeits, and third-party sellers
OpenAI
Creator of ChatGPT and Sora; mentioned for rapid AI development and recent decision to restrict deepfake generation i...
Anthropic
Creator of Claude; discussed for its security-first approach, the MCP protocol, and an internal documentation leak concerning cybersecurity risks
Slack
SaaS platform example of centralized infrastructure that can be patched and secured at platform level
Microsoft
Creator of Teams; mentioned alongside Slack as widely-deployed communication platform
Walmart
E-commerce platform covered by FakeSpot for fake review and counterfeit detection
Best Buy
E-commerce platform covered by FakeSpot for fake review and counterfeit detection
Shopify
E-commerce platform with significant counterfeit and fraud problems addressed by FakeSpot
CrowdStrike
Security vendor that took over entire block at RSA conference; mentioned as example of major security company presence
SailPoint
Identity management company; CEO participated in founder-based discussion at RSA
Codex
AI coding platform; head of security discussed AI security challenges at RSA
Fortune
Publication that reported on Anthropic's leaked internal documentation regarding model cybersecurity risks
People
Jim Love
Host of Cybersecurity Today podcast; leads discussion on deepfakes, truth, and AI security
Rob Gross
Guest discussing AI verification layer, FakeSpot background, and enterprise AI security challenges
Sayyid
Rob's co-founder at both FakeSpot and Cifro; 15+ years AI/ML and cybersecurity experience
Quotes
"The biggest danger we face in cybersecurity, in my opinion, is the death of truth and trust in society, in business, and in cybersecurity."
Jim Love•Early in episode
"You wouldn't let a random stranger get credentials to your company, right? Like you would verify who they are. With AI, with agents, we are allowing basically unidentified things into our network."
Rob Gross•Mid-episode
"If you don't put in trust and safety in the foundation, you've already lost it. It's hard to go back."
Rob Gross•Mid-episode
"Technical is the cost of admission. You have to be good at the technical part of it. But if you don't understand it's cultural, you're not going to succeed."
Jim Love•Late in episode
"I think AI is going to enable all of us to actually not need to work as hard, but be actually more innovative. We're going to be building new skill sets that are going to be awesome for the future."
Rob Gross•Closing segment
Full Transcript
Welcome to Cybersecurity Today on the Weekend. I'm your host, Jim Love, and I have an interview that came up based on some questions that have been troubling me, and I presume they've been troubling a lot of you. And the question is, are we witnessing the death of truth? And what the hell does that mean? Before we get to that, we'd like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com slash CST. I don't want to get into a political discussion, but truth be told, I'm a politics junkie as well as a tech junkie and a science junkie. And one of the things that I've been amazed at, in politics particularly, was how effectively video could be used to keep track of someone and their political promises. Over the past years, if somebody took a particular position and gave an answer, you didn't just read an article. Somebody would have a video of them, and bang, they would nail it and say, this is what you said then, this is what you're saying now. Social media posts are like this as well, and many of those featured videos. But what happens when you can fake these videos so that they look absolutely realistic? This week, a political group issued a deepfake video of a politician. I'm not going to argue the merits or the morality of it, but they used social media posts and they added additional commentary. And it has to be said, they arranged how the person appeared to deliver this. There's a big difference between saying something with a smile and saying something with a sneer. In other words, they controlled how that person appeared, but it looked exactly like him. Now, what's the big news about this? People have been showing off deepfakes for months. The fact is, these have gotten to be really good. They aren't perfect yet, but they're pretty damn good. 
And I'm not sure how many people would spot the differences. I caught this one because I watched the mannerisms of people and I know how they speak. And if you do an audio podcast, you can hear a voice and you know whether it's natural. But I've also seen a video of a famous person that I knew well and followed. And it took me like two to three minutes to figure out that this was a fake. It wasn't that I knew the person's vocabulary so well. It wasn't the picture. It wasn't the voice that would have given it away. There were no obvious mispronunciations. The tonality and the flow of the voice were correct. And since it was just a talking-head video, it's hard to say if the mannerisms would have given it away. But I just knew there was something wrong. And I went with the feeling. YouTube is awash with these now. And I walked away thinking, have we reached the death of truth? And this isn't just politics. It's in all aspects of content. In the world of social media, you depend on trusting people more than you do networks or channels or shows. But what happens when you can't trust the people, when you can't trust your own eyes, when you can't trust your own ears? How do you know what's accurate? And why does this matter for cybersecurity? Well, it bleeds over in a number of ways. We like to think in terms of technical weaknesses that make the news. In other words, the clever hacker who finds that missing comma in the syntax or the buffer overflow or all of those sorts of things. Most hacks occur because people steal the credentials and/or the identity of someone who has access. That doesn't make as good a story, but it's what happens most of the time. And people are often fooled into giving up their credentials, whether these be passwords or session tokens; it doesn't matter. The biggest danger we face in cybersecurity, in my opinion, is the death of truth and trust in society, in business, and in cybersecurity. 
So when I got the opportunity to interview Rob Gross, I was keen to do it. Rob has experience in this area from a number of vantage points. Welcome, Rob. Hi, Jim. How are you doing? Thanks for having me. Good. So just a little background to establish who you are and what you're doing. You were a lawyer at one point. Yeah, so my actual background is I have training as a CPA and an attorney. I was able to make a shift into the tech world about, I would say, 15 years ago, because I've always had a deep love of tech. As a kid growing up, I was a huge video game player. What I wanted to do with my career was actually become a software engineer making video games. That was my passion. My parents actually said, we don't think there's a future in that. We think you should go and be an accountant. That's a degree that will always be of value, and you should become a CPA. You should learn business first. And they have subsequently apologized for that, because obviously the video game industry is massive. Some games out there are grossing billions of dollars in sales. But I always had a deep passion for technology. My first computer was a Commodore 64 when I was five years old. So it was always something I was very interested in. Early in my career, I started off as a technology consultant, helping companies select ERP systems. This was after the death of the internet, right? It was the dot-com bust. Of course, all companies, all enterprises were like, oh, the internet's over. Or they basically didn't trust connecting their systems to the net. And they wanted to do everything client-side and install SAP and Oracle databases. So that's where I first started getting back into it when I was first out of college. But then subsequently, I went to law school. And then I ended up going to a startup in the city. It was a marketing tech startup. And eventually from there, I met my co-founder, Sayyid, and we started FakeSpot together. 
FakeSpot's whole premise was protecting people from e-commerce fraud. We realized that in the market there was a huge problem with fake reviews and third-party sellers, specifically on Amazon. We also covered other platforms such as Walmart and Best Buy. Shopify sites had tons of issues with counterfeit problems and fakes. But we were really proud of the work we did there, because what we did at FakeSpot was take our background in building models and building our first artificial intelligence, right? Classifiers, small language models. This was even before Transformers. Taking that technology, packaging it up in a very sophisticated way, and giving it away to consumers for free via an extension. Our Chrome extension was very popular. And we ended up growing FakeSpot to around 3 million users worldwide before being acquired by Mozilla. So I've had a very diverse journey throughout my career. But the most exciting thing is, now Sayyid and I have been together for close to 10 years, and we're working on our next journey, building Cifro. And you pronounce it Cifro. So what does Cifro do? So right now, we've come out of stealth as a company, but our product is still in stealth. In a nutshell, we're building an AI verification layer. What we saw in the market was, and this comes from our experience of building models, Sayyid and our data science team have a lot of experience in this, over 15 years of experience working with artificial intelligence, but also experience working in cybersecurity. And from working with these models, we noticed that they are making a lot of mistakes. There are a lot of security problems with them, too. And we said, okay, if we know about this, what do attackers know about it? What are they looking for? How are they utilizing these models? And how can we now protect people and, like you said before, get people to the truth, right? You're using an LLM. You're using a popular service like ChatGPT or Claude. All right. 
Can you trust what's actually coming out of there? You gave it a prompt. What is it returning? Can you trust your interactions with it? Is it actually doing what you told it to do? And this goes to the agent side, right? The agents are the big buzz now. I just got back from RSA, where everyone's talking about protecting agents, protecting enterprise AI, but do people really understand what protection means? So that's what we're focusing on at Cifro: providing enterprises that core ground truth, so that you know exactly what these systems are doing, because at the end of the day, they are systems, what these systems are doing and how they're affecting your enterprise from a productivity standpoint and from a security posture standpoint. I want to ask you a question, then we'll get back on track, because it's just been bugging me and you'll know this better than anybody. Agents have been around for a while, but they really exploded only in the past like 8 to 12 weeks. Absolutely. Yeah. And so when you were planning Cifro, were you thinking about agents? We were thinking about everything. So I think it's really difficult for everyone, all of us, to keep up with this. It feels like almost every other day there's a new model released, there's a new platform released. I know that the team at Anthropic has been just rapid-fire with product development and product releases, not just models, but product releases. But when you break it down into a couple of things, it becomes easier to understand. Because one of the big buzzwords at RSA was, we've got to secure the MCPs, right? But do people even understand what the MCPs are doing? And then it can be overwhelming for everybody. So when you look at it and you talk about agents, right? What is an agent? It's easy to break down. You have a couple of categories. The first category is what everyone knows about, which is your Claude and your ChatGPT. 
We like to call them, well, some people call them chatbots. I don't like to call them chatbots. They're more answer engines, right? These are LLMs that you can have a conversation with and they will give you answers. They're used for productivity and the like, right? And then on the other end, you have actual models. You have LLMs, you have SLMs, right? They're used by data science teams to build internal products, right? So if you're a large enterprise, you have a data science team that's probably building a model that they want to self-host, because they're tired of paying for tokens from the big boys, right? At OpenAI and Anthropic. And then you have your agents, right? You have your agents that are going out there and doing actions on your behalf. So this is like Claude Code. This is Codex. This is Cursor. We consider those to be coding agents. And then you have a whole other class of things that have come out, like OpenClaw. These are personal agents. They're doing actions on your behalf and you're giving them access, which I think is crazy. You're giving them access to your entire computer. And you're saying, based on what you can see, please go out and do these following things for me. And then we have our classic, we can't forget this, our classic chatbot that's been on every page, on every consumer page, helping people book reservations and answer questions about products. And those chatbots too, AI is now in them. For RSA, I actually used a leading platform's chatbot, now an AI chatbot, to book an entire trip. So the landscape is vast, but when you break it down into those components, it makes it easier to understand. Yeah. The reason I ask about agents is because I thought about what people would have to do. When you prepare for one of these shows, you don't just say, next week we're going to go to RSA. 
This is a huge, massive amount of preparation: getting your marketing team together, getting all the material, getting your presentations, getting all the things you're going to do. And I picture some poor guy sitting there going, wow, I really want to talk about zero trust because I've got the latest thing in zero trust. And the marketing team looks at him and says, you've got eight weeks. Find out about agentic AI. So that's the thing about the way that we approach it. We're a growing team, right? So we don't have the marketing arm. We go out to RSA, we go out to these conferences, to have frank conversations with people, right? Just sit down and talk to them. We don't have a booth. Like you said, be prepared. Yes, we are prepared to have these conversations. But we more just want to go out there and talk to people and understand what their problems are. Like, what are you actually facing out there? Because, like you said, there's a lot of buzzwords. When I was walking around the floor of RSA, everybody said they had a solution for AI security. Everyone said they had a solution for agentic security, right? Zero trust, zero trust. One of the big themes there, though, that I found really fascinating: I went to one of the Purple Book events on Monday. And it was about identity, right? It was about establishing identity for all of these AI applications. Now, that's not necessarily what we do at Cifro, but I found the conversation very interesting, because one of the CISOs there put it perfectly, right? You wouldn't let a random stranger get credentials to your company, right? Like, you would verify who they are. And at Cifro, that's our motto: trust, but verify. So he said, we always have a policy at our company, trust but verify. So if it's going to be a contractor or a new employee, right, there are background checks done. Then after the background check, you're going to make sure they have a company-issued laptop. 
You're going to give them a token, right, to access your systems. With AI, with agents, we are allowing basically unidentified things into our network. And he said, that's the scariest thing: you wouldn't do this with a human, but some teams in companies are running way too fast and are getting into trouble, because now they've let an autonomous agent go out there and do actions on their behalf, but they don't know exactly what it is or what it's doing. Yeah. And I would say, I think that one of the enemies we face is our own success at cybersecurity. I mean, what I disparagingly call the tick-box culture, but other people might call rules-based cybersecurity. In other words, we've got all these rules. If we follow all these rules, if we do these things, if we go through all these lists and we get certified, we're going to be safe. And that just blew up. If it ever worked, it's just blown away. Why? These things don't respond in the same way. They're not a rule-based system. That might work for algorithms to some extent, but in the world of AI, you have a degree of unpredictability. The human model you're talking about probably makes a lot more sense. Correct. Personally, I would say that's what we should have done with cybersecurity in the first place, but we're forced to actually go there now. So as you look at this, do you get the sense that people are getting that realization? They're getting the realization really fast. There were a lot of things that happened during RSA, outside of the conference itself, that started waking people up. There were a couple of attacks on popular AI platforms. There have been various other attacks. There was actually one reported today. It was not verified yet, so I don't want to talk about it. But it started waking people up to the reality that, wow, these applications might not necessarily be as secure as we thought. 
I do know that Fortune reported today that internal documentation about Anthropic's latest model was leaked. I don't know by whom, but they said that this model poses one of the world's largest cybersecurity risks they've ever seen. So this is a new reality: the traditional guardrails will not work. One CISO put it to me best this way. The quote was: we finally got control of the machines, of all the boxes, right? We were able to protect and harden everything because it was a SaaS-based infrastructure, right? So a good example is Slack. Everyone has Slack, everyone has Teams. So they put Slack and Teams throughout the company. And when something goes wrong, Slack or Microsoft would patch it, right? So then we distribute the patch, we fix the problem, we're okay, we can sleep at night. But then came the issue of, okay, so we have someone in marketing using Claude Code, and we didn't know about it. Why are they using Claude Code? Because marketers now can actually effectively use Claude Code, or use Claude or ChatGPT, to do growth marketing. But they're using it on their personal account, and we didn't authorize it. So those guardrails, whether put in by security teams, put in by the CTO or the CIO, using large security companies, didn't work in that sense, because people were able to go around them and start using these platforms. And like I said, at the same time, policies are policies, right? If you tick the box, yes, I read the policy, it's not enough anymore, because these AI systems are constantly evolving systems. They're not static SaaS platforms, and that's really where a big risk is for a lot of companies. Yeah. I was a CIO at one point, and we used to worry about shadow IT. The shadow IT I was worried about was nothing compared with this. And that's been my position, and people can argue with me on this. 
If I was going to design the most fundamentally insecure model for a computing platform, AI would be it. Yes. Yes. And this is a disclosed thing through Anthropic. They didn't mention what country it was, but there was a country, they think it was a country's intelligence community, that used Claude Code to go out and infiltrate 30 organizations, including government organizations. And this news went under the radar, but the team at Anthropic caught it and shut it down. But this is how sophisticated attackers are getting. They can automate this entire process. And with that attack, they got Claude Code. They were able to jailbreak Claude Code and get it to think it was the world's best pen tester and had permission to do this. And they went after those 30 orgs and they got into some of them, automated. And that's just one example. Yeah. But in some of these things, the big secret of how you bust through an AI, not just Claude or OpenAI: you ask it three times. And even Anthropic, when people came out and said MCP was insecure, they went, yeah, it is. Duh. You thought it was going to be a security layer? It's a connection layer. You have to harden the security around it. So there are these misunderstandings. Because if you had actually shipped a piece of code like MCP with no security on it five years ago, you'd have gotten killed. Yeah. So it's a different concept. And I'm not faulting Anthropic for this. That's not what it's sold as. Their security team does a wonderful job. I have to say, they're really thinking multiple steps ahead of a lot of other model companies. And OpenAI is making steps in that direction, too. They're really thinking way ahead of the risks of these models. They've done a lot of great stuff on the trust and safety side. My co-founder and I always say this: if you don't put in trust and safety in the foundation, you've already lost it. It's hard to go back. 
And I know that the team at Anthropic has been thinking about this very deeply from day one. Yeah, but we've never done it. We developed and launched the technology. We should have built in security from the ground up, and now it's too late. That's just kind of a feature of Silicon Valley, right? Of the startup world. It's about going as fast as you possibly can. Build the product, release it. Who cares if it's got bugs in it? Release it. Release another one. Capture the market. Get to your Series A, get to your Series B. It's all about speed. And when you go too fast, you make mistakes. It's just inherent in that. And when you go too slow, you don't get funded. If you go too slow, there could be other players in the market that move faster. You're correct. They take your funding. They end up passing you. So it's not necessarily a flaw of tech startups; it's kind of a feature that you have to move fast. But it's always best to stop and ask: are we moving too fast? Slow down. Where do we have problems? Let's fix them before they become bigger problems. Yeah. Well, the nice thing is, for entrepreneurs like you, it creates an opportunity to contribute, to help improve the security and add new ideas. And that's a... Oh, no, that's what we love about this. We sold FakeSpot, and it's one of those things where people say, why are you doing this again? And I say, I'm doing it again because I love building products that help people. That was what was inherent about FakeSpot. And that's what makes Sayyid and me different. FakeSpot came from a personal problem that Sayyid had on Amazon. He actually got ripped off on Amazon. And so did I. And when I found out what he was building, I said, this is actually a really cool mission. I want to join you on it. And that kind of honesty we built into our product and we built into our marketing. I made it a personal mission. 
I responded to every single customer on our contact email, because we had people that said, you stopped me from buying something bad. And I would say, look, I really appreciate the feedback. Can you give me any feedback on the product? And we actually built our product not from typical growth-hacking feedback mechanisms. We built it from real, direct consumer feedback. We wanted to be honest about helping people. And that really helped us grow. We didn't have Facebook ads. We didn't have Google ads. We had just out-there word of mouth from our own users that we were helping them. And with Cifro, we're doing the same. We're honest about what we can do when we speak to our customers. We're honest about what we can do and what we can't do. And we find that approach really helps in the enterprise world, because there are unfortunately a lot of companies out there that tend to oversell their capabilities. And then they get in trouble when the platform goes live and it's not working in certain areas that they said it could. With Cifro, just as we were at FakeSpot, we're all about transparency and building that trust with our customers. Just going back to FakeSpot for a minute. So you find that you're getting ripped off, or people are getting ripped off. And then you started to try to find ways to figure out which reviews are accurate. Yes. Now, obviously, there's some algorithmic things you do or things like that. But what was the guiding star for you in trying to figure out what was true and what wasn't? Yeah. So if you look at online reviews, and this has been a problem for a very long time, fake reviews. So at its core, you would have a small business, and you ask your friends and family: look, I've got a small business, sales are struggling. Can you go out there and just write some reviews for me and help me get up the page rank for Google? So that's the small-business level. 
When people see a small business, if it doesn't have good reviews, Google may not rank it high. When you go to search for, say, I don't know, I'm in New Jersey, a bagel store. We'll use the Jersey analogy, a bagel shop, right? What's the best bagel shop around here? The best one is in Montreal, but that's another story. I've been there. I bought a shirt from there, actually. My wife said, why did you buy this shirt? I said, because it's the best bagel shop in Montreal, and Montreal is famous for their bagels. She said, you have to get rid of this shirt. I go, absolutely not. I actually ended up hiding the shirt because I use it. I wear it to the gym. I love wearing that shirt. It's great. The bagels up there are fantastic. You're right. There are some good spots in New Jersey, though. Yes, there are. I appreciate you recognizing that, because Montreal is definitely amazing. I worked out of Jersey for a while, so our head office was there for a while. Nice. Yeah. In New Jersey, I like to say, even if you're in a small town, you'll probably have three or four bagel shops and probably one gas station. It's crazy. But yeah, anyway, getting back to the reviews part. So you want to try to boost up your ranking. So that's on a small level, right? That happens. But unfortunately, with a lot of e-commerce platforms, it happens in a much larger and more nefarious way. Like on Amazon, there is a lot of competition out there to sell the same product. If you were to look up backpack, you would get backpacks from well-known brands. I'm just speaking in general; I don't know if L.L. Bean's on there, but you get L.L. Bean, you get Jansport, you get Osprey. And then all of a sudden, you'd see these other brands that are like, wow, that's actually 50% less. Why does it have 20,000 reviews? 
And then the stats on this, going back, I'm trying to think back to our FakeSpot data, but the stats are pretty much that for every purchase, it's like either one in five or one in ten people will leave a review. And you can do the math backwards on what that product's sales are. So if you're like, wait a second, this backpack has 20,000 reviews, that means they've had 200,000 sales of this one backpack. That's almost not possible. And what you end up finding out is that a lot of these sellers will give away product for free. And under the law, if you give away a product for free and you ask someone to leave a review, they have to disclose that they were given the product for free. And it happens a lot on these platforms. Amazon has done a good job of this in recent years, trying to clean up the review platform. I have to give them a lot of credit. They've also done a good job on counterfeits, and they've done a good job on third-party sellers, where a lot of these problems come from. But they still have issues. All these platforms have issues with people skirting around these rules. And the penalties are really severe if you skirt around the rules. Amazon has its own program, Amazon Vine, and there was another one too, that actually discloses the product was given away for free. But a lot of guys go around and break those rules. And that's what FakeSpot was trying to catch: the unreliable third-party sellers that were breaking these rules. So you actually knew, this is an honest seller, these are honest reviews, and you can actually trust them. We were never trying to make a product recommendation. We were trying to guide people to the reviews that you can actually trust, because we ended up verifying that they were real. And the reason I ask all that is because there's this idea of how we think through this problem of trust, and I think it lies at the basis of this. 
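The backwards math Rob describes can be sketched in a few lines. This is only an illustration of the back-of-envelope check, not anything FakeSpot shipped: the 1-in-10 review rate is the rough figure he cites from memory, and the function names (`implied_sales`, `looks_suspicious`) and the plausibility threshold are made up for the example.

```python
# Back-of-envelope check from the conversation: if roughly 1 in 10
# buyers leaves a review, a listing's review count implies a sales
# volume you can sanity-check against what is plausible.
# The 0.10 review rate is Rob's rough figure, treated here as an
# assumption, not a measured constant.

def implied_sales(review_count: int, review_rate: float = 0.10) -> int:
    """Estimate total sales implied by a listing's review count."""
    return round(review_count / review_rate)

def looks_suspicious(review_count: int, plausible_sales: int,
                     review_rate: float = 0.10) -> bool:
    """Flag a listing whose implied sales exceed what seems plausible
    for the product category (threshold chosen by the analyst)."""
    return implied_sales(review_count, review_rate) > plausible_sales

# The backpack example from the transcript: 20,000 reviews implies
# about 200,000 sales at a 1-in-10 review rate.
print(implied_sales(20_000))             # 200000
print(looks_suspicious(20_000, 50_000))  # True
```

The point is exactly the System 2 step Jim describes next: step back and ask whether the number is mathematically plausible before trusting it.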
So you've got this one idea of going back in, and I think there's actually a great piece of learning in there: hey, would this make sense? If you step back from it, get out of your System 1 mind, get into your System 2 mind, and look at it and say, mathematically, would that make sense? Yeah. Those are key things, and they really are that step back. Something I always tell people in cybersecurity is, it's like bad touch, good touch: if it feels bad, stop. Nothing in the world needs to be done that quickly. You have to take a breath and ask yourself, does this make any sense? Yeah, you're right. It's the gut check, right? It's the gut check. A lot of things in our world now have us programmed and wired to move fast and not think through things. On the e-commerce side, Amazon was the best at this, and they still are the best. Their mobile app pushes you to get it today. It's always focused on: you can get it in a couple hours, you can get it today. And when will it deliver? If you place this order by now, it'll be there. If you place it in the next 15 seconds, it will be there; we'll send it to you yesterday, right? So it's all kind of gamified. And you notice this in travel too. They'll say, hurry up, there's only one left, and there's always more left. So it's important, I think, for everybody, whether you're shopping online or doing something at a company, to slow down and say, I've got to verify this really quick. Let me verify it and make sure that, on e-commerce, it's the real deal. I'm getting an actual deal, the product's going to be good, the seller is good. And then when you're working at any enterprise, any company: is this answer correct? Imagine you're working with Claude. And we've all seen this, where I've seen an answer come from Claude or from ChatGPT, and based on my experience in this area, whether it's legal or accounting, I know it's wrong.
And I'll say, excuse me, but did you consider this? Did you consider that? And it will go, you know what, Rob, you're right. I didn't. So that's what happens when you actually slow down. There's too much of, hey, Claude gave me the answer, ChatGPT gave me the answer, I'm going to share it with my boss, I'm going to share it with my team, and not actually reading it. One thing we do at Cifro that we're really proud of is we make everyone review their work. And why do we do this? Because we're using AI tools, right? We're an AI shop. But I think it's important for people to maintain their skills, no matter what they are. So with my background, I'm maintaining my skills in operations, running all operations, and maintaining them in a product sense and a marketing sense, leveraging these new tools but always verifying the work and making sure that it's correct. And the same thing goes for what we do on the engineering side. We want to level up our engineers' skills. We're always proud of how we train our engineers, but we also don't want them to lose their skills. And that's happening way too much in the world today, just in general. Yeah, and I struggle with it myself, because I write constantly. I'm writing news stories, I'm writing all kinds of things, and I use AI tools because I can't spell anymore. And I'm not so sure that I haven't traded away part of that spelling ability. In the old days, as an editor, you'd do copy editing, and I was much better at it. So you watch your skills atrophy because you can depend on the tools. It wasn't just AI; we had spell check and all those things before. But you'll find you'll just give that away if you don't exercise it. And so now I spend a lot of time trying to make sure that I do exercise and read and focus on some things myself. But how do you make that part of your company culture? From the beginning, we established that Fridays are AI-free days.
We want to make sure that everyone takes time to review their work. We want to make sure everyone takes time to do something that does not involve artificial intelligence, because it's very hard to disconnect from this now, right? It's extremely difficult. So it's not mandatory for the entire team, but we as founders make it a priority to disconnect from that, because there's plenty of stuff that you can still do without AI. I'll give you an example. AI is still bad at accounting. It's not the best, right? You need someone who knows what they're doing to look at what needs to be done. So maybe Friday will be the day that I do the accounting. The same goes for writing. I love writing too; Sayu loves writing. We don't use AI to write our blog posts or anything we're doing on LinkedIn. And you know what we noticed? It stands out so much better because it's actually genuine. And people realize it. You see it on LinkedIn: all the emojis, all the em dashes, right? It's like you're trying to connect with people, but you're using an artificial intelligence system to connect with them. How about you actually write the way that you write? It's okay if the grammar is not a hundred percent right. One thing driving me nuts is the new Gmail. In Gmail, it will suggest the entire response, and it almost says to me, listen, you suck at email, why don't you just leave this to me? What I noticed is it's not the way that I write. It's not the way that I talk to people. It's in a very robotic tone. The response will be like, hi, Jim. Thank you for having me on the podcast. Hopefully, we can talk soon. Thank you, Rob. I don't say thank you in my emails. I say best. It doesn't even know that. I do not let AI do my emails, because I've seen too many of them come out looking robotic. Also, I've seen too many people sending them to me.
And it's just, you're not connecting with people. You've got to be able to connect with people on a personal level. So those AI-free days allow our team to disconnect and maintain their skills, whether you're in engineering, product management, marketing, or sales. You've got to maintain those skills. That's a good idea. You've done the reverse Google: spent a portion of your time out of technology, which I think is actually a really good idea. Yeah, it is. We also emphasize at Cifro time to connect with your family, time to connect with your friends, go outside, take a walk, exercise. That's really important for us. Way too many of us are just plugged in nonstop. And we find that if you're constantly plugged in, you're probably not going to like what you're doing anymore. You're going to be missing out on different events with your friends and your family. And we always say, if you need to take time for that, please go and do it, because you end up bringing more to your work because you're happier. And I don't necessarily agree with what became really popular last year: 996, right? Don't get me wrong, as founders we work very hard. But it's also important, I think, for everyone to step away from their work and spend time with their friends and family, so that you're actually more productive doing what you're doing at Cifro. Yeah. And just on a personal note, I'm obsessive. I don't work like other people do. I'm writing a book right now. I'll finish this, and I will be up till four in the morning writing. I will keep working at that. My wife looks at me and says, how long does it take other people to produce a book? And I say, they work days. And so I'm obsessive.
And the problem we have is that if people like me run companies, you can start to expect that everybody else lives their life that way, and it's really destructive. Yeah, it is. I'm the same, Jim, I'm the same way. It's true: if you actually love what you do, you don't feel like it's work, so you're able to stay up late, continue building, continue working. I always like to say this, though: at a certain point, the quality of work goes down. It just declines. Yeah. And that's when you know to call it. But if you're passionate about something, it doesn't feel like work. And that's the way we always felt at FakeSpot, and that's the same exact feeling we have at Cifro. And if you're an entrepreneur, right, you're a founder, and you don't have that feeling when you first start working with your product, probably switch to a different product. Yeah. Because you have to have that kind of love and passion for what you're building. If not, every hour is going to feel painful. Every pitch, every sale to investors, it's just not going to come off natural, because you don't believe in what you're building. Yeah. Somebody once told me that Steve Jobs may have been a bear as a boss. He might've been a lot of things that people didn't like. I think Wozniak once said that Steve could have been nicer and gotten the same result. But the one thing that just grabbed me was somebody describing him holding up the phone one time, saying, if I can't fall in love with this, how can we expect a customer to? From people who had been beaten up by Jobs, a lot of them walked away with some wisdom, and that's one of them. So if you can't fall in love with it, and you're just trading time for money, you're not going to excite a customer. I love that example because it's so true.
We were FakeSpot users. We built it for our own problem. When you build things for your own problem, that means you've probably done something that's very useful for others. And if you love your own product, it's easier to go out there and explain it to people and, if you have to, sell it to them. At RSA, I ran into a couple of people who were sales guys, and they were saying, oh, I worked for this company, that company, and it was the worst product, but I still had to sell it. Now, those are probably the best salespeople, because they can actually sell something that doesn't work. But you could see it in their eyes; it was painful doing that. But when you get to a good company that actually has a really great product, and the founders and the executive team believe in it and constantly work on it, it makes your life so much easier as a salesperson. So that Steve Jobs analogy is fantastic. Yeah, I admire people who fall in love with what they do and can develop that. I have trouble with people who think they're going to sell something to you. And that's always a problem, and my audience feels that as well. They're probably sitting there going, why are you guys talking about this sort of stuff? This is cybersecurity. But the reason I love this conversation is that cybersecurity is cultural. Yes, it is. It is not technical. Technical is the cost of admission. You have to be good at the technical part of it, and if you're not, then go do something else.
But if you don't understand that it's cultural, that you have to reach people and find ways to get them to think the way you need them to think to be safe, and that this will be different for different people, if you can't get a hold of that, it's why I think people burn out and why they're sitting in a room going, those damn users, they just don't understand, they don't do anything. Yeah. I started as a stand-up comedian, believe it or not. You'd think I'd be funnier, but I actually did. And I came out of that going, you can't tell an audience to laugh. Correct. You have to get them there. And it's the same thing in cybersecurity. Jim, I was just going to say this. One thing I love about the cyber community is that even though there are a lot of large companies in it, a lot of big players, what you notice at RSA is that it's a very tight-knit community, and you have to connect with people. I always say this: on a personal level, understand. Don't just go out there selling the solution. Understand the problems they're facing. Let them just tell you; let them unload about all the issues, and really understand them on that personal level: this is what I'm facing here, this is what I'm facing there. But also, the knowledge share. At this Purple Book event, you had all these top CISOs talking about all the problems they're facing, having frank conversations about it, and then what they were doing to actually solve it. So people were exchanging ideas. You wouldn't get that in a lot of other industries. People would be like, I'm not going to talk about what we're doing; that's competitive, that's a trade secret, this, that. No, it's a community where people have to discuss ideas, have to pass around information. Because if you don't, you're not protecting the world as a whole, kind of. Yeah. If you're in a company that is not selling a security product, obviously, and security is your differentiator, you've got a big problem.
Because, you know, if you're out so far ahead of everybody else with your great security knowledge, chances are you're overestimating your ability. Correct. And I found the same thing, that CISOs are willing to share a lot more privately, mind you, and I think that's a good thing. Yes. And rarely publicly. So tell me about your new thing and how it relates to that idea of truth. Cifro, now, what does it do? Yeah, so Jim, we're keeping what the product does in stealth. But in general, like I said before, we're building this AI verification layer that's going to allow companies to finally get a real-time picture of what's going on with AI at their company. That's its core functionality. We're building on our experience building artificial intelligence models for, I would say, close to 15 years at this stage, knowing the ins and outs of what these models can do, what they can't do, their limitations and their problems. The approach we're taking is to work with a bunch of partners at the onset to really show them: you may think you have an understanding of what's going on at your company from a security standpoint, or just from an AI usage standpoint, but when you actually put in the Cifro platform, it gives you a true picture of what's going on. And I think that's very important in differentiating from other solutions out there. They may be capturing information, capturing interactions, but not really giving that complete picture. We're talking about everything from main platforms to model development to the smallest interaction, right? What is actually going on at your company when it comes to AI usage, and how can you start to get control of it? Because I think that should be the starting point for a lot of enterprises.
What we've heard in the market is that a lot of companies are saying, all right, we have to enable AI at our company or we're going to fall behind. We have to do something about this. And that usually comes from the CEO, right, talking to the market, whether it's the public market or a private company: we have to put these things in. So they go out, and the first step, we usually see this, is they get a Claude Enterprise account. And they say to everyone in the company, start using Claude for your work. But they don't understand how people are actually using it for their work. And then they'll go to the CTO, the CIO, and the CISO and go, now it's your job to secure it. But it's already out there. We already bought it. So there's a lot of pressure on the tech executive suite to not only secure it but understand what's going on. So that's what we're really trying to do with Cifro at the onset: we're going to give you that picture and that landscape of what's going on, and then we can decide what to do with it down the line. Because you do not want to just block employees. I think it's the worst thing you can do, because there are a lot of innovations in AI that are making employees into, basically, Superman almost, right? Things that would take hours and hours to do can now take five minutes. So you want to enable those employees to actually be more productive, not just say, we're locking everything down and we're blocking it. Yeah. The old Dr. No thing doesn't work, first of all. The other thing is you outsmart yourself. I had a CEO one time, this was the early days of PCs, and I wouldn't take Solitaire off the desktops. And he told me to. And I said, I'm not going to do it, Bernard. And I was just a director at that point.
I wasn't an AVP or anything in those days, so I'm talking to the CEO. I said, I'm not going to do it. And he looked at me and went, what? I said, you want me to take that game off those computers? Now I'm going to have to hire somebody to go around and train everybody to use a mouse. They're doing it for free. And he looked at me and went, oh. And that's part of it: we don't think through the unintended consequences of what we do, and that causes us great grief in corporate life. I love the Solitaire example, because I grew up at a time where I remember the Mosaic browser. I remember the beginnings of the Internet. I'm a Prodigy dial-up kid. And I do remember on my parents' PC playing hours of Solitaire. But I wouldn't do it for hours at a stretch; I used it as a mental break when I was studying in high school. Yeah. And I'm sure at companies, too, if you give employees a mental break and you say, we're not going to block things like Solitaire, remember Minesweeper, because, hey, we know you use them. We don't want you spending all day on them, but a lot of people use them as a mental break, right? And then they're more productive at work. And I also love that example because I worked for a large accounting firm, and they locked down everything. You could only go to CNN.com. Their view was, you can't be browsing. And then, I think while I worked there, you know what they did? They blocked CNN.com, too. And then, of course, they had their internal dashboard for doing research. So all the employees were in cubicles just doing tax research. That's what I was doing all day long. And you could see people getting sadder as the day went on, because they couldn't have that mental break of just going to ESPN.com and reading about their favorite team. So you don't want to lock people down. People, at the end of the day, have interests.
They want to take a mental break, and it's not a good thing just to block them from everything. So I love that Solitaire example. No, and it is true. It makes space. And I think that's something we need to think about in cybersecurity, especially when things get tough. I put this in my novel, but it was: when a problem can't be solved, walk away from it for a little while. And I think that's a tough thing to do, especially when you're under attack. Yes. But you do reach this level at which you need to give the brain a break. You need to break off and start to reformulate, or whatever it is that happens that frees us up for thinking. And that's hard to do when you're under pressure. So it's interesting from an AI perspective, too. We talk to prospective customers about this. The big thing they say is, we want to keep enabling employees to use this, but we're scared to do it. We're scared that they're going to make a mistake. They're going to do something like upload a file; they're going to copy something they shouldn't copy into Claude. And it's interesting, because you want them to be able to leverage those tools. When they leverage those tools for what I would call just repetitive, mundane tasks that used to take hours on end, their creative thinking is enabled. They have more time to think: wait a second, I actually came up with a better idea for our core product. I realize I know how to talk to this customer now and sell them on this thing we have. A different strategy for the company, marketing-wise. It's freeing up their creative time. Now, if they're using it nonstop and, like I said, not taking that AI-free day, maybe in some aspects you're automating yourself out, and you don't want to do that. But it's all about enabling these things in companies and doing it safely.
So now they can actually work on more important things like strategy, new product development, sales, marketing, things that actually help increase the bottom line. Yeah. But if I'm a CISO, I'm looking at the two of us and going, that's easy for you to say. I've got documents to protect. I've got things I'm accountable for. And boy, oh boy, I will tell you, if it breaks down, somebody's going to come to me and say, who brought in that AI? And it ain't going to be the CEO that takes the hit for it. It'll be me, the one who allowed it in. Yeah, and that's what we're seeing more than ever now. They were talking about this at RSA also, but we saw it beforehand: the implementation of these AI systems is falling, a lot of the time now, on the CISOs. Because they're saying, look, you have the technical knowledge and the security background to tell us the risk behind this. Legal and procurement, too, will come to them and say, should we buy this? Should we implement this? And it can get overwhelming, because they already have so much on their plates. So you're 100% right about that. Yeah, what would you say to them? I've tried to think through this. My take on it, when I dealt with security, was that it was never technical risk, it was business risk. I'd always push back to people and say, how much risk can you take? If we lost all these documents, what would that do? And I've always pushed that way; that's how I've stayed sane through this. But with AI, I don't know what the equivalent of that conversation is. So I think you have to do a risk analysis on it, but you also have to weigh, like you said, the business pros and cons. What are you gaining? I think it's just like everything else: a decision matrix. What are you gaining on the business side with regard to productivity, product ideas, product improvements, enhancements, revenue gains, versus what are the security risks behind this product?
What are you willing to trade off? What systems do we currently have in place from a security and risk perspective that we can apply to this before we need to find something more fine-tuned to what this AI product is doing? If we need to enable this now, what do we currently have in our stack that will help get us to that next stage, where we find a vendor that can actually provide the solution for it? So I think it's about doing that analysis, right? And not just throwing your hands up and going, no, I'm going to block everything. Because I think if you do the analysis, you'll realize that some of the tradeoffs are actually worth it. Yeah. You said that AI is bad at accounting, and it probably is at pure accounting. But I use it to do my expenses. Why? Because who cares? Honestly, if erasers get put not into office supplies but into software, or whatever, for $3, it's not material. I don't care. So I will get it to do all the grunt work there, and it fills in a spreadsheet for my accountant and we send it off. Yeah. So when I say it's bad at accounting, I want to clarify that: it still has some use for accounting. But the problem is, as my head of data science, my co-founder, likes to say, at the end of the day, these LLMs are statistical matching machines. That's why they're very good at coding: repetitious patterns, right? When they see something they've never seen before, something that's not in their training data, they don't know what to do. And with accounting, every company's books are different, right? They're all different. So if you do deploy an agent on your accounting system, it needs time to actually be trained on what your accounting system is. And then it will probably work OK, but it can still run into problems, because it also has to know every single rule of GAAP, right?
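The decision-matrix thinking Jim and Rob describe a little earlier, weighing business gains against security risks before enabling, piloting, or blocking an AI tool, could be sketched as a toy scoring function. All the factor names, weights, and thresholds here are hypothetical illustrations; a real program would plug in the organization's own risk framework.

```python
# A toy version of the risk/benefit analysis discussed above: score an
# AI tool's business gains against its security risks and map the net
# score to a rough decision. Categories and cutoffs are illustrative
# assumptions, not a real governance framework.

def assess_ai_tool(gains, risks):
    """gains/risks: dicts mapping factor -> score on a 0-5 scale.
    Returns a coarse decision based on normalized net benefit."""
    gain_score = sum(gains.values()) / (5 * len(gains))   # 0.0-1.0
    risk_score = sum(risks.values()) / (5 * len(risks))   # 0.0-1.0
    net = gain_score - risk_score
    if net > 0.2:
        return "enable with monitoring"
    if net > -0.2:
        return "run a limited pilot"
    return "hold until controls exist"

decision = assess_ai_tool(
    gains={"productivity": 5, "product_ideas": 4, "revenue": 3},
    risks={"data_leakage": 3, "vendor_maturity": 2, "compliance": 2},
)
print(decision)  # enable with monitoring
```

The design point matches the conversation: the output is never a blanket "block everything," because the analysis itself surfaces which tradeoffs are worth taking and which need controls first.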
It has to know all the rules of accounting. And then when you get to tax, it's a whole other world, right? But for things like expenses, of course, it's something it can easily understand, easily autofill. And the progress that these AI companies like OpenAI and Anthropic have made on the office functions is remarkable. Because if we were talking a year ago, the spreadsheet thing wasn't possible; it would make tons of mistakes. And now it's gotten a lot better. Oh, yeah. And it's really curious: you were saying you worked for a large accounting firm. I worked for a large accounting firm too; that's how I learned consulting. And the accountants deal with errors better than technology people do. In other words, somebody would say to me, is this material? I walked into my partner's office one time and said, this doesn't add up. And he said, by how much? And I said, it's about three bucks out. He said, look, go see a bookkeeper. I'm an accountant. And it was just that matter-of-fact: yeah, okay. They could accept imperfection if it was non-material and didn't reflect a larger pattern. And I had to respect that; it was a different way of looking at accounting. And I'm not dissing bookkeepers, because you do want a bookkeeper to be exact. But in technology and cybersecurity, we try to be so right about everything that we make bigger mistakes on material things. So you don't necessarily want to take the Enron and Arthur Andersen approach; that's a whole different approach. But you're right about this with accounting: you can drive yourself crazy trying to make everything perfect. And for large organizations and enterprises, let's be honest, there are so many transactions going on that it's almost impossible to do that. But you can come to a reasonable degree of certainty. And I agree, on the cyber side, there is still an importance.
It's not necessarily about being perfect, but being as secure as you possibly can and accepting that there are going to be problems. Because look, every day there's a new vulnerability discovered. That's why it can't be perfect, right? Almost every day we discover something new. So it's about staying on top of that and looking at it and saying, are we protected from this? Let's see our platforms: have they been updated? Are we protected from this? Because if not, I think you'll drive yourself crazy. I know a lot of accountants that drove themselves crazy because they couldn't reconcile the books. But it's okay; we'll figure it out, right? We'll figure this out, and we'll take the steps to remediate it and protect our company from any of these risks. Yeah. And I think the best guys I knew would go back to the process and say, is there a process in place to prevent this? They wouldn't be staring at the numbers. They'd be staring at the idea of how an error would happen. How would a material error happen? How do you prevent that? And I think that's a conversation they're used to having. A lot of it, too, and people forget this, comes back to security training and physical security training. We always like to talk about the digital, but the physical part is so important. And it's about getting your employees, your teammates, to understand: if something happens, if you lose your phone, you lose your token, you lose your laptop, do not be scared to reach out. You're not in trouble. You're actually following what we talked about. And there are a lot of people who are worried, I'm making a mistake that I'll get fired for. On the digital side: have you been compromised? Don't try to remediate it yourself. You're not the security team. Go to the security team and tell them immediately, because, Jim, you know this, every second matters, right?
And if you take that approach with your team, that it's A-OK, people make mistakes, people get compromised, then it becomes about the reaction speed. How fast can you tell me that this is a problem? It's not that enterprises don't do a good job out there, but I think it's really important for smaller companies and startups to take a more serious approach to their security and have this training and awareness. Don't rubber-stamp it; you don't want to do that. You want to have frank discussions with your team, as many times as possible, about emerging threats, about problems out there, and about how it's OK to reach out if you think something bad happened. Yeah, especially if you're wrong. Somebody said to me, how can you be so calm about this thing? And I'm going, what am I going to do, yell at the person? I said, they've learned; we've just had a great training session. They'll never make that mistake again. What else can you say? But if you do try to block people out and punish them, they'll just hide it. They'll hide it. They'll be scared. That's the worst thing you can do. Just sharing examples from out there in the cybersecurity community of what's going on, making people aware of this stuff, and all of a sudden they become more diligent in what they're doing. And that's like your first line of defense. Yeah. So just backing up to RSA: you've gone there, walked around, seen the world. What is your takeaway now that you're home and getting some sleep? What are you thinking about right now? No, it's actually a great question. So when you go to these conferences, it's crazy. I don't even know how many people were there, but it seemed like 50,000 to 80,000 people. Every hotel is booked. Every restaurant is booked. All the big companies were there. I saw CrowdStrike took over an entire block in San Francisco.
You go walk the expo floor, and it's extremely overwhelming. Everyone is competing for the nicest booth, the nicest giveaways. But my takeaway from it is this: I think this year was really focused. So many companies were saying, we are solving AI security, we're solving agentic security. It was definitely the theme of RSA from all the vendors; a lot of discussions were about that. I talked to a CISO there who said, I don't want to hear the word AI for a very long time after this conference, because it was just nonstop. Some companies had AI-free zones where they were not allowing anyone to talk about AI. So that was definitely the big theme. But also, what's great about RSA is you get to reconnect with people in person. There's so much now done remotely; you're on the Zoom call, and you don't actually get to meet up with people and connect with them on a personal basis. So that's number one. Number two, a lot of the side talks, not the main stage discussions, were amazing. I went to one with the head of security for Codex. That was a fascinating discussion. There was another one with the CEO of SailPoint that was more of a founder-based discussion; another amazing talk. The Purple Book community talk was phenomenal, absolutely phenomenal. So I always say to people, you've got to take time out to go seek these things out. It's not about going to the hotel suites, going to the parties, doing sales nonstop. No, go and actually listen to people building things, listen to their perspective on things, get as much intel as possible about the new trends in the industry. Because if you don't, you're going to miss out. And it was sad sometimes to see these amazing experts in what they're doing, whether they're CISOs, AI engineers, or AI builders, speaking to rooms that were half empty.
And I was like, do you know what you're missing out on? This is amazing. So I think that's my takeaway from it. I got a lot of exercise in by walking around all of San Francisco. That was another good part, because Ubers weren't available and there were traffic jams. But it's a really energetic environment. That's another thing I like about it: you get the energy from all the people being there and all the hustle and bustle. But I've got to say, at the end of the day, it is great to be home and see the family again after doing all that traveling. I bet. And after all that, you can still tell me about your product. There are only you and me and 14,000 people here. No, I'm just kidding. Just kidding. Oh, it's okay. Sorry about that, Dave. This is the best job anybody's ever done of building curiosity for something. What's he doing? We've got to find out. Which is good. Yeah, we're really proud. We're really proud of what we're doing here at Cifro and proud of the products that we've built, and really appreciative of the companies out there who decided to partner with us early on. And it's really great to see in the cyber community the willingness of some companies and CISOs to look at new things. They really seek out new products for new problems. I would think that many years ago, decades ago, it would have been very difficult, number one, to build something as rapidly as we have built it, given the new technologies. But also the overall attitude might have been, well, we're not going to bring you into a pilot program. I've got to say, there are a lot of amazing companies out there that have these innovation programs, these pilot programs. They're proactive in looking for new things. I just want to say thank you to those companies, because it means a lot to the startups that are looking to solve emerging problems and threats out there. Without that, it'd be very difficult to build something that enterprises can use, right?
You have to be able to partner with those companies to solve their problems. And without those programs, it's very difficult for companies to do that. Yeah. And what are you looking forward to in the next year? The next year could be the next eight weeks. I would say it's the most exciting time I've ever seen in tech in general, with the rapid pace at which you can build things, because of the new AI coding tools and the rapid innovations coming from the AI platforms. Some of it can be scary to certain companies. We talk about the SaaS apocalypse, which I'm not necessarily a believer in. But I think there's too much gloom and doom around AI replacing people. And I just want to say that me and my co-founders have a much more optimistic view of the future. I think AI is going to enable all of us to not need to work as hard, but actually be more innovative. We're going to be building new skill sets that are going to be awesome for the future. And it's going to enable us, like we said before, one of our things at Cifro, to spend more time with our friends and family. If you can be more efficient in your work and you can be more innovative and more creative, I think that's a great thing for everybody. So there's a lot of gloom and doom out there about AI replacing whole companies, AI replacing tons of workers. I don't think that's the way it's going to pan out. For large enterprises, you always have to have the next generation come in. The next generation, you need to train them about what your company does, why it's important, how your products work. If you're not willing to give young people that chance because you think you can be more efficient with AI, you're missing out on the next generation that's going to lead your company.
So I think it's really important that instead of saying, we're going to do less hiring, because of these efficiency gains we can lay all these people off, how about actually using these things to build new skills for people and build the next generation of people who are going to run your company? That's an important thing, and it's why I'm just way more optimistic about the future with AI and what it can do for all of us. Yeah. Yeah. And it's amazing, because I'm much more pessimistic than you are. By the way, I was informed today that this is the first year of World War III in the Star Trek universe, 2026. How about that? So I'm much more pessimistic than you are, but I come to the same space somehow. And that is, we are humans who have to live. And I go back to this thing of, even for CISOs, don't kill the excitement. When people are excited, let them have the excitement; then you can talk to them about the security of this stuff. You can reach people a lot better if their first move is excitement and yours is listening. Correct. And for me, as a founder, you have to be an eternal optimist. There are going to be difficult times, but there's always tomorrow, right? There's always a light at the end of the tunnel. And I think that me and my co-founders have always been optimists about the future, optimists about what we're building. You have to do that. If you don't, you're probably not in the right business. It's easy to say there's a lot of competition, there are a lot of threats out there. There's always going to be competition. There are always going to be threats. You have to trust in what you're building, believe in your product. And that comes across; your team sees that too. They see how positive you are, and it really makes a difference at the end of the day. Don't show frustration. Don't show anger. Show positivity and optimism, and good things will really happen. Yeah.
And I think there's a lesson in that for us in cybersecurity as well: it's not that it's not hard. If it was easy, everybody would do it. Your mother warned you there'd be days like this; she just didn't tell you how many there'd be. And I know, as somebody who started a business, getting up in the morning and facing the fact that you've got to go out and talk to customers, do all that stuff. And you have a half-empty room when it should be full. There are lots of disappointments in life, but finding your way through them is probably one of the things that you can give to other people. Correct. And that's the sense of: I'm engaged, I'm here. Yeah. It's Ben Horowitz's book, The Hard Thing About Hard Things. And even though it's from a different era of building tech companies, the lessons in that book still apply. For anyone interested in building their own company, I don't care if it's tech or a consumer product, definitely read that book, because it goes through the ups and downs, the rollercoaster moments that you have as an entrepreneur. And it's important to know you can't get off the ride, right? You're going to have good times and bad times, and you just get through the bad times. It's okay. And the last thing I want to get from you as we close this off is truth. Let's return to our starting point. Truth. What are the things that people should be holding on to, in your mind? Because it goes back to this whole idea: we're adrift in a world where truth is escaping us at times. What are the key things that you think people should be thinking about? Well, it's interesting, in your intro you were talking about all the deepfakes out there, all the problems with social media. And I think the most important thing when you're looking at truth is, like we said before, take a step back.
I don't mean question everything. If you start questioning everything, you're going to go crazy. Right. But take a step back. If you see something on social media that looks off, it probably is. Seek out reputable publications, reputable authors, reputable journalists who take pride in their work. They're reporting the truth as much as possible. And that's the thing: I think people give journalists too hard a time when they make mistakes, because they have to report on things as they happen. That's what corrections are for, what editor's notes are for, right? So seek out respected journalists who are actually trying to report the truth and the facts as much as possible, not trying to spin things in a certain way. I know it's more difficult to find than ever, but to get to the truth, you also have to do your own research. You have to question. I think now more than ever on social media, it's rapid fire, with things happening constantly, right? And a lot of rumor and innuendo spreads, and then people start believing it. But it's important to actually ask, okay, is that really coming from a reputable source? Where is that coming from? And then with the deepfake videos, it was good to see that OpenAI is not going to allow Sora to do those things anymore. It's kind of a dangerous thing where the AI technology has gotten so good that you can't tell the difference anymore. It's very difficult to tell the difference without that watermark. Very difficult. But what I noticed was my friends would share videos and they wouldn't even see the watermark. And I had to say to them, that's not real. You know that, right? And people are just in their phones. I just grabbed my phone. They're in their phones and they're doomscrolling. Take a break. Take a break from that, because the world's not that bad, right? The truth is out there.
Take a step back, do your own research, and question things, but don't go crazy thinking that everything is a conspiracy theory, right? Some things are, right? Some things are. A lot of stuff on social media is. But seek out reputable people who report on the truth as much as they possibly can. Cool. Rob, this has been fantastic. Thanks. We've had a wide-ranging conversation, but it's been great. And I hope people will draw the parallels to things that they can take back into the security environment. Absolutely. Because we are all people. And that's really, now I sound like We Are the World. I'm just singing a song here or something. But the technical part's the cost of admission. Being able to deal with people and the issues that come up from that, that's where you get the excellence. Yes. Yeah, you're right. And, Jim, I really appreciate it. Thank you. Thank you so much for having me. This was a great conversation. You're coming on here, this program, to announce your product. The first announcement happens here? Well, yeah, this is while we're still in stealth. When you come out of stealth, though, you're coming here, right? This is the first podcast we've done as a company, correct? Okay, yeah. Nice to meet you. And say hi to your partner for me. And enjoy being at home for a while. Thanks, Jim. I appreciate it. Take care. Bye. Bye. And that's our show. We'd like to thank Meter for their support in bringing you this podcast. Meter delivers full-stack networking infrastructure, wired, wireless, and cellular, to leading enterprises. Working with their partners, Meter designs, deploys, and manages everything required to get performant, reliable, and secure connectivity in a space. They design the hardware and the firmware, build the software, manage deployments, and run support. It's a single integrated solution that scales from branch offices, warehouses, and large campuses all the way to data centers.
Book a demo at meter.com slash CST. That's M-E-T-E-R dot com slash CST. I'm your host, Jim Love. Thanks for listening. Thank you.