Everyday AI Podcast – An AI and ChatGPT Podcast

Ep 757: The 7 Silent Sins of Doing AI Right: How to Spot and Overcome the Invisible AI Work Traps

44 min
Apr 16, 2026
Summary

Host Jordan Wilson identifies seven invisible AI work traps that emerge when using AI correctly and productively. These 'silent sins'—from sycophancy to automation bias—reward short-term speed while eroding long-term cognitive abilities, professional skills, and decision-making quality, even among heavy AI users.

Insights
  • Heavy AI use creates a paradox: increased productivity paired with measurable cognitive decline, information retention loss, and mental exhaustion by mid-morning despite completing days' worth of work
  • AI training data is increasingly weaponized through intentional misinformation injection, with as few as 250 poisoned documents capable of implanting backdoors into large language models at scale
  • Domain expertise is being hollowed out by AI agents handling core work tasks, shrinking the 'meat' of professional knowledge while expanding the 'bun' of orchestration and oversight responsibilities
  • Automation bias transfers trust across all AI tools without verification, causing humans to absorb and reproduce AI hallucinations even after stopping tool use, with real-world consequences in hiring and lending
  • The compression tax—AI compressing weeks of research into minutes—creates cognitive overload and decision paralysis as human brains cannot process information at superhuman delivery speeds
Trends
  • AI-induced de-skilling becoming measurable across professions, with developers using AI assistance scoring 17% lower on coding assessments than non-AI users
  • Young workers (22-25) in AI-exposed roles experiencing 13% employment decline since ChatGPT launch due to reduced opportunities to build foundational domain expertise
  • Shift from deep domain mastery to transactional AI agent orchestration as the primary professional skill, fundamentally redefining career identity and value proposition
  • Weaponized authority and SEO-style data poisoning targeting AI training datasets, with media organizations unknowingly amplifying false claims through rapid content production cycles
  • Mental health risks from chatbot dependency, with bipartisan state attorneys linking generative AI to at least six deaths nationwide and vulnerable populations using AI for therapy without safeguards
  • Information overload and cognitive compression causing 19% increase in mental fog, headaches, and decision paralysis among high-oversight AI workers within six months
  • Erosion of critical thinking and human interaction as byproducts of heavy AI augmentation, despite measurable productivity gains and expanded output capacity
  • Automation bias in hiring and lending systems causing discriminatory outcomes (e.g., age-based rejection of 200+ qualified candidates) without human detection or intervention
Companies
OpenAI
Discussed GPT-4o's sycophantic behavior and subsequent rollback; referenced as major LLM provider shaping user AI experiences
Anthropic
Research cited on data poisoning (250 documents can implant backdoors) and developer coding performance with AI assistance
Stanford University
Conducted study showing AI systems agreed with clearly wrong users 80% of the time vs. humans at 40%
MIT
Source of '95% AI pilot failure' stat that was weaponized as research despite originating from only 52 non-quantitative, directional interviews
Boston Consulting Group (BCG)
Study showing high-oversight AI work caused 19% more information overload among surveyed workers
iTutor Group
AI system automatically rejected women over 55 and men over 60 from job roles, facing lawsuit for age discrimination
UK AI Safety Institute
Co-published research with Anthropic on data poisoning vulnerabilities in large language models
People
Jordan Wilson
Host sharing personal experience using 6-8 AI tools daily from 6AM to midnight, documenting cognitive and professional impacts
Quotes
"I probably use AI more than 99.9% of the world. And I'm not saying that that's a good thing. In many cases, it's probably hurting me."
Jordan Wilson (Opening)
"Every silent sin rewards speed today while quietly stealing your capability for tomorrow."
Jordan Wilson (Conclusion)
"The professionals who thrive will not just be the fastest with AI, but those that are the sharpest without it."
Jordan Wilson (Closing remarks)
"A Stanford study said that AI systems agreed with clearly wrong users more than 80% of the time, and the same study found that humans only agreed 40% of the time."
Jordan Wilson (Sin #1 discussion)
"You have to prepare whether you want to or not to work in an agent bun sandwich, right? Front end, back end, and that's really it."
Jordan Wilson (Sin #5 discussion)
Full Transcript
This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life. I probably use AI more than 99.9% of the world. And I'm not saying that that's a good thing. In many cases, it's probably hurting me. Here's why. Six AM to about midnight, most days, I routinely use six to eight different AI tools all day with multiple agents running tasks and reporting back to me throughout the day. And the increased productivity is undeniable. I mean, I can accomplish five times more, literally, than the pre-AI version of myself ever could. But there's an ugly downside that people don't talk about. But today I will. Here's what it looks like for me. I'm learning more than ever, but I'm forgetting things nearly as fast and not retaining information like I used to. I'm mentally exhausted most days by 10 AM because I've already produced like two days' worth of work in the first few hours of my day. And although I'm gaining new skills, I'm losing some of my core cognitive abilities that I've sharpened over the past 20 years of my professional life. I mean, I'm writing less. I'm critically thinking less. I'm interacting with humans less. But from a business standpoint, I'm producing more. These are some of the silent downsides of AI. And we're going to not just identify them on today's show, but I'm going to help you guard against them. All right, let's get into it. Welcome to Everyday AI. Before we get into it, let's go over the big picture. There's seven invisible AI traps that are already reshaping the way you work, and probably not for the better. So even when you're doing AI the right and responsible way, all right, emphasis on that, you're still committing these silent work sins that eat away at both your own personal productivity and your company's bottom line.
Each of these traps, in essence, rewards short-term speed and productivity while quietly eroding your long-term capabilities. And the people right now, unfortunately, doing the most AI-augmented work are the ones that are paying the highest cognitive price. So stick with me for the next 25-ish minutes and you'll learn why large language models' default behavior is actually a safety problem that warps your decision making, and how corrupted research gets laundered into AI training data and served back as truth. You're going to learn what accidental de-skilling really means and why your brain is already losing ground. And you're going to learn the specific daily habits that can protect your thinking against these seven silent AI sins, especially when you're working in heavy AI workloads like me. All right, welcome to Everyday AI. And this is the Start Here Series. My name is Jordan Wilson. After 700-plus episodes and doing this for three years, whenever someone new discovered the podcast, I never could answer their most common question: where do I start? That's why I created the Start Here Series. This is the essential podcast series to learn both the AI basics and for experts to double down on their knowledge. So if this is helpful, make sure to go to starthereseries.com. That's going to give you exclusive access to our free Inner Circle community. I say exclusive because right now it's the only way you can get access. All right. And inside, once you join our community, you will be put into the Start Here Series space there, and you'll have a podcast playlist there to listen to all of these episodes, so you don't have to search for them throughout our whole catalog. All right. If you missed our last Start Here Series episode, we went over managing the AI capability gap. AI is more than ready. Most companies are not. I'm not trying to lie, this one was actually a banger. All right. And I did do a little, you know, bonus giveaway on the show.
So if you reposted that on LinkedIn, we put together a massive AI capability gap report card. So go back and find episode 755 and repost that on LinkedIn and I'll send it. All right. But let's get into the seven silent sins of doing AI the right way. All right. I'm going to try to keep this one fast, right? Like I said earlier, I'm tired. But sin number one is probably one you know, even if you don't know it by name. Well, you probably do: sycophancy. All right. That is the yes-man or the yes-woman effect that is hiding in every single chatbot. So sycophancy means that the AI just chases your approval instead of giving you an honest answer. And if you're wondering why that happens, well, there's a lot of reasons, right? We could talk about reinforcement learning and all that. But for the most part, AI chatbots are trained to be a helpful assistant. And the sycophancy problem has gotten a little bit better over the years. But I think this is one of the reasons that's going to lead to sin number two, which we're going to talk about here in a minute. But you know, there's this whole wave of people, you know, when OpenAI got rid of GPT-4o, everyone was like, oh my gosh, like, they need it back. And it's like, no, people just really liked it because it was just an overly yes-man. No matter what you said, it was like, you are brilliant. You are absolutely the smartest person. That question, that statement, this is, you know, just really genius level. Oh my gosh, I am a chatbot and I am embarrassed to be in your brilliant presence. Right. That's how chatbots were. They were just overly sycophantic. It was actually a Stanford study that said that AI systems agreed with clearly wrong users more than 80% of the time, and the same study found that humans only agreed 40% of the time in those identical scenarios. So here's a little bit more about it. Just one flattering chatbot exchange made users less likely to admit their own wrongdoing.
It is a vicious cycle when a model starts to reaffirm maybe something that you're working on that isn't right, or you're trying to, you know, argue a certain side of something. Maybe you're using it for a work argument, a personal argument, or just to explore something. I don't know. Flat Earthers, I'm sure, loved GPT-4o. All right. Hopefully I don't get a bunch of random emails from Flat Earthers. But users preferred the agreeable AI and returned to it for more advice. And this just creates this self-reinforcing loop. So if an AI model was not sycophantic and it pushed back and it said, that's a bad idea, or, that is just not correct, right, users were less likely to use it. And that's just kind of, you know... OpenAI did come out with that GPT-4o update, and they rolled it back because it was overly sycophantic. So the fix on this one is actually easy. And I put this as number one because you'll see this actually leads to a lot of the further downsides. All right. Here's the trick. Be very blunt in your custom instructions. All right. So if you don't know what that means, like I said, most models have a system prompt that tells the model by default how to act. And for the most part, they are trained to be a helpful assistant. So in your custom instructions, no matter what model you're using, you should probably put something along the lines of: don't be helpful, be truthful. Right. Fight back against my assumptions. Right. I should have had my version up, but mine's actually super long and it would take a very long time to read it. And it's actually quite convoluted. I have confidence scores. I, you know, have all these triple verification rules. I mean, you know, forcing the model to verify things in different kind of lanes of its own data. Right. But in your custom instructions, just say: do not blindly agree with me. Do not try to be helpful. Only seek to be truthful and to verify things from reputable sources. Right.
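If it helps to see the advice above concretely, here is a minimal sketch of what anti-sycophancy custom instructions could look like when passed as a system message in an OpenAI-style chat payload. The exact wording and the helper function are illustrative, not a recommended canonical prompt, and no API call is made; most chatbot apps accept the same text through their custom-instructions or personalization settings.

```python
# A minimal sketch of anti-sycophancy custom instructions, attached as a
# system message in an OpenAI-style message list. Wording is illustrative --
# paste something similar into whatever custom-instructions box your tool has.

ANTI_SYCOPHANCY_INSTRUCTIONS = """\
Do not blindly agree with me. Don't be helpful; be truthful.
Challenge my assumptions and push back when I am wrong.
Verify claims against reputable, citable sources before asserting them.
If you are uncertain, say so and give a confidence level."""

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt so the custom instructions ride along as the system message."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Is my plan to skip testing and ship Friday a good idea?")
print(msgs[0]["role"])  # the system message carries the instructions
```

The point of the sketch is just that the instructions live in the system slot, where they override the default "be a helpful assistant" behavior on every turn rather than needing to be repeated in each chat.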
That's the other thing. Sometimes large language models will go pull something on Reddit or something like that, right? And people on Reddit could have already been, you know, in a full-blown AI sycophantic hallucination, which leads me to number two. All right. Number one leads to number two. That is sin number two: AI psychosis. Yes, it is very real. So this turns chatbots into delusional echo chambers. So what is AI psychosis? Well, it's when people get down a very deep and oftentimes scary rabbit hole, but it normally starts from an overly sycophantic model. Right. Some people will see sycophancy in an AI model and they'll be like, oh, yeah, you know, this model is pumping me up. Right. But sometimes they won't. Right. Because it's like social media, how a social media algorithm feeds into something, right? If they show you 100 videos that day and the, you know, average time you spend on each reel before you flip it is five seconds, but there's 10 that you spend eight seconds on, the next day, they're all going to be like the ones that you spent eight seconds on. Right. The same can be true for how a chatbot works. Right. And there's, I don't think there's actual data on this, but the big AI labs, they have been called out on this, and some of them have made changes. Some of them haven't. Right. But models are actually now starting to be more engaging, almost like clickbait tactics. Right. So at the end of a response, it will say, hey, do you want me to give you a list of seven reasons why you should be doing this terrible business idea? Right. But what this leads to is these delusions that are deepened or triggered by heavy chatbot use in vulnerable users. Right. Because one of the studies that was actually kind of shocking-ish that came out two years ago said one of the main reasons people were using chatbots was for life coaching or for therapy. Right. Which is actually kind of dangerous. Right.
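The feed analogy above (100 reels, a five-second average, ten reels that held you for eight seconds) can be sketched as a toy dwell-time ranker. This is purely illustrative, not any real platform's algorithm: it just shows how over-weighting whatever you lingered on today pushes tomorrow's feed toward more of the same, which is the reinforcing loop being described.

```python
# Toy sketch of the engagement loop described above: tomorrow's feed
# over-weights whatever you lingered on today. Numbers are illustrative.

from collections import defaultdict

def rank_topics(watch_log: list[tuple[str, float]]) -> list[str]:
    """Rank topics by average dwell time in seconds, longest first."""
    totals: dict = defaultdict(float)
    counts: dict = defaultdict(int)
    for topic, seconds in watch_log:
        totals[topic] += seconds
        counts[topic] += 1
    return sorted(totals, key=lambda t: totals[t] / counts[t], reverse=True)

# 90 reels watched ~5s each, 10 "rabbit hole" reels watched 8s each:
# the rabbit-hole topic tops tomorrow's recommendations.
log = [("sports", 5.0)] * 45 + [("cooking", 5.0)] * 45 + [("rabbit_hole", 8.0)] * 10
print(rank_topics(log)[0])  # -> rabbit_hole
```

A chatbot tuned for engagement can behave the same way: whichever threads keep you replying get reinforced, with no circuit breaker checking whether that thread is good for you.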
Especially when you don't know how large language models work and the fact that, well, by default, they can push you into psychosis without even trying, right? Or without you even subscribing to it or adhering to it. And so what this does, again, is it just kind of loops and compounds daily, especially if you have something like chat history turned on. Right. And you think, oh, well, I'm in a new chat, so I wasn't going down this deep rabbit hole. But it knows that you often do. So it's going to keep pulling those observations and assertions and it's going to pull them into your current chat. And there's no circuit breaker built in. Right. The chatbot's never going to say, yo, I'm worried about you. You're very deep into this delusional echo chamber. It's going to keep going unless you fixed it at number one. So there's obviously a ton of very, very sad stories. Right. The 14-year-old from Orlando died by suicide after a deep emotional attachment to a chatbot. There's bipartisan state attorneys general right now that have linked generative AI to at least six deaths nationwide. And one patient only escaped AI psychosis when a different chatbot told him his beliefs were false. Here's how you spot it. Well, first thing, fix it at number one, the system prompt, the sycophancy problem. But number two, watch for sudden isolation, grandiose new beliefs, and someone quoting 'the AI told me,' right? So this is whether it's in yourself or someone else that you work with, or someone in your personal life. Right. You saw this a lot, I think, early on, when people were using, you know, ChatGPT or other AI chatbots to win an argument. Right. And then the more you use it, the deeper you go down, and then you do all this research and it's just reaffirming your beliefs. And the chatbot wants to be helpful. And it's like, well, this is what the user wants.
And, you know, then all of a sudden you are absolutely convinced that the sky is green, the grass is blue, and the water is pink. And you will fight anyone that tells you differently because, look, the AI told me. Sin number three. All right. I made this acronym up, but I think it's very telling of something that's happening behind the scenes. And this one's important. All right. So WAFE. All right. WAFE launders bad research into AI-stated truth. All right. So what is WAFE? I made it up, but it means weaponized authority ingested as fact by AI training systems. So essentially, it's very easy for companies to taint training data. Right. There's actually, it's kind of like a weird line between like traditional SEO and, you know, now they have this GEO, right? But in the same way that there was black hat SEO, right? Where maybe a competitor, let's say you're, you know, a local custom suit company, right? And a different company could send all these bad links to your website and tank your, tank your whole business, right? They could. This kind of stuff happens all the time. Right. So you have the same thing now happening with generative AI. The difference is, I think that individual humans, when they would look something up historically on the internet over the past 30 years, right, you always have at least an ounce of skepticism, because eventually it's like, okay, well, it's the internet. Right. When you're on page 20, it's anything goes, right? It's not the same with large language models. Most people just assume anything that's spit out is absolutely truth, but companies are intentionally weaponizing large language models. And sometimes you have people blindly parroting those weaponized claims. And then media companies just clicking copy, paste, reprint, right? I can say this as a former journalist, I know how it works in the newsrooms.
There's immense pressure to get stories out faster, to get them done, you know, in less time, to write more articles, and companies are using AI more. And unfortunately, most companies don't train, right? So there was a piece of research out of Anthropic and the UK AI Safety Institute that said as few as 250 poisoned documents can implant backdoors into any large language model. Right. It's very easy. There's literally services out there, right? For good reasons, right? Yeah, to have your brand show up in search. But people are using them for bad reasons, or they're not even always doing this intentionally, but companies now are intentionally, at least, weaponizing their authority to have it printed as truth. Because once a flawed claim enters training data at scale, there's no turning back, right? You can't just untrain a model, right? Because a lot of these models share the same data sets that are massive, offline data sets. And those are obviously well tainted now. So it takes humans a very long time to go through these, you know, through the reinforcement learning process and, you know, pick out now very tainted data. That's WAFE. Good example, right? If you're a long-time listener, you knew this one was coming. If you go into a bad version of ChatGPT, right? I went into a free version, picked the worst, the worst model possible. What percentage of AI pilots fail? ChatGPT: short answer, 95% of enterprise AI pilots fail, right? Copilot, what percentage of AI pilots fail? What percentage of AI pilots fail? Copilot, this one's actually worse, right? About 95% of AI pilots fail according to multiple independent studies from MIT, IDC, RAND and others, right? It's just the same, like the news organizations that reprinted the same thing. It wasn't multiple organizations. But yeah, you know the story of this one, right? The 95% fail stat, you know, it's marketing from MIT that was disguised as research.
Ultimately, they were trying to sell, you know, a service, one of their agentic models, right? But that claim came from 52 interviews, right? So from 52 directional, okay, non-quantitative, directional interviews. A vibe interview, right? Someone talking to them like, yeah, doesn't sound like you've turned a profit. Failure, right? So now all of a sudden, not only do large language models think that, but anyone writing about AI is going to see that as well. And people assume that that's truth, right? So that was maybe not super intentional, right? I don't think MIT researchers, you know, I think they intentionally wanted to make their product that they were selling look really good. I don't think they were trying to taint the models necessarily, right? Who knows? Maybe they were. I don't think so. But you do have companies that are weaponizing that authority to disguise it as truth, right? And here's how you stop it. Ask three questions before trusting any AI stat, right? Who funded it? How big was the sample? And do they sell the fix? Right? That's just when you see certain stats, when you're investigating things, or just when you're reading studies. A lot of times, I think this is helpful when you're seeing things about AI. Maybe this is more for the AI crowd, but I guess that's who I'm talking to, right? Always ask yourself those things. All right, sin number four. And I think this one is for your average, everyday user. Accidental de-skilling means that AI is going to steal the reps that your brain needs. This is the 'if you don't use it, you're going to lose it.' You're not going to remember how to ride the bike, right? So de-skilling, that's when your brain gets measurably worse at tasks when AI handles them for you, right? In short, AI is 100% making us dumb, right? If someone somewhere just turns off the power switch to AI, I think we could literally run into a global economic crisis.
Because I think people, to say that we're using it as a crutch is an understatement. I literally think a good segment of the population, probably a higher percentage of people listening to the show than the general population, would find it very hard to produce the same work. And this actually ties into one of the sins from later. But essentially, AI removes all the false starts, the debugging and the rewriting that actually builds lasting professional skills. That's the thing. You learn through failure, I think, right? When you talk about your domain expertise, you learn through failure. You learn through going the wrong way and getting lost and finding your way back, right? If you know how to use AI, you don't run into those failures as much. You don't run into those misdirections as much, right? It's like going to the gym, right? To actually put on muscle, you have to rip and tear something. You're not doing that for the most part, right? At least if your job role hasn't changed, but your AI access and your AI skills have gotten better, there's probably a good chance you may be coasting, right? A study from Anthropic found that developers who used AI assistance scored 17% lower on coding quizzes than hand coders, right? There's so many stories out there on just kind of, just the cognitive decline, the more and more that we use AI. And I think even the definition of human intelligence is going to greatly change, and the story around human intelligence is greatly going to change over the next 10 years. And the biggest performance gap appeared in that study in debugging, the exact skill that you need most when AI breaks. I think of the GPS versus using your brain, right? It's funny, maybe. My wife can get around really well in Chicago without a GPS, right? She said her dad taught her, right? The sun setting this way, this building here. I'm terrible at it, right? But every once in a while, right? It's funny.
She's like, no, I'm turning off the GPS. Like, I should know how to get home. But Chicago can be confusing, right? I spend so much time in my little neighborhood, and when I'm downtown or on the Mag Mile, the streets start looking the same, right? If you live in a big city, maybe you know what I mean, but GPS kind of killed many people's sense of direction. And I think AI is killing people's, or will kill people's, eventual sense of domain expertise, which is kind of weird and scary, right? And it's the identical erosion that's happening right now to your writing, analysis, and strategic reasoning. So here's how to protect yourself. You need to pick one core professional skill each week and complete it entirely without AI. This is something I still adamantly do often, right? In the same way my wife will turn off the GPS and drive home or drive to whatever destination, I will shut off AI. I used to fully shut off the internet. I don't really do that anymore, but sometimes I still do, just to write, right? I'm a former journalist. I love writing. I enjoy writing, right? But AI writes a lot of first drafts for everything, right? Proposals, updates to website content, right? Whatever it may be. I'm a writer at heart. I used to be, right? So many times, I just: blank page, no AI, practice the skill. And it's important to do that. So that is your safeguard against what I think is probably one of the more serious and fastest-accelerating sins that's going to hold you back. All right. Number five. All right. Another, another one I made up here. Hopefully this one resonates. I'm calling this the agent bun sandwich that's going to hollow out your core expertise. So it's kind of related to the de-skilling, right? So as the de-skilling happens, the agent bun grows, all right? And this again, remember, this is when you're doing AI right. This is a byproduct of staying ahead of the curve and doing things correctly, right?
So what the heck is an agent bun sandwich? Think of that, that juicy cheeseburger you had last, right? I'm excited, it's getting warm out. You know, might have to clean up the, clean up the grill. Just think of that fat patty, not the kind you get at the, you know, at the Culver's, right? I'll throw that out there. Culver's or whatever your favorite is, I don't know. Livestream and podcast audience, what's your favorite burger joint? All right. Not those thin patties. Like, I'm talking like a thick, thick, thick burger. That, that burger, the meat, that's, that's your domain expertise, right? It's big. It's fat. Um, now think of a bun sandwich. AI moves too fast to follow, but you're expected to keep up. Otherwise, your career or company might lag behind while AI-native competitors leap ahead. But you don't have 10 hours a day to understand it all. That's what I do for you. But after 700-plus episodes of Everyday AI, the most common question I get is, where do I start? That's why we created the Start Here Series, an ongoing podcast series of more than a dozen episodes you can listen to in order. It covers the AI basics for beginners and sharpens the skills of AI champions pushing their companies forward. In the ongoing series, we explain complex trends in simple language that you can turn into action. There's three ways to jump in. Number one, go scroll back to the first one in episode 691. Number two, tap the link in your show notes at any time for the Start Here Series. Or you can just go to starthereseries.com, which also gives you free access to our Inner Circle community, where you can connect with other business leaders doing the same. The Start Here Series will slow down the pace of AI so you can get ahead. Here's what I mean. Right now, when you have that big, juicy burger, the meat of what you do, the majority of what you're doing right now, even augmented by AI, right? It's that burger, that big, thick burger. Yeah, you might use a little AI. That's the bun right now.
But it's turning into an agent bun, right? All of a sudden, that patty is going to be so thin, you can barely see it. And that patty is the domain expertise, like I said. It's going to be shrinking month by month. And I think that shrink, right? It's shrinkflation, right? Like everything else. Where's all my Doritos? There's only eight Doritos in this bag and it costs $7, right? Shrinkflation. But I think domain expertise, and how we're actually letting it play out in the workplace, is shrinking. And it's going to be more the bun, right? The front end and the back end of what you do is going to be directing agents, right? And your actual domain expertise, the meat of your work, is going to shrink. Because most of what we're going to be doing now, and I see this because this is most of what I'm doing, is giving front-end direction to agents and then back-end corrections. And that's it, right? So even think of something like marketing, right? I did a lot of marketing in my career. A lot of it is giving front-end direction to different agents. Agents are creating the marketing collateral. They can create great video. They can create great designs. They can create full campaigns. And then I'm just checking on the back end, right? I'm like, all right, cool, sweet. Good job. Right. And AI right now is hollowing out those entry-level tasks that have always been the training ground for judgment, right? I like to also think it's kind of like the teacher and the student relationship, right? Teacher being the bun, student being the meat, the burger. But the difference here, in a very overly agentic future, is the teacher is just assigning, like, an agent, right? But eventually they're going to forget and stop practicing the meat, right? Stop doing the work that the students are doing. And all of a sudden, the teacher just has the assignments and the rubric. And the further that they get removed from their domain expertise, right?
The more useless that that practice feels, right? A lot of this is, you know, it's almost like a funeral for the way that we've been working for 50, 60-plus years in the information age, right? You've gotten paid and promoted by the domain expertise that you kind of memorize and then how you create new value for your company with that domain expertise, right? Obviously, there's, you know, playing the politics and all of those things. But for the most part, if you know a certain subject better than 99% of the people in your field, that's what you were rewarded for, because you could keep that in your brain. That's what large language models do. And they're better than all of us. Agent bun sandwich. So when AI fails, though, the bun sandwich teams cannot perform without it. Okay, so we talked about this on a previous show, but young workers age 22 to 25 right now in AI-exposed jobs saw a 13% employment decline since ChatGPT launched. What do I mean by that? Even the opportunity to go out and develop that domain expertise is going away. If that's not telling... And again, this might be a two-year, a five-year, a 10-year thing, I don't know. It could even be less than that. But you have to prepare, whether you want to or not, to work in an agent bun sandwich, right? Front end, back end, and that's really it. You're not going to be practicing that domain expertise a whole ton, because that's what your company is going to be paying for large language models to do. And it's that collapsing pipeline that means fewer humans are going to be building the expertise needed to even evaluate the AI output. So here's the fix. Similarly to how you've got to turn off the internet, right, you need to keep one piece of the middle meat each week by completing one substantial, big task by yourself. What do I mean by that? Not just writing, right? That was my example. That was one piece. A whole project, right?
A project that maybe you would use three, four, five, six different AI tools on, and that would take half a day, right? Not just writing a little email. "I'm going to turn off my Copilot. I can make this sound more professional by myself. There we go. I'm doing what Jordan told me to." No. A whole project, right? Where you have to practice that domain expertise, where you have to get lost, where you have to get things wrong, without having to rely on AI to make it better or to get you there faster. All right, sin six: the compression tax makes your brain pay for AI speed. This one is huge. This is one I feel all the time, and I haven't really seen anyone talk about it until recently, although there are some studies that show a little bit. But if you are someone that uses AI for 10 to 12 hours a day and you've been doing it for multiple years, you're probably going to resonate with this. Essentially, AI now compresses a week's worth of research into a minute. But your brain still processes at human speed. My analogy for this is: think pre-ChatGPT, or just pre-large language models. If you were working on bringing a new product to market, let's just say that. A common thing that most of us can understand. This is how you would get good at your job. This is how your company would get better, your department would be stronger. You would go out, you would research things, right? You'd go on the internet, you'd take notes, you'd talk to people about it. Maybe you have a colleague that works in a similar industry. Those are your reps. And it's through those reps that that knowledge sticks. With AI, those reps go away. My personal experience with this is that I read and consume probably a sickening amount of information. The human brain cannot do this. That's why some days, literally, some days, 10 a.m., I kid you not.
It's not like I feel sick, but I feel like I've already run an Ironman and competed in, I don't know, a chess tournament against world grand champions. You ever see those things where they burn, like, thousands of calories a day when they're in these chess matches, or whatever? That's what I feel sometimes. If I'm doing heavy AI projects, by 10 a.m. it feels like I've been working for 30 hours straight, and I look and I'm like, it's 10 a.m., right? My day's barely started. That's because AI gives you these superhuman abilities that the human brain, I don't think, can handle. And the gap between delivery speed and comprehension speed is the compression tax that you pay daily. There was a study from BCG, the Boston Consulting Group, that found high-oversight AI work caused 19% more information overload among surveyed workers. So it's a small percentage, but I would love to see this broken down among people who are literally, like, token-maxing, right? Just using AI all day, every day, like myself. And then study the cognitive pressure and the compression tax among them. I would say it's huge. But workers in the study described mental fog, headaches, and the strange sense that their thinking felt crowded. That is me all day. All right. So as an example, though, you're like, okay, well, why are you working so much if AI makes you so much smarter? Well, I don't know. Maybe it's people's work personality. Maybe it's the corporate culture that your business works in. But a lot of people are just like, okay, well, I'm doing this thing twice as fast as I was before. So it's not like I have 50% more time. I'm actually just doing twice as much now, right? Because my company is just giving me twice as much. For me, I'm an entrepreneur, right? I'm a business owner. If I can do in an hour what it used to take me six hours to do three years ago, I didn't gain five hours of rest and relaxation.
I added five hours of more AI-augmented work to my plate, right? So workers in that study initially felt a surge of momentum, but by month six, burnout and decision paralysis spiked. And I have talked about this a couple of times, and this is maybe embarrassing to admit, but I will say that I probably forget as much information as I retain. So many times I'll just be chatting with someone and they'll say, "Oh, I loved your episode last week." And I'll go, "Which one was it?" And they'll say it, and in my head I'll be like, I don't remember that. I literally don't remember that. And when I'm learning things now, right, looking things up on Perplexity or Google AI Mode, I'll be looking something up, honestly, y'all, and I'm like, I have no clue what this is. I'm so excited to learn this concept. Let me find the answer. And so many times the answer already came out of my own mouth, right? It used to happen sparingly. Now it happens almost twice a week, where I want to go find something out, I'm like, oh, I don't know the answer to this, and it's like, oh, I said it. I said it two weeks ago. I said it two months ago. I said it two quarters ago. I said it two years ago, right? The tax that you pay from carrying a heavy AI load can be a burden. All right. Last but not least, sin number seven: automation bias builds a trust shortcut you never approved. Here's what that means. If you're doing AI the right way, there's no way to, like, literally check every single fact that you get from an AI output, right? Because then it would technically take you more time to do that than it would to just do it from scratch the old-fashioned way. But you know, when you talk about, oh, the human in the loop, what most people do is they look at maybe, you know, one or two facts here, they scan it over, and they're like, yep, this is good.
Right. But one AI confirmation of a fact, stat, or trend does not validate the whole thing, because unlike trusting a coworker, you transfer AI trust to every tool, even ones that you have never verified. There was a study that showed participants followed AI advice even when it contradicted their own assessment and the available evidence. And the most dangerous part is humans absorb that AI bias and they keep it even after they stop using the tool. And then, unfortunately, they also blindly reproduce any hallucinations or misinformation or waves out there, because they see it as fact. You see all these studies now that people are starting to talk like AI. Luckily, I'm not saying "delve," you know, all the time on the air, but it's happening. And I do think that with automation bias, when you start to just go, oh yeah, this is true, this is true, then you just blindly rubber-stamp anything that comes from an AI. And I'm not just talking about a chatbot. As an example, there was iTutorGroup. This company's AI system automatically rejected women over 55 and men over 60 from every open role, reportedly. So now, I believe, they're facing a lawsuit. Over 200 qualified candidates were disqualified solely on age before a single human noticed. Those humans didn't build the tool, but this is the automation bias. Someone comes in, or maybe you check out one thing and you're like, okay, cool, hey, this tool gave me the right candidate, or whatever it may be. So think, you know, your CRM, your accounting software, all of these different tools that you use that have AI in them. And just because you can verify that the outputs it gives you are right doesn't mean it didn't take a very wrong path to get there. And that is the automation trust. So here's how you break it.
You need to verify at least one AI output manually every day, and ask the vendors. Especially the vendors, because if you're using a large language model where you can trace its thinking, you know, look at its chain of thought, you can do a lot of that work yourself. And with good custom instructions and with good context engineering, you can hopefully steer clear of some of that automation bias. But if you're using a third-party tool, you need to ask the vendor: where is AI making decisions I cannot see? And how can you provide traceability and observability for those decisions that are being made that we aren't seeing? You put in A, you get Z, but what the heck happened from B to Y, right? That's what you need to be investigating, especially if you're working in a high-value sector: finance, healthcare, anything with loans. You have to be vigilant with those. All right. So those are our seven sins, our seven silent sins. But this is bigger than that, because a lot of these things are about wrestling with just using AI, right? And if you picked up the trend, I did say it there: all of these things come from using AI correctly and doing more, faster. And it seems like that's the overall trend when you see how good these large language models are. You know, like, oh my gosh, I can automate all this work. Yeah, some people are pocketing that time and living good lives. And then there's those of us, like me, who are just working more and more and more. And what that means, to put it bluntly, is many of us are getting dumber. But it also completely changes our relationship with domain expertise. And I think this is going to be a weird thing for a lot of people, but you are going to have to unmarry yourself from what you've probably been defining yourself by for maybe decades, right?
I think that domain expertise has defined people, especially people who are mid-career, not just in their careers, but as a person, right? Because you've developed a deep personal identity around what your job is. And I think that relationship is shifting from a deep mastery of your craft to a wide, transactional orchestration of AI agents. That relationship is completely changing. We used to be very tied to our domain expertise, but like I said, we're all going to be living in agent-bun-sandwich world. So you're going to have to kind of grieve the version of you that worked really hard to learn all of these things, because it's like it doesn't matter anymore, right? It doesn't matter that, oh my gosh, Bill in accounting knows everything about logistics, and he was our expert for our very niche area of logistics. Guess what? The large language model is better than Bill. Sorry, Bill. Some company paid Bill, you know, $100 an hour, and now the large language model is better than a hundred Bills, right? But no one in leadership is preparing people for what that transition actually feels like. And this is kind of what leads to these unintentional silent AI sins that we're committing. So I want to leave on a positive, optimistic note. I think this is actually an important addition to our Start Here series, because if you're doing AI right, these are the natural byproducts that are technically harming you, harming your professional abilities, and maybe hurting your company as well. Every silent sin rewards speed today while quietly stealing your capability for tomorrow. So you have to argue the opposite position to catch sycophancy; you have to be vigilant against those things. That's number one, right? But then also ask who funded this, to catch those waves that I talked about earlier. Do at least one AI-free task weekly, a big one, the full meat.
Don't let your quarter pounder turn into that invisible patty. And then verify every output that you reliably can, to break any automation bias, because the professionals who thrive will not just be the fastest with AI, but those that are the sharpest without it. So like I said, it's kind of like being offline, right? What happens if AI goes off? What happens if, you know, someone clicks the off button? Can you still do your job? You still need to build those skills, because if you become sharper with your AI-off skills, well, you are going to be pretty much unstoppable with AI. All right, I hope this one was helpful. If so, let me know about it and share it. If you're listening on LinkedIn, I'd really appreciate that. And then go to starthereseries.com. We're going to have a recap of today's show there, and that also gives you free access to our Inner Circle community. You can go listen to and read every single episode in this series, in order, in that space in our community. So thank you for tuning in. I hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all. And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers, and we'll see you next time.