It's Been a Minute

You might be suffering from AI brain fry

22 min
Apr 13, 2026
Summary

This episode explores 'AI brain fry,' a newly researched phenomenon where workers experience mental fatigue from managing multiple AI tools simultaneously. The discussion examines how AI adoption is reshaping workplace dynamics, blurring work-life boundaries, and creating new forms of stress rather than reducing workload as intended.

Insights
  • AI brain fry occurs primarily in environments with poorly defined roles, vague AI policies, and manager expectations to 'figure it out,' leading workers to overuse tools and create unnecessary work
  • Using AI for repetitive tasks reduces burnout, but deploying it across complex, varied tasks amplifies cognitive load and creates a simulation of management without actual promotion or compensation
  • Workers are experiencing a boundary dissolution similar to the Slack/chat adoption era, where AI tools create ambient urgency and expectation of constant availability outside traditional work hours
  • The phenomenon reflects broader workplace anxiety driven by narrative pressure—fear of being left behind, LinkedIn performativity, and company-wide cost-cutting initiatives justified by AI capabilities
  • AI adoption is likely to correlate with increased employee surveillance and quantification of work metrics, particularly in white-collar roles previously insulated from such monitoring
Trends
  • AI-driven workplace surveillance and performance tracking becoming normalized in knowledge work sectors
  • Blurring of work-life boundaries through always-on AI assistant availability via chat platforms
  • Managerial uncertainty and 'scattershot AI strategy' creating downstream anxiety and overwork among individual contributors
  • De-skilling of entry-level roles through AI automation, threatening traditional career pipelines and junior employee training
  • Divergence between macroeconomic productivity gains and individual worker experience of job intensification and stress
  • Anthropomorphization of AI tools creating false equivalence between AI delegation and human management, obscuring actual job changes
  • Cost-cutting justification through AI adoption in stagnant or struggling companies, contrasting with growth-driven adoption in thriving firms
  • Emergence of monitoring-focused work roles as AI handles execution, shifting accountability burden to workers without corresponding authority
Companies
Boston Consulting Group
Co-authored Harvard Business Review research that coined the term 'AI brain fry' and surveyed 1,500 workers
University of California Riverside
Co-authored Harvard Business Review research on AI brain fry phenomenon with Boston Consulting Group
Harvard Business Review
Published the foundational research article on AI brain fry by BCG and UC Riverside researchers
Block
Cited as example of company cutting staff while publicly attributing productivity gains to AI capabilities
New York Times
Referenced as large news organization that transitioned from email to Slack workplace communication
People
John Herrman
Guest expert discussing AI brain fry research, workplace automation, and implications for knowledge workers
Brittany Luse
Host of It's Been a Minute conducting interview on AI workplace impacts
Quotes
"mental fatigue that results from excessive use of, interaction with, and/or oversight of AI tools beyond one's cognitive capacity"
Boston Consulting Group/University of California Riverside researchers (early in episode)
"ultimately all work boils down to a single question: Did I do this well or did I eff it up? And what AI assistants do is massively inflate the size of the 'this' in question, with a massive increase in the surface area of things one is responsible for having possibly effed up"
John Herrman's startup friend (mid-episode)
"You have the stress of delegation, of assigning tasks, of dividing these things up. You have that cognitive load. You may have more output, but it's yours. You also have potentially a bunch more liability, possible mistakes or misjudgments, that all accrues back to you."
John Herrman (mid-episode)
"Everyone's going a little nuts and no one's going more nuts than managers. And that, I think, is trickling down into like kind of a frantic, intense and kind of scattershot AI strategy across a lot of organizations."
John Herrman (late episode)
"if you find yourself dramatically increasing in some metric the amount of work you do with AI tools, one thing you can expect is that your job is probably going to change"
John Herrman (late episode)
Full Transcript
This is Ira Glass. On This American Life, we tell stories about when things change. Like for this guy, David, his entire life took a sharp, unexpected and very unpleasant turn. And it did take me a while to realize it's basically because the monkey pressed the button. That's right, because the monkey pressed the button. Surprising stories every week, wherever you get your podcasts. John, when was the last time you felt like your brain got fried? I'd say most mornings when we're trying to get our two children to school, I get like a full fry, like a pan fry on both sides. I could see that. I could see that. Personally, I feel like my brain gets fried constantly. And to be honest, I am doing it to myself by looking at Twitter. Sick stuff for real. I'm on there all the time. But there is some new research showing that some people are taking more psychic damage at work, thanks to AI. In an article published in the Harvard Business Review by researchers at Boston Consulting Group and the University of California Riverside, these researchers coined the term AI brain fry to describe, quote, mental fatigue that results from excessive use of, interaction with, and/or oversight of AI tools beyond one's cognitive capacity. In other words, doing too much with AI. There's something kind of comically tragic about the idea that these tools that were meant to lighten our loads seem to be doing the opposite for some. But beyond the psychic damage, there's a lot in this brain fry idea that points to how we work with AI. Like, with all the managing it needs, is AI turning us all into bosses? And is this really the future of work? To get into all this, I'm joined by John Herrman, tech columnist for New York Magazine. Welcome, John. Thanks for having me. Hello, hello. I'm Brittany Luse and you're listening to It's Been a Minute from NPR, a show about what's going on in culture and why it doesn't happen by accident. How would you describe AI brain fry? 
Yeah, I mean, the researchers, they describe this as basically hopping around between different tools and feeling overwhelmed, not by just having to multitask, which is already a problem in a lot of jobs, but by dealing with a whole bunch of output. So if you have a programming tool that can kind of run in the background and starts adding features to software really quickly, you have another tool that's constructing a report for you and searching the web and pulling together a market research document. You have another tool in the background that you're in a constant chat with trying to refine some idea for a talk you have to give. You're just kind of getting first pulled in all these different directions and then you're kind of spamming yourself. Like, you're just producing all of this product, and it's harder as you use more and more tools to keep track of whether this output is actually relevant to your job, whether you're doing anything that you need to be doing or whether you're kind of creating new work for yourself. And so the researchers described, in this survey of nearly 1,500 different people in different professions, this sensation of feeling, as they say, fried, or having like a brain fog, feeling kind of mentally paralyzed by the amount of stuff that you have to keep track of and kind of check and monitor. I mean, it's interesting, what you're describing. It reminds me of when I used to work at a corporate job, and I worked mostly in corporate environments for a lot of my twenties. Like, you're generating all this work content and you're sending it to your colleagues and you're not really sure what the point of it is, and you're constantly toggling back and forth between different things and checking the status. It reminds me of all the things that I really hated about that kind of job. I can't imagine adding a bunch of different kinds of bots to it that I had to kind of check in with. 
Yeah, because I already found that to be so implicit as it was because I was just like, what am I doing? Is anybody seeing this? What's the point? Why is this happening? Yeah, that's one thing that the researchers kind of get at where there are people who don't really experience this. Like this isn't something that everyone experiences when they get like a new, you know, AI tool at work. But people who do are first people who are using like a lot of tools at once. So this does really map onto like the experiences of a lot of programmers right now. Also strangely enough, marketing professionals reported this a lot. But they also found that it was much more prevalent in workplaces where like the roles were kind of poorly defined, where the like AI policies were vague and where managers were like, you know, you guys figure it out. And also we sort of vaguely expect you to be more productive. And that leads people to just like overdo it and say like, all right, well, I have this little thing that I can ask to do stuff. And I'm just going to ask it to do a whole bunch of stuff. And it's going to do a whole bunch of stuff and produce a whole bunch of like matter that looks like my work. And then I have to sort of check it. I have to figure out what to do with it. I have to pull it together, not just into like the actual work that I need to get done, but into my performance of being an employee, which is already like a source of stress in a lot of offices where most of your work is done on a computer. It's like, how do I make it known that I'm doing the job in the way that my managers want me to? Like this isn't just pure output in most cases, there's like a social element, there's a performance of hard work, there's all this funny stuff that gets amplified and sort of made more severe when suddenly everyone in the leadership of your company is like, oh, well, we should expect our employees to do 10% more work. Yeah. Yeah. 
I mean, I also want to note here, though, that using some AI, mostly for repetitive tasks, can reduce feelings of burnout, according to the same research that we're discussing. But the problem is mostly when you use it, like you mentioned, to handle a lot of different, more complicated tasks. In your piece, you said a friend of yours who works at a startup and manages AI in this way describes it like this, quote, ultimately all work boils down to a single question: Did I do this well or did I eff it up? And what AI assistants do is massively inflate the size of the 'this' in question, with a massive increase in the surface area of things one is responsible for having possibly effed up. Like, to your point, if you're at a company or in an environment where they're like, oh, great, AI tools mean that our employees can do 4,000% more stuff, perhaps that has led a lot of managers or companies to, yeah, expect a wider range of this, like a broader, a bigger pot of this, whatever this is. There's just more output that's expected. Yeah. And with apologies to this friend, like, this is someone who is pretty intense about work, dives all the way into new stuff, is working at an AI startup. He's writing software with AI to build AI tools. This is the maximum possible exposure to this, and it does expose all these funny incentives that are not so obvious until you make your whole work situation absurd. And it's like, oh, well, I'm generating five times more code than I was before. Is your software better? Are you contributing to the project of your company in any way related to that number? Those become like much more urgent questions when you can just sort of become so prolific. If you worked in a place where your job involved making presentations, sending emails, compiling reports, like a sort of very generic, you know, office job at a generic sort of white collar workplace, like a life insurance company or something. 
What does it mean that you can do all of that faster and more? Does it mean that you're freeing yourself up for quote unquote, like, more valuable tasks or more fulfilling tasks? Or does it just mean that you're going to be expected to do the work of two people? Does it sort of mean that the scope of your job is bigger? Does it mean that the scope of your job is smaller? Like, these are all unresolved questions. And what's happening, and I think something that's sort of getting to people a little bit as they try to keep up or not get left behind or whatever, is that their managers aren't sure. They're not sure, their companies aren't sure. They're just diving into new tools that don't really have useful norms around them, that don't have an abundance of case studies and best practices and stuff like that. It's really kind of chaotic. And I think that's just causing, beyond the scope of this paper, a sense of real fatigue and anxiety in the workplace. We are going to take a quick break. But first, if any of you are finding It's Been a Minute for the first time, welcome. I hope you are enjoying the show and that you come back every Monday, Wednesday and Friday morning for brand new episodes and every Tuesday, a video episode. Tomorrow's video episode is a pop culture syllabus. All the things you need to know to truly understand our current moment. You can find that video on Spotify or YouTube or just listen to the audio wherever you get your podcasts. Coming up after the break, everyone's going a little nuts and no one's going more nuts than managers. And that I think is trickling down into like kind of a frantic, intense and kind of scattershot AI strategy. Stick around. I want to talk about something that I was really interested in, like specifically interested in, with your analysis. It's the realization that delegating to AI is kind of simulating management. 
Like, AI is a poorly trained intern that you have to check the work of all the time. I wonder, is AI turning workers into bosses, or at least simulated bosses? And if that's the case, what makes it different from managing humans? One of the weird things about AI as we've been talking about it for the last few years is that it's constantly sort of made into a character. It's anthropomorphized or personified or however you want to describe it. But what that does is it also screws up conversations like this a little bit, or at least makes them harder to untangle. So the shortcut to understanding this is like, yeah, managing a bunch of quote unquote agents, you know, kind of a humanized term, is kind of like managing people, in that you're delegating a bunch of tasks in these different silos and then you're kind of checking the output. Like, that does sound a little bit like management, but it is more of a simulation of management. And then the longer you do it, the more you realize it's definitely something else. You have the stress of delegation, of assigning tasks, of dividing these things up. You have that cognitive load. You may have more output, but it's yours. You also have potentially a bunch more liability, possible mistakes or misjudgments, that all accrues back to you. And I think if, you know, you yourself have a manager, the way they see what you're doing is not that you've become a manager; they see that you should be able to produce more. You're not actually getting promoted, I guess is what I'm saying. And it's hard to tell if the early sensation of feeling like a manager, which kind of feels like upskilling, which is like, that's what you want, you know, I'm getting new skills, I'm getting new responsibilities, I'm sort of moving up in the economy as these changes happen. 
It's hard to tell if that's what's happening, or if in fact, you're being downscaled or de-skilled and your job is being sort of made simpler while it feels like you have all these amazing tools at your disposal. A lot of that is just automation, you're kind of just monitoring stuff. Now you have a smaller job, it's a monitoring job, it's stressful, you need to be vigilant all the time. You don't get credit for the work, you get credit for the mistakes. Like it's a weird new thing. The joke about AI is that, you know, everything is AI until it works, and then it just becomes software. It's just something you take for granted. Like there are all these different things that we use in our computers that at an earlier time in history, down to the most basic stuff, like the way they can do math or something, is like there was a time within living memory that the way that they did very basic stuff was like magic, and now it's not. And we're in this like process now where a lot of stuff is getting there. There are a lot of things that are kind of amazing to watch, like one of the newer models work on. But I think that in the context of a workplace, people in charge are quicker to take for granted that the stuff is done and possible and always works. And the people who are stuck kind of like managing it and doing it and trying to make it happen are the ones who are going to have to eat the gap. So much of this conversation feels like going into like the basement of sort of how psychologically work actually affects us and like pointing a flashlight down there and seeing what scatters. But I don't know, there's also like the chat of all of this, like chat means Slack and Teams and other apps that are instant messaging for work, like as opposed to slower forms of communication, like email. And now workers are speaking to AI and managing AI through similar chat mechanisms. What is the chat of all of this mean for workers? 
Yeah, so I'm reading this paper and it's interesting and intuitive and it all makes sense. Like, you have all these new windows, you're switching between apps, all these things are moving down your screen really quickly. There's all this output. It's like, of course you feel kind of fried. But it made me think of previous reporting I had done on how many offices switched from workflows that were built around meetings, emails, phone calls to chat, work apps like Slack or Microsoft Teams or whatever other workplace suite you're sort of stuck with in your computer job. And I worked at little media companies that were sort of run out of chat rooms early in my career. Later, I worked in big sort of news organizations as they adopted these chat tools. And the whole idea was like, okay, you're always on, people always respond, it's quicker, it's more efficient. Obviously, it's better than email because it's just real time. It's like sending text messages. But what happens, if you aren't deliberate about this, is that it completely blows up your workplace norms around when you're working, when you're not working. Yeah, that line gets so blurry. Yeah, and people lose decades of hard earned understanding and intuitions about what's appropriate, how you talk to someone based on the power dynamic at work. But you have all these norms. It's all sort of a little bit worked out. And of course, when email showed up, everyone had these same conversations. Slack shows up, chat shows up, and it's just like, oh, everything feels urgent. Everything feels immediate. And we have to renegotiate all these relationships again. And people who are not used to that... I remember when I worked at the New York Times, they were an email workplace, they transitioned to Slack, and I was used to it. But I remember a lot of coworkers complaining, like, the kids who were used to this are annoying. They're chatting too much. They're too visible. They're performing their work in these group chat rooms. 
And like, what is this? Now that's just kind of part of the deal. 10 years later, all these offices have sort of moved over to this real time communication. It just sort of created this ambient sense of like, urgency, it's just another stressful thing about your job that you might get a message at any time. And you then have to like think about how you respond to it. It's just another boundary that sort of dissolved. And so one way to think about this is like, if you have all these new tools at work, and you're like, in conversation with software about all the stuff you're working on, that is now also talking to you potentially at weird hours. And it sounds absurd, like you can ignore it, whatever, it's just another app. But that's not really what I think people are experiencing. They're having new challenges to their workplace boundaries. It's like, well, I'm done with work. But also I could just, you know, send a quick prompt that might get something started for when I come back in the morning. Maybe I'll leave some stuff running overnight. And it's just sort of like another boundary that I would say most large companies aren't very good at helping their employees kind of enforce. It's not really in their interest, or at least they don't see it that way. And something that is sort of like left to individuals to navigate on their own. I don't know. I mean, it just makes me wonder, like, is this just white collar automation? Like, do we need to just get on board with this? Because it's the way of the future, kind of like robots were for assembly lines? Or, I don't know, is this something different? I don't think the answer is just like getting on board, no matter what. But there's certainly a lot of like pressure in that direction. 
I think the thing that stands out to me now, and that doesn't come up in this particular research, but does sort of come through a little bit, is that in a lot of jobs, the arrival of software automation corresponds with more surveillance and higher expectations, and more tracking and quantification and stuff like that. And so I do think if you find yourself dramatically increasing, in some metric, the amount of work you do with AI tools, one thing you can expect is that your job is probably going to change. With such a big jump, with so many new tools, a lot of companies are going to be emboldened to kind of track their employees in ways that white-collar workers in many workplaces are probably not used to. Yeah, they're not used to it. They will be sort of startled by it. And I think in some cases will find themselves kind of understanding what it's like to work in a job where surveillance is taken as a given. A lot of the work that people seem to want AI to do, whether that is repetitive tasks or whatever sort of AI chatbot stuff they would like to work with... I don't know. A lot of the work that they expect these AI tools to do is the kind of work that usually is, or used to be, done by entry-level employees, which makes me wonder, if that's the case, how are tomorrow's higher-ups going to get trained up? Yeah, I mean, I feel like there are two ways that you have to think about this that kind of bring you to different places. One is this high-level big economic view where you're sort of asking, okay, if we have these new tools that can automate a bunch of knowledge work tasks, does that mean we have in the long term more jobs? And I think you can make a more comforting case at the macroeconomic level that this is just another transformation, that the economy will, through turbulent times, kind of figure this out, that the economy may grow, that more productivity usually means more jobs and all that sort of stuff. 
At a personal level, obviously, there's going to be like major interruptions. People will lose jobs. People have lost jobs with AI cited as the reason. And if you think about the company you work at or maybe the company that you manage, it might be a little more zero sum than these like economists talk about, right? Like, okay, if you work at some wonderful productive fast-growing company where everyone's working really hard and contributing their individual skills and being compensated well for it and there's tons of demand for your product, well, you can probably make more of your product. Your company can sort of regroup around these new capabilities and just keep growing and there's more growth and everything's great. But lots of companies are kind of old, kind of stagnant. Maybe they haven't been doing well. They just want to cut costs. So there are going to be a lot of people who are eager to cut costs and to justify that with AI, to try to make people who are left fill those gaps with AI. And I think that that's one of the things that pushes people to the point of this brain fry phenomenon that we're talking about. The big part of it, a bigger part than a lot of people in the corporate world are willing to admit, is kind of narrative. Like, this is a story that everyone feels like they're a part of. They don't want to be left behind. They're watching companies like Block cut a bunch of people and say, oh, we're going to do this all with AI. They're probably even at this point reading friends of friends on LinkedIn talking about not getting stuck in the permanent underclass. Like, everyone's going a little nuts and no one's going more nuts than managers. And that, I think, is trickling down into like kind of a frantic, intense and kind of scattershot AI strategy across a lot of organizations. And the people who have to sort of deal with that are the most anxious of all in a different way. They're the people who are worried about their jobs. 
They're worried about making sure that they seem to be on board with this stuff. But they're also worried about this stuff sort of screwing up their livelihood. That's certainly a recipe for, let's say, mental distress at the workplace. Wow, John, I learned so much here. Thank you so much. Yeah, thanks for having me. That was John Herrman, tech columnist for New York Magazine. This episode of It's Been a Minute was produced by Liam McBain. This episode was edited by Neena Pathak. Our supervising producer is Barton Girdwood. Our VP of programming is Yolanda Sangweni. All right, that's all for this episode of It's Been a Minute from NPR. I'm Brittany Luse. Talk soon.