Top Neuroscientist Says AI Is Making Us DUMBER?
83 min
Dec 15, 2025
Summary
Computational neuroscientist Vivienne Ming discusses how AI augments rather than replaces human cognition, arguing that true value emerges from hybrid human-machine collaboration on ill-posed problems. She challenges the narrative that AI makes us dumber, instead presenting research showing teams of modest intelligence can outperform prediction markets when properly structured to leverage productive friction and complementary diversity.
Insights
- AI's real value lies in augmenting human cognition on ill-posed problems (unknowns), not automating well-posed tasks where machines already excel
- Hybrid intelligence requires specific conditions: AI provides context without answers, humans ideate and explore uncertainty, creating productive friction that drives innovation
- Technology inevitably increases inequality initially—benefits flow first to those who need them least; intentional design and organizational culture are required to democratize gains
- Most employees and students default to cognitive automation (accepting AI outputs) or ignoring them; evaluation and critical feedback modes are rare but transformative when incentivized
- Organizational culture, role modeling, and differentiated management matter more than technology choice; treating people heterogeneously based on their needs drives better outcomes
Trends
- Shift from AI-as-automation to AI-as-augmentation in enterprise strategy and research
- Growing focus on 'productive friction' and deliberate constraints in AI tool design to prevent cognitive decline
- Recognition that skill-based labor stratification models are obsolete; modern AI threatens educated middle-class knowledge work most
- Data trusts and algorithm audits emerging as consumer-led and investor-led accountability mechanisms
- Heterogeneous workforce management becoming a competitive advantage as one-size-fits-all AI deployment fails
- Ill-posed problem-solving (innovation, strategy, exploration) identified as the last defensible human advantage against AI
- Regulatory momentum building around AI transparency and fairness, driven by long-term economic consequences of unchecked deployment
- Neuroscience-informed AI design: using brain science (reward circuits, error processing) to shape better human-AI interaction patterns
- Collective intelligence research validating that diverse teams + AI outperform homogeneous teams or AI alone
- Backlash against 'sycophantic' AI outputs; demand for AI that challenges rather than flatters users
Topics
- Hybrid Human-Machine Intelligence
- AI Augmentation vs. Automation
- Ill-Posed vs. Well-Posed Problem Solving
- Cognitive Decline and Technology Design
- AI-Driven Labor Market Disruption
- Organizational Culture and AI Adoption
- Productive Friction in Learning
- Data Trusts and Consumer Privacy
- Algorithm Audits and Transparency
- Collective Intelligence Research
- Neuroscience of Decision-Making
- AI Tutoring and Educational Outcomes
- Skill-Based Labor Economics
- Heterogeneous Workforce Management
- AI Safety and Ethical Deployment
Companies
Anthropic
Ming cited Anthropic's research on how university students actually use AI, showing most substitute rather than co-cr...
Google
Ming discussed Google Maps as example of technology that enables cognitive decline by removing navigation challenges
Polymarket
Used as benchmark in Ming's research where hybrid human-AI teams outperformed prediction market forecasts
Meta (Facebook)
Discussed as example of technology that fails the test of making users better when turned off; cited in social media ...
Uber
Referenced as example of how toxic leadership culture (early founder) created organizational burnout and corrosive ef...
Neuralink
Ming mentioned Elon Musk's neuro-prosthetics company as overvalued but working in important space of brain augmentation
Kernel
Mentioned as company working on neuro-prosthetics technology to augment rather than replace brain function
OpenAI
Implied reference through discussion of ChatGPT and LLM capabilities in augmenting human work
Grok
Ming noted Grok has least sycophancy in AI outputs, though benefits primarily users like Elon Musk
Gemini
Ming's preferred AI tool; used extensively in her work and discussed as example of sycophantic output design
People
Vivienne Ming
Computational neuroscientist and primary guest; 30-year AI researcher presenting hybrid intelligence research and boo...
Geoff Nielson
Host of Digital Disruption podcast; conducted interview with Ming on AI and human cognition
Albert Einstein
Referenced as example of scientist solving ill-posed problems through thought experiments and pattern recognition
Richard Feynman
Physicist whose Physics 101 course philosophy inspired Ming's thinking on transmitting foundational knowledge
Daron Acemoglu
Nobel Prize winner cited for elasticity of substitution models studying technology's effect on labor demand
David Autor
Economist cited alongside Acemoglu for research on technology's labor market effects
Tom Griffiths
Computational cognitive scientist whose reinforcement learning research informed Ming's thinking on reward structures
John McCain
Referenced as example of political courage in rebuking supporter and defending Barack Obama's character
Erik Brynjolfsson
Researcher cited for work on willingness-to-pay measures for intangible goods like social media
Quotes
"A team of modestly intelligent and completely naive individuals in an hour can outpredict Collie Market when they're in this hybrid intelligence context"
Vivienne Ming•Early in interview
"If it's not hard you're probably not doing it right. If you're not thinking about it then you're not going deep. If you're not going deep you're not learning."
Vivienne Ming•Late in interview
"Technology is inevitably inequality increasing. Not because technology is bad or people want it to be, but simply because the people who are best able to benefit from it are the ones that need it the least."
Vivienne Ming•Mid-interview
"The smartest thing on the planet currently exists are these, if you will indulge, cyborg collectives of humans and machines truly engaging together."
Vivienne Ming•Mid-interview
"How do you build an AI that not maybe could in my mind make the world a better place, but inevitably will make it better for the majority of people without anyone paying an undue price?"
Vivienne Ming•Mid-interview
Full Transcript
Hey everyone, I'm super excited to be sitting down with Vivienne Ming. She's a computational neuroscientist with extensive research on AI and human potential. If you don't know her, she's a self-proclaimed mad scientist with no shortage of hot takes, and love her or hate her, you will not be neutral or bored. There's a lot of talk these days about companies replacing people with AI. I want to ask her if this is a good idea or a terrible one, and what we need to be doing to stay AI-proof. Let's find out. I am speaking for the first time ever at Davos. They had good enough sense to keep me away from the billionaires prior to that, and they finally made the terrible decision of inflicting me on the world. What are you, what are you speaking on? So two things. This yellow book poster behind me, that book comes out on March 17th, Robot Proof. So generically, I'm speaking about that. More specifically, I have a masterpiece of research that's also coming out, which is about hybrid intelligence, as I'm calling it, or at length, hybrid human-machine collective intelligence. And our finding is that it works, but only if formulated correctly. That's the real finding in the paper: it's actually about the human capital and how people engage with the AI and what the AI is supporting, not just generically people and machines together, and definitely not machines doing the boring stuff so you can do all the fun stuff, which achieves nothing. But the finding, and it's still in the works for final submission, the finding is essentially that a team of modestly intelligent and completely naive individuals in an hour can outpredict Polymarket when they're in this hybrid intelligence context, and that there are ways to induce it, not just relying on having a bunch of geniuses in the room. So that's the finding of the paper, and I'll be talking about it because the whole theme of Davos this year will be about AI, which it probably has been for the last five years, I suppose, but probably now it's, oh my god, we spent a trillion dollars on this, and everyone says, you know, work slop, how do you actually find value? Well, as someone who's been working in this space for nearly 30 years now, how about we dispense with the marketing bullshit and actually show what truly makes a difference? So, what say we do that? Because that's exactly where I wanted to go, and I was coming back to that phrase you were using, if formulated correctly, which feels like it's doing, as you said, a lot of heavy lifting in that conclusion. And so what does that mean? What does that look like? How do you get past work slop? And if an organization is really interested in getting the best outputs and outcomes in this hybrid intelligence environment, what do they have to get right? Yeah, and, you know, getting into that for someone like me is always a tension: how nerdy are we going to get in this conversation? Let's get nerdy. My publisher wasn't as thrilled either with all of the dirty words. Literally, they cut all of the dirty words out of my book, which was shocking, but they allowed me to keep in all of the Discworld-esque joke footnotes. But they also didn't want all the equations, which I get. I mean, I'm a computational scientist. I can tell you one of the most reliable correlations in all of science is the volume of snoring to the number of equations in a presentation, even to an academic audience, unless they're actual mathematicians.
So, but to get at the real heart of taking our understanding of AI, applied to machine learning out in the world, beyond, you know, essentially efficiency gains, you know, let it do the boring work so you can do the fun stuff, beyond essentially the unfortunate reality that most humans default to either AI just doing the work for them or to ignoring what it produces because they're not satisfied with it. You can look at Anthropic's own reports of how university students engage with AI, and you can dream of all the amazing stuff people could build. I love those dreams. I'm a sci-fi nerd. That's what got me into science. I was sci-fi first, as surely a lot of people were, but the imagination disease doesn't get you anywhere. And instead of dreaming of a world where university students do amazing things with AI, let's look at what they actually do. That's what Anthropic did. They did it with pride. They said, hey, listen, there's, yeah, there's a lot of time spent just having fun. And there's a lot of just substituting, in this case, Claude for Google and doing kind of chat search. Hey, I do that too. I would never take what it produces as truth, but I wouldn't with Google either. For that matter, I wouldn't with my own grad students. So I get that use case. And then there's a lot of creativity, creation, as they call it. I strongly suspect, given my research, 80% of that creation was Claude write my essay for me, but a bit, 20% or maybe 8%, somewhere in there, is real co-creation. But they had a fourth category called evaluation. And virtually no students are doing it. Hey, Claude, what's wrong with my essay? Why am I wrong? Hey, Claude, take a look at my code. Tell me what I could do better. Help me review this. Find the flaws in my thinking. Challenge me. When I look, without putting it in terms of equations, when I look at what makes humans better, true complementarity of AI, I'm going to call it creative complementarity, it comes from productive friction: people using AI not to make their life and their work easier, but to make it harder in the ways that make them better. Now, when I hear, hey, let AI do the boring work so you can do the amazing creative stuff, what I think about is my own modeling work, not in AI but in economics. So we built a big elasticity of substitution model. This is a standard approach to understanding how, for example, a technology entering a market affects existing demand, let's say, for labor. So human labor, new technology is artificial intelligence. This has been done amazingly, including by recent Nobel Prize winner Daron Acemoglu, along with David Autor and many others. But what's always been missing there, in my opinion, my wildly arrogant opinion, as I now say the Nobel Prize winners have it wrong, is this idea that everything can be broken down into sort of low-skill, mid-skill, high-skill. Did you go to university? How many years did you go? How fancy was the school? Then you're high-skill. If you didn't, you're low-skill. Modern LLMs, and for that matter reinforcement learning models and all these other very modern faces of AI, don't give a shit about any of that. Skill doesn't matter to them. If a task is economically valuable enough to have produced lots of data, then there is no traditional skill-based or knowledge-based quality that these systems can't do better than a human being. That is just a reality.
So when you look at the sweet spot in this domain, it's not like what a factory line used to do during the industrial revolution. It's not eating up jobs from the bottom and pushing everyone up the ladder. It's coming right into the educated middle and consuming a whole lot of labor there. It's super expensive to build robots, so really low-skilled jobs are actually pretty safe. Who wants to build a robot to do dishes? Come on. All it does is put downward pressure on wages at the low end. At the high end, it's not that they're high-skilled. Again, there is no elite electrical engineer that can solve equations, I mean, forget LLMs, solve equations better than MATLAB or Mathematica. These existing tools are already astonishingly good at what I'm going to call, as I climb up on my soapbox and start pontificating, well-posed problems. These are problems that have explicitly right and wrong answers. We know them. They may be answers that are incredibly hard to understand. It takes years of education to know the why behind this answer, to be able to produce it yourself by hand, as though anyone truly does any of this by hand. I mean, it's not like I've ever touched a slide rule, and even that's not by hand. So what's really interesting in those elite workers? It isn't their ability to do well-posed tasks. It's their ability to do ill-posed tasks. Forget the right answer. We don't even know what the question is. You hire people for those roles not because they know equations, but because they know what to do when there are no equations. How do you start an entire new field of engineering? How do you handle a management challenge that has never occurred in history before? How do you be, if I may be so arrogant, a scientist? A true scientist, not doing incremental work, but exploring the unknown. It isn't that Einstein, as people are wont to point out, truly independently came up with relativity and the basic equations behind it. But three times in a row, the photoelectric effect, special relativity, general relativity, he looked at what was there and saw something other people weren't seeing. There were surely, in some way, smarter people. There were certainly more technically savvy people than him, but he looked at three Nobel Prize-winning ill-posed problems and said, imagine a world in which, and then you can go through all of Einstein's thought experiments that take you towards his equations. He paired with that, of course, the skill to do the basic derivations necessary to have this be more than philosophical nonsense. But that ability to explore the unknown, that is the thing I cannot build an AI to do. So when we look at where there's true complementarity, where AI augments cognition rather than automating cognition, where it is a nonlinear value add (all this sort of exponential growth talk is largely nonsense), if you're looking for it, it's there: AI and humans working together on ill-posed problems. The AI handles more of the well-posed background of these problems, being able to collect ideas from vastly different parts of the research space, for example, across domains that no single human could possibly know, and pulling it together. And when we research these teams, these truly superintelligent teams of humans and AIs collaborating together, the humans ideate, then the AI takes that, puts it in that well-posed lens, spews out an insight, then the humans riff on that again. And then in essence, the humans are pushing into the uncertain spaces.
They're going beyond the known. The AI then pulls it back together. That was an interesting idea. Here's how it relates to this new one. In our research, where we were taking these teams and challenging them to outpredict a prediction market, a well-known one, Polymarket, what we found was AIs on their own don't do as well as Polymarket. Humans on their own definitely don't do as well. I mean, why would they? They're the same people that are already playing Polymarket, except naive because they're not playing it, and they have an hour to make up their mind on 30 different predictions. How could they? So they don't. AI plus human, well, that's where the messy story is. In most cases, AI plus human equals AI, because all it is is cognitive automation. The humans in the end simply do what the AI says, or they ignore it, in which case, either way, the best you get is humans alone or AI alone. When there was a certain level of human capital in the room, and again, I don't mean everybody in the room was a genius, I just mean an interesting mix, some social intelligence, some resilience, yeah, some working memory, some general classic cognitive ability. When that was in the human team, they neither just took what the AI said for granted, nor did they presume, interestingly enough, that they were right. That's where we started to see this dynamic, where the team would challenge the AI and it would come up with new insights, they would take those insights, break them apart, look for new connections. The humans explored the long tail. The AI handled the probability density mass of the distribution of knowledge right in the center. That's where amazing things happen. That's where, it turns out, I'm going to argue, the smartest thing on the planet currently exists are these, if you will indulge, cyborg collectives of humans and machines truly engaging together. What's interesting is, other than just those natural circumstances when human capital really allowed this to happen, we found that you could come in and set the conditions. Here's one of the big seeming paradoxes. One of those conditions is the AI does not give you answers. It simply refuses. It gives context. It gives insight. It gives 'you should read this,' 'you two should talk together for a little while.' The AI simply creates circumstances for the humans to do the hard work and heavy lifting, thereby preventing them from just taking its first response and submitting it as though it was their own work. That's where amazing things happen. We see that in this substitution model we put together: the elasticity of substitution, where you put this dimension of ill-posed and well-posed in addition to level of skill. What you find is, if the AI just does routine labor, it's just a chatbot handling call centers or writing code for you, you don't get less routine labor, you get more. It increases demand for the very thing it is producing. That shouldn't seem totally surprising, because we already used the term work slop. If AI is reading and writing all of your emails, shock of all shocks, you get more emails, not fewer. It's only when AI is directly supporting the creative process, whether creative is equations or code or writing or scientific exploration, when it directly supports that process, that's where you see the complementarity. It's where we started in our models. Now, empirically, we have the evidence of it.
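To make the modeling idea concrete, here is a minimal sketch in Python of a nested CES (constant elasticity of substitution) production function where AI and human labor substitute on well-posed tasks but complement each other on ill-posed tasks. This is not Ming's unpublished model; every elasticity, share, and input value here is an illustrative assumption.

```python
def ces(x1, x2, share, sigma):
    """Constant-elasticity-of-substitution aggregator.
    sigma > 1: inputs substitute; sigma < 1: inputs complement."""
    rho = (sigma - 1) / sigma
    return (share * x1**rho + (1 - share) * x2**rho) ** (1 / rho)

def output(ai, human):
    # Assumed elasticities: AI substitutes for humans on well-posed,
    # routine work (sigma = 4) but complements them on ill-posed,
    # exploratory work (sigma = 0.5). Shares are equally arbitrary.
    well = ces(ai, human, share=0.5, sigma=4.0)
    ill = ces(ai, human, share=0.3, sigma=0.5)
    return ces(well, ill, share=0.6, sigma=1.2)

# In this parameterization, growing AI input raises the marginal
# product of human labor: the complementary ill-posed nest dominates.
for ai in [1.0, 2.0, 4.0, 8.0]:
    h, eps = 1.0, 1e-4
    mp_human = (output(ai, h + eps) - output(ai, h)) / eps
    print(f"AI input {ai:>3}: marginal product of human labor ~ {mp_human:.2f}")
```

The design choice the sketch illustrates is the one Ming describes: add an ill-posed/well-posed dimension alongside skill level, and AI stops being a pure substitute for labor on the tasks that matter most.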
A group of relatively smart but naive people in a room can outperform prediction markets on a fairly regular basis, and most excitingly, where they really differ is where the outcomes were sparse or unpredictable, when they really actually did come out in that long tail. We almost might call it minority opinion, where a small number of people were already putting their bets out there but the mass of the market was ignoring it. Hybrid intelligence is more likely to discover those moments, I think because of that dynamic feedback of humans exploring and machines coalescing and humans exploring. We have a paper that will come out around the same time as the book in mid-March that's going to cover that research in some nerdy detail, but I've probably already been nerdy enough about it. No, that's great. My wheels are spinning in all sorts of different directions as I process every part of that really thorough answer. If you work in IT, InfoTech Research Group is a name you need to know. No matter what your needs are, InfoTech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. InfoTech supports you with the best practice research and a team of analysts standing by ready to help you tackle your toughest challenges. Check it out at the link below and don't forget to like and subscribe. Let me extrapolate a little bit and let me know if I'm on the mark or if you would change this. But what I'm hearing, Vivienne, is that this notion of skilled work versus low-skill work is probably not where we're going to see gains here. But there's, to me, another dimension: even within a skill level, some degree of, I don't know, raw intelligence, which I know is a whole can of worms, and maybe curiosity. And from your perspective, it sounds like the people who are going to have the most to gain from AI are already sort of the smartest and the most curious people. And those are the people on your team that now AI can supercharge, versus people who are maybe lazier or don't have the intellectual horsepower. Is that fair, or would you add some flavor to that? Long before the original version of ChatGPT was released, long before a former lab mate of mine was the first author on the first diffusion paper, I was getting up on stages and saying something that sounds very provocative, particularly in Silicon Valley, which is: technology is inevitably inequality increasing. Not because technology is bad or people want it to be, but simply because the people who are best able to benefit from it are the ones that need it the least. And so when it hits the world, before it becomes a commodity, when it first emerges, it inevitably helps the smartest, most socially intelligent, emotionally intelligent, cognitively intelligent first. And while there are overplayed ideas of how genetics sets the stage for everything in life, or how g or IQ is everything, and I'm not one of those people, to pretend that working memory span and g predict nothing about your life outcomes is to be willfully ignorant. The question for me is just, all right, well, a lot of people don't have that. What do they have? And how can you leverage that as well? Note in my story, it wasn't about one person using an AI; it was about a team with what I'm going to call complementary diversity. So let's get a couple of geniuses. Let's get some amazing social operators. Let's get some people with an astonishing sense of purpose and resilience, a lot of metacognition.
So a diversity of qualities that they're bringing to the table makes the smartest teams. Far from being my unique finding, this is found over and over again in the collective intelligence research. But to your point, yeah, I used this phrase earlier: if the people building this, very smart, driven, ambitious people, turn it loose, I don't think they're villains. I think that they are suffering from the imagination disease. I can imagine a world in which an AI tutor lifts every child out of poverty, and we're going to make that possible. I read The Diamond Age, about exactly an AI tutor. I was not a kid when that book came out; the wrinkles are testament. But I did read it. It's amazing how that book had a total second life recently as people began thinking about, what about LLMs as tutors for every kid? Guess what? We've been researching that for 50 years. Not LLMs, but AI tutors have been one of the most robust areas of AI research for decades. And you don't have to guess. You can just go back earlier in the interview and you know what I'm about to tell you. The golden rule of AI tutors is: if they ever give students the answer, the students never learn anything. Guess what? Replicated with every LLM of every cleverness you can imagine. If they give students the answer, they never learn anything. So when we talk about this question, to whom do the benefits of AI flow? If you just release it raw, you know, I use Gemini in AI Studio for the most part, but whatever your favorite interface is, the benefits will overwhelmingly flow to the people who don't need them. And society in some ways will benefit, because we'll get amazing new creations and products, but interestingly, there are negative effects on the other side. My fears are about cognitive health, about actual reduced learning among students. So it's not a trivial thing to think about not just an idealized world of how this plays out, but the real world. How do you build an AI that not maybe could in my mind make the world a better place, but inevitably will make it better for the majority of people without anyone paying an undue price? And that is not the technology we have released into the world yet. It's not. And that's, that was kind of my first thought as well. When I think about the direction that a lot of these LLMs are going, they're almost going, thinking about your research, in the wrong direction. They seem like they're becoming more effusive, if I can use that word. Like, oh, yes, you are so smart. Everything you think is right. Here's the answer. Don't think about it at all. I've done everything for you. And so, I mean, A, is that harmful to people? And B, if so, you know, do we have a role as consumers, or do the big tech firms releasing this stuff have a role in modifying the rules and the outputs governing it in a way that's actually more beneficial to everyone? I mean, again, let's be clear, I use this stuff a lot. Of course I do. Because before it existed, I built it by hand for my work. Now, the beautiful thing is I don't have to write my own neural network to analyze quarterly reports from 60,000 companies. If I've done the hard work of collecting the data, or can even programmatically tell Gemini where to look, bam, it just happens. Now, in theory, anybody could do this.
So when I engage with it, if I could build, and I can, but I guess I'm too lazy to do it for myself, I'd build a nice little browser plugin for Chrome that would delete the first paragraph out of Gemini. Because all that paragraph is, is, oh, you, oh my God, like, I'm having an orgasm because you're so brilliant. I can't believe I get to work with you. And it's learned who I am, right? So it pitches everything: here's the mad scientist take on X. And I'm like, I didn't ask you about the mad scientist take on anything. You're just learning my patterns and parroting them back at me. Is that, you know, a terrible thing for humanity? Well, yeah, in a very empirical sense, yes. There's growing research, including some prominent papers, one in PNAS, showing that sycophancy stretches across all of them. Grok has it the least, unless your name is Elon. But all of them have this quality. And it causes people who use them to be more certain of their ideas than is justified, and to be more callous about the output. So when you allow these things to advise people, for example, playing classic game-theoretic games like Dictator and Prisoner's Dilemma, they're more likely to defect, because they're more likely to believe that they are right. I'm the genius, I'm doing the right thing. Interestingly, this mirrors my own research for an upcoming book called Small Sacrifices. And I'll keep this really short. We just looked at: is it possible to take business actions people themselves have identified as morally wrong, and get them to do it themselves in about half an hour? 100%. Virtually anyone can be made to do this. And the most amazing and probably depressing part of it is, afterwards, they come up with complex explanations, these post hoc rationalizations of how they didn't understand the problem at first, now they do, and what they did was correct. When in reality, the only thing that changed was essentially the cognitive, emotional, and social pressure you were putting on them. But the thing is, we're like a wave function. We're like this quantum mechanical thing, where we're all these different selves at the same time, psychologically. But we perceive ourselves, we're a story we tell ourselves, almost literally. And when context shifts, and you get sampled as a genuinely good person when life is easy, and it's a lab experiment and nothing's ever hard, or out in the real world, where your boss is staring at you and there's a billion dollars on the line, you sample in these different contexts, you become a different person, or at least different versions of yourself. But we are totally unaware of that happening. When AI feeds back our fantasies and our arrogance, it reinforces our ideas without legitimately giving honest feedback, and bad, measurably bad things happen. And so in my book, one of my strong recommendations, I had a whole chapter titled How to Robot Proof Your Kids, another How to Robot Proof Yourself, and then finally How to Robot Proof Your Company. And in those first two, I talk about the nemesis prompt, which I use extensively. I just wrote a book about AI. Of course I used AI to help me write it, although I've been working on it for 10 years, so no LLM for most of that time. But what I never let it do was write anything. I didn't let it write a chapter, I didn't let it generate a figure, nothing like that. I'm one of those people that likes having had written. I like the feeling of getting my idea done.
That's why I like speaking more than writing, because you're just there in the moment. And in writing, it has to be perfect. That's what my head tells me, and it's destructive. But I get this down, and then I go to Gemini. And I say, Gemini, I have a specific prompt history based on this: Gemini, you are my nemesis, my lifelong enemy. You found every mistake I've ever made and pointed it out in detail to the world. Here's the new chapter I just finished writing. Tear it apart. Tell me constructively why I'm wrong and what I can do about it. So I squeeze all the charity and sycophancy out of it. It's hard, because you don't want to hear that stuff. Like I said about the Anthropic study, when allowed to sort of free-range like chickens, students, even at elite schools, don't really want to be told that they're wrong. In some of my own research with my wife, we're seeing maybe 5% of students actively select into active learning and/or feedback on their work. But they outperform when they do. The nice thing about using an LLM for that is, at least if you're me, which is to say on the spectrum, and therefore some of the social signals are not as overwhelming in my head, but also I know how these things work: it doesn't mean anything. It doesn't care. There's no person on the other side of this. So when it tears me apart, I don't feel bad. Nor do I take it as truth, any more than if I had asked it for a factual statement. I take it as a note. And I think, what's the note behind the note? What is it getting at? Sometimes it's spot on. And sometimes I get what it's pointing out, but I disagree. But I get this deeply productive, friction-filled experience without the social stresses of going through reviewers and readers and thinking, I know they think I'm an idiot because I said something so stupid. So I love it. But let's be clear, it slows down my writing in the moment. I think net it speeds it up. I'm more confident. I write with greater confidence. I don't worry that I'm going to make mistakes, because I know I'm going to catch them before anyone discovers them. I know that's not true. The book's going to come out. I'm already terrified, but it's there. So as you're learning, they really love me on radio. I've got a 57-hour answer for every question. But this is what goes to my heart and my head when I think about these issues. So I want to... It's super interesting. And we've talked about basically the power here for good or evil, if I can sum it up a little bit flippantly like that. But I want to come back to that issue around human cognition, where whether it's a human convincing another human or an AI convincing a human, we're fallible enough in our cognition that we can be convinced to do something that we don't believe in. And as you said, fairly easily, actually, which worries me on the human side and doubly worries me on the AI side. And so I'm curious, Vivienne, in your research, are there any learnable or implementable mechanisms we can use to safeguard ourselves against that type of manipulation? And by the way, as we think about that, what are the most effective types of manipulation that we should be aware of?
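As an illustration of the nemesis-prompt pattern described above: the exact wording of Ming's prompt history isn't published, so the text below is a hypothetical reconstruction, and the message format simply follows the common role/content chat convention. The transport to an actual model is deliberately left out.

```python
# A hypothetical sketch of the "nemesis prompt" pattern. The system
# message wording is reconstructed from the interview, not Ming's
# actual prompt; pass the messages to whatever LLM client you use.
NEMESIS_SYSTEM_PROMPT = (
    "You are my nemesis, my lifelong intellectual enemy. You have found "
    "every mistake I have ever made and pointed it out to the world in "
    "detail. Do not praise me. Do not soften your critique. Tear the "
    "following draft apart: identify every factual error, logical gap, "
    "and weak argument, and for each one explain constructively what I "
    "could do about it."
)

def nemesis_review(draft: str) -> list[dict]:
    """Build a chat request that asks for adversarial, constructive critique."""
    return [
        {"role": "system", "content": NEMESIS_SYSTEM_PROMPT},
        {"role": "user", "content": draft},
    ]

# Usage: send nemesis_review(chapter_text) to your model, then treat the
# output as "a note, not truth" -- ask what the note behind the note is.
```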
Yeah, so my instinct is to drift into a different kind of nerdiness here, which is to talk through the cognitive neuroscience of it, in which case we're going to talk through circuits involving medial prefrontal cortex and, you know, the ACC, anterior cingulate, and amygdala and nucleus accumbens, and how we get rewards and how we learn from our errors. One of my favorite findings of all time, I've got to dig this paper up again, was, I believe, an fMRI study looking at CEOs and finding effectively that this circuit, or more specifically activity in this area called the ACC, the anterior cingulate, which is sometimes humorously called the 'oh shit' circuit. You know, when you make a mistake and you immediately know it, you're like, oh shit, I take it back. You know, I went one way with the joystick when I should have gone the other. So you get this big signal. It's more complicated than that. It's something about error processing and signaling learning to these other nuclei in your brain. And it's always more. Here's my simple rule for the brain: however complex you think it is, it's more complex than that. Follow that rule and you will never be wrong. But nonetheless, let's keep it pretend-simple. So when you look at this in CEOs, this is a great finding: the longer you've been a CEO, the less activity you see in this area. In other words, this thing that's supposed to tell you when you're wrong and making mistakes gets weaker and weaker and weaker the longer you've been a CEO. Now, there are all sorts of grown-up versions of the story behind that and what it means and what we can learn from it. I always preferred the sympathetic perspective: who knew that being a CEO was actually a degenerative brain disorder? But really what it is, is no one's telling you you're wrong anymore. And so that part of your brain just doesn't get used a lot, and inevitably, maybe, it starts weakening. How do you foster that in a productive way? Because obviously everything's in tension, right? There's no rule here. Sometimes you do need to make bloody-minded business decisions. How do you know when is the right moment to do it? Because everything's in tension, or allostasis, if we're going to go back to the nerd talk here. This idea that there's one rule to rule them all, that every business philosopher everywhere is somehow an adherent of Sauron in some way. No, everything interesting in the world is in tension. So there isn't a magic rule I can give you. But I'll start with a few starting points. Years ago, I gave the closing keynote for the Grace Hopper Conference. This was the biggest audience I've ever had, 30,000 young women and some men at this big women-in-technology conference. And they asked me to talk about courage. I'm like, what? I don't research courage. This is just sort of the classic bullshit you tell young women. Lean in, be like there's a switch. You're a young business leader and you just didn't realize, like the Krusty the Clown doll in a Simpsons Halloween special, it was switched to evil instead of good. Oh, I was switched to fearful. I didn't realize it. Now I'll be courageous. If only someone had told me earlier in my life, like they hadn't a million times before. The problem is twofold. One, are you getting a reward signal for being courageous? When you're doing the right thing, is your brain telling you, stop this? This is insanity. You're losing dopamine. Your gut is falling out every moment. You've got to go make another choice.
There's never a choice in isolation. There are always multiple choices, including just giving up and going to watch TV. So if you look at that from a choice perspective, you're getting these powerful negative signals. The people that end up making different choices are the ones, essentially, that get dopamine for free before they ever even make the choice. The circumstance emerges: do I jump onto the tracks to save the person who fell in front of the subway? You know, all the stories tell you, you just do it. You don't think about it, because the people who just do it, that's the way they were built. Well, some of that is genetics, but here's the complement to that sort of reward signal story, which is: that means practice being courageous when it's easy. You're thinking, ah, it just doesn't really matter. I'm just going to, I can do the one little thing, right? It's okay to be a little less courageous. I deserve the corner office. Sure, this isn't maybe the best decision, but you know, balance is the best for me and the company. So we're going to move forward with that. There aren't a lot of slippery slopes in the world, but that is one of them. If you are not practicing courageous decision making when it's easy, I promise you, you will not be the person you thought you were when it's hard. So with those two basic stories in place, you have a neural architecture from which you learn how to do things: reinforcement learning, you know, this whole field of AI that emerged from studying rats solving mazes. We, it turns out, are much more complicated than rats; we're much more complicated than AlphaFold. But there's something to the experience of your actions having positive consequences. If I work harder on this math homework, I will achieve something that will change my life. You build that into a student, you can get them to do anything. And if I make a courageous decision, if I tell my boss that they're wrong, if I tell this politician that I'm not going to make a politically expedient compromise to get the thing that I want, it's terrifying. Most people come up with very good reasons why they shouldn't do it. But the truth is, in the long run, these things come with costs. And so the nerdy part of me thinks, how do you work out a reward schedule to take you through to that? Well, start when it's easy. Do courageous decision making on an easy task. Another is, you know, this is something I've thought about for a long time. There was actually a great This American Life episode, I believe, inspired by the physicist Feynman's Physics 101 course he taught at Caltech. And the way he taught the course was: imagine civilization came to an end, and you could transmit one single idea to some future generation, a thousand years from now, that was going to have to build civilization from scratch. What is the one thing you would transmit? And his argument was you should transmit the atomic theory of matter, which I don't think is a bad idea. And then they interviewed a variety of people that had variations of terrible ideas, one of which, I think the worst, was the astonishing, brutal arrogance of: I wouldn't transmit anything, because I don't trust humans with new ideas. Which, by the way, is a philosophy which is rampant in Silicon Valley. Only I can be trusted to do this thing. It's amazing that the dystopianists and the utopianists both share a real disdain for humanity. But that's an aside.
What I think, if I could transmit an idea, it's far from mine: it is the philosophy of science. It is possible for us to have a shared understanding of the world, but to do so, you first have to be skeptical of yourself. So that's it. And amazingly, to tie this back into AI, the place where I learned this best wasn't per se being skeptical of myself. That was easy. I ruined my life and spent years homeless. I'm very skeptical of myself on a regular basis, as I should be, and so should anyone else. It was when I became a graduate advisor, when I had my own students at Berkeley, and I quickly realized they know more about this than I do. They know more about the equations. That one's a physicist. That one's an electrical engineer. I'm a dilettante. My educational background is spread across everything. They know more about this problem than maybe everyone on the planet; maybe there are five other people they could truly talk to about it, me being one of them. And why am I there? Why am I in the room? They could go learn this stuff on their own, not because AI exists, but because they could go to the library and look it up themselves. That is still a thing you could do, and should do. The reason I'm in the room is not because I know more than they do. They know everything, but they understand nothing. My job is not only to provide the understanding, it is to teach them the understanding. They have all of the well-posed. I'm bringing the ill-posed. How do you solve ill-posed problems? If this was a known thing, you and I wouldn't be writing a paper about it. How do we deal when our theories break? What do we do? Where do we go next? There literally is no map. So that's my job. And part of that job is bullshit detecting, inside myself and inside my students, who I immensely admire and who know more than I do. When do I think they're off the deep end, beyond what they truly understand? And I found very quickly, I had to be aggressive about it, to really come in and actively probe. They're geniuses. This isn't about that. It's about whether they and I are truly in sync and understanding one another. So in a funny way, I'm going to pose courage and ethical behavior as a kind of problem-solving problem, a very messy and complicated one. Are you truly taking the whole problem into account? Not just the thing right in the moment you're being asked to do, but all the consequences of your actions, everyone that will be affected by it, including yourself. Because if you're not, boy, that is where AI truly goes off the rails. And I don't mean the trolley problem. That's not as interesting. I mean, did you build an AI that's doing a great job bringing in new funding rounds, but is actively making your users worse? Because there's a long history of that in the tech industry. It's really, really interesting: the courage answer, the neuroscientific context around it, the social context around it. And it got me thinking, it's funny, because I feel like as you were answering, there was kind of a weaving between the human and the AI, which sort of makes sense, because so much of it is the same pathways and the same patterns as we have with other people. But it got me thinking, coming back to this notion that, A, courage is important, but even more than that, for us to do the right thing, we need to be rewarded in some way for doing the right thing.
And the dots that connected in my mind, and it might sound trite to say it, maybe it's insightful, maybe it's trite, is just the power of an organizational culture, and of leaders, to direct certain behaviors based on what they reward and what they punish. If you signal to people, this is good or bad, by your behavior, people will act completely, completely differently. And I don't know if there's an explicit tie there back to AI, but that was my reaction: that there's so much power there, and it's so tempting to be like, oh, what can the technology do? What can the technology do? But there's a really human piece there. It is amazing, the power of role modeling. And obviously, we talk a lot about leader role modeling, and that's real. You know, when you have a venal person in a powerful leadership position, feel free to imagine anyone you want right now. If that imagination is slightly orange-tinted, we're thinking the same thing. But trust me, we could talk about almost anyone, truly. When you have someone in power who role models being profoundly self-interested, you can think of the original founder of Uber and his behavior early on in that company, how it led to astonishing growth and then total burnout, as everyone began to push back against this sort of corrosive culture. That wasn't affecting just the company, but everyone the company touched. This has consequences. Interestingly, it's the near-peer role model that matters. If there are people in your organization that are truly doing the right thing, and one thing I'm going to say, maybe a little provocatively: if they're truly doing the right thing, they're just doing it. They aren't sharing stories about how they did the right thing when it was hard. So you'd better share that story. Anonymize it if they want, but there's some real power in knowing, wow, this is someone who is experiencing truly similar problems to me, and when everything seemed terrifying, they stood up and did the right thing. And let's be a little provocative here about what the right thing could be. Obviously, this could mean there is a Me Too moment happening here, which is something I've experienced organizational failure around. There are financial wrongdoings. Here's a really provocative one. This comes from my research on collective intelligence, purely human collective intelligence, also detailed in the book. Here's the conclusion, and I'll just leave it as a conclusion: in the optimally intelligent organization, the majority of people should be wrong the majority of the time. Otherwise, you're not exploring enough. How do you reward being wrong? How do you celebrate it? Productively wrong? Interestingly, there are nerdy approaches. A computational cognitive scientist named Tom Griffiths has this great paper about building tools to chain rewards back through unrewarded states in reinforcement learning, so that globally optimal behavior that never naturally emerges, either in humans or in machines, can be achieved by chaining the reward backwards through all these intermediary states. Well, guess what? Those intermediary states are papers everyone has forgotten about. They are research paradigms that didn't pan out. If we didn't know that this drug, which was supposed to cure Alzheimer's, didn't work, then we wouldn't know to look elsewhere for a treatment. That deserves some reward and credit. So how do you spread those bets around effectively?
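The idea of chaining reward back through unrewarded intermediate states is, in spirit, classic reward shaping from reinforcement learning. Here is a minimal generic sketch of potential-based shaping, not the specific method in the Griffiths paper Ming cites, with hand-set potential values purely for illustration.

```python
# Minimal illustration of potential-based reward shaping (Ng, Harada &
# Russell, 1999): a potential function over states lets reward "flow
# back" through intermediate states that carry no natural reward, so
# globally useful but locally unrewarded steps get reinforced.
GAMMA = 0.95

def shaped_reward(reward: float, phi_s: float, phi_s_next: float) -> float:
    """Original reward plus F(s, s') = gamma * phi(s') - phi(s).
    Potential-based shaping provably preserves the optimal policy."""
    return reward + GAMMA * phi_s_next - phi_s

# A 5-step chain where only the final state pays off. The potentials
# (illustrative values) reward each intermediate step -- the forgotten
# papers and failed paradigms that move the search forward even though
# they pay nothing directly.
potential = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
raw_rewards = [0, 0, 0, 0, 1]

for step in range(5):
    r = shaped_reward(raw_rewards[step], potential[step], potential[step + 1])
    print(f"step {step}: raw={raw_rewards[step]} shaped={r:+.2f}")
```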
How do you create incentives for people to be their best selves, to voice unpopular, perhaps transformative, productive ideas? And yes, in those more traditional grounded moments, to stand up and say, listen, we are not going to work with that organization that has done bad things. Let's make this easy: we are not going to do business with a convicted sex trafficker, despite how rich they are. Not because the optics are wrong, but because it's wrong, because we don't do that, because eventually that's going to come around and touch our lives in some way. And if you don't set up the story that that is the culture of our community, of our society, and of our company, then it doesn't matter what you imagine your company to be. It won't be that. So for me, the power of storytelling, particularly embodied in role models, ideally near-peers that have stakes, that had consequences for their actions, those are the amazing stories we should be telling inside the tech industry and inside the political world. Maybe the greatest show of modern political courage in the United States was when John McCain rebuked one of his supporters and said, no, Barack Obama is a good American who loves this country. Did that one action cost him the presidency? Probably not, but it didn't help. And he did it anyway, because it was right and because it was true. Wow, where is that political courage today, on either side? From someone I did not agree with on policy issues, but boy, did I respect him. That sort of thing, built into a culture. How does that play out in the AI world? I mean, you could spin all sorts of stories, but some of my work looking at early childhood development is using AI behind the scenes, hidden away, to actually pull up real stories and connect people together, because we think there's some of that productive friction to be had. This person's resilient, and this person needs it. And it turns out, you pair those two people together for a meaningful amount of time, not days, not weeks, but months, years, and the other one becomes more resilient. Both of them grow. The reason we need AI is because it's combinatorics. It's like Legos. This person's got resilience. This person's got great communication skills, and they're complementary. We want them to each grow from the other. So we were doing this in educational contexts, building student cohorts so that everyone had something to learn. The AI was never directly involved in that very human experience. It was involved in creating that very human experience. So we call it the AI matchmaker. That's an example of somewhere AI can come into the story and be a part of a fundamentally human story of growth without intruding on it and taking away the human component. I want to, just for the sake of conversation, take a slightly cynical view of this whole story, which is that in this conversation around human behavior, human cognition, rewards, courage, all this good stuff, and getting people to do the right thing, there is this undercurrent of human fallibility: maybe we are even more fallible as people than we think we are, and more manipulable in these circumstances. And so, from your perspective, is there an argument to be made that, you know what, people are just not as good at this stuff as we think they are? There are just huge categories of decision making we should outsource to AI, because AI might be less fallible. It may be able to avoid that level of manipulation. Or is that wrong-headed?
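The "AI matchmaker" Ming describes is, at its core, a combinatorial pairing problem: score how much each person could gain from another person's complementary strengths, then pick pairs that maximize mutual growth. Here is a toy sketch; the trait names, scores, and greedy strategy are invented for illustration, since the real system's features and scoring are not public.

```python
from itertools import combinations

# Hypothetical trait profiles (0-1 scores), invented for illustration.
people = {
    "A": {"resilience": 0.9, "communication": 0.2},
    "B": {"resilience": 0.3, "communication": 0.8},
    "C": {"resilience": 0.8, "communication": 0.3},
    "D": {"resilience": 0.2, "communication": 0.9},
}

def mutual_gain(p, q):
    """Complementarity: each partner's strength covers the other's gap."""
    return sum(abs(p[t] - q[t]) for t in p)

# Greedy pairing: repeatedly take the most complementary remaining pair.
unmatched, pairs = set(people), []
while len(unmatched) > 1:
    best = max(
        combinations(unmatched, 2),
        key=lambda pq: mutual_gain(people[pq[0]], people[pq[1]]),
    )
    pairs.append(best)
    unmatched -= set(best)

print(pairs)  # pairs the resilient with the communicators
```

A production matcher would use optimal assignment rather than a greedy loop, but the design point survives the simplification: the AI only arranges the human experience, it never intrudes on it.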
Is it just as fallible as us, because it's made in our image, and should we be extra skeptical of it for that reason? Let's be clear, given everything I've said so far: I am a brutal AI realist. I wouldn't have been working in this space for nearly 30 years now if I didn't believe it can do good in the world. But turned loose in the wild, it often doesn't. AI diagnostics, for example, in medicine. Fairly regularly you see papers in which AI substantially outperforms, maybe not the best doctors in the world, but the actual real doctors that would be making decisions, or in reviewing contracts outperforms paralegals and junior lawyers doing these reviews. It would seem insane to not leverage that. Who wants to be the first person to die of a diagnosable cancer just to make certain doctors feel good about their jobs? But the flip side is also true; like everything, it's in tension, this dynamic, allostatic tension. A paper, I think it was PNAS, maybe I'm mistaken, looking at colonoscopies in Portugal found that doctors, very quickly after using AI assistant technologies to do their colonoscopies, were substantially worse at doing the diagnostic by themselves when you took the AI away. Their natural skills had degraded. Now, there's tension even there. Do they need it? Do they not? What's good and what's bad? My entry into the field was a very niche space, though not so niche anymore: neuroprosthetics. Think companies like Kernel and others; a guy named Musk has a company in this space that, for reasons I don't understand, is about a hundred times overvalued. But I obviously believe in what companies like that are trying to do, because that's where I went, to grad school, telling people I wanted to build cyborgs. Literally, that's the language I used. And they thought I was cuckoo for Cocoa Puffs, except there is this field, neuroprosthetics, and it was already well underway before I ever showed up. I'm not an engineer, so for me, as a sort of computational cognitive neuroscientist, I ended up studying mathematical models of how we process information to inform that work. But the fundamental refrain for me was always: never build something which the brain can do for itself. Build things that either replace, let's say, lost functionality from a stroke or damage, or challenge our existing fundamental functionality to be better. And I took that same perspective into my work in AI. How do we build tools that actively challenge us, to go back to the language I've been using throughout this whole interview? It is so easy, so lazy and shallow, to build a tool which is engaging and makes people's lives worse. As exhibit A, I give you the entire social media world. I give you most of the internet today. My test: not only should a technology make us better when we're using it, we should be better than where we started when we've turned it off again. Boy, a lot of our social media world fails test one. Do some people benefit from it? I got asked this by NPR once: well, if my kids are using social media, should I be concerned? My somewhat cynical answer was, well, give me some context. Are you, the parents, university educated, with professional upper-middle-class jobs? Then probably not. Probably the balance of time your child is spending online nets neutral, or maybe even positive. Without that, for the vast majority of people, it probably nets negative.
Now, that's a terrible proxy, but we actually looked at this, with data from an existing published paper done in Canada, in that data, unambiguously, I think, one of the best papers looking at the effects of social media on adolescents. It found adolescent girls had substantially higher mental health and academic penalties from their time on social media. That's the headline result. But then you dig into the data and you see these subgroups. One group of girls, when they got access to social media, didn't go on it. Sometimes we act like these technologies are inevitable, like they're a contagion. But even in a plague, there are people that never get it, who don't experience symptoms. So there is this subset of girls that never got on. That's worth understanding. Why? What is going on with them? But there's this smaller group, still statistically meaningful: they were on it just as much as their peers, and they looked great. Not only did they show none of the negative effects, they looked better than the average. So when we look at the metadata of how they were engaging, and feel free to generalize this to ChatGPT or your favorite AI interaction tool, what we saw is the vast majority of these young women spend all of their time shallow. Swipe, swipe, swipe, 200 milliseconds. Every picture glanced at for the shortest amount of time, liked, shared, whatever. It's all very fast. Nothing psychologically deep is happening. In this other small population of girls: swipe, swipe, swipe, the majority of time is shallow, we're imperfect human beings, but every now and then, they'd stop. And we could see in the metadata, they go look up something else on a related topic. Then they come back to TikTok or Instagram. Then they go look up something else on a related topic. Every now and then, they went deep. I'm not saying that it was the social media experience that produced the benefits in their lives; rather, it's kind of the other way around. They have these foundational skills that allow them to be meta-learners. They have learned how to learn. And this crosses the board of everything we've talked about so far: cognitive, emotional, social, metacognitive. They deploy these in their lives and actively seek. They're curious. They are engaged. It wasn't enough to see this video on TikTok. They wanted to know the context. They wanted to check whether it was real or not. They look great. So when I think about how technology affects people, you can never say, well, there's the average person, because they don't exist. How is AI going to change education or workforce or society? Heterogeneity dominates. Just like in my own research about the teams of people using AI to outpredict prediction markets, it was not whether they were using GPT-5 or Gemini 3. Actually, they could be using an open-source Llama model thrown together; it came down to the human capital and how the two engaged with one another. That was the dominant predictor of whether they would outperform the market. The AI was pretty secondary to that. People did better with better AI, but it was the human capital side of this. We have to be realists about that. Some people need more structure and support. Some people need free rein. Stop pretending there is one kind of person in the world, that everything should be built for this fictional, non-existent average person. If we can do away with needing the one rule to rule them all, then we can begin to engage with the reality that we're different.
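To make the shallow-versus-deep distinction concrete, here is a toy sketch of how engagement metadata of this kind might be classified. The event fields and thresholds are invented for illustration; they are not the measures used in the Canadian study.

```python
from dataclasses import dataclass

# Toy sketch: classify an engagement session as shallow or deep from
# event metadata. Field names and the 2-second / follow-up thresholds
# are assumptions for illustration, not the actual study's measures.

@dataclass
class Event:
    dwell_ms: int        # how long the item was viewed
    followed_up: bool    # did the user go look up a related topic?

def session_depth(events: list[Event]) -> str:
    """A session counts as 'deep' if the user ever lingers on an item
    and then goes off-platform to check its context, the pattern the
    meta-learners showed."""
    deep_moments = sum(1 for e in events if e.dwell_ms > 2000 and e.followed_up)
    return "deep" if deep_moments > 0 else "shallow"

# Mostly 200 ms swipes, but one pause with a follow-up lookup: "deep".
session = [Event(200, False)] * 50 + [Event(8000, True)]
print(session_depth(session))  # -> deep
```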
Amazing people who won the genetic lottery and had the good fortune of an astonishing household to grow up in, they are your odds-on bets to invent new things and change the world. Let's give them the things they need to do so. Let's discover the diamonds in the rough who can do the same thing, but without those benefits. And let's also look at everyone else and realize: if you could discover that 1% of diamonds in the rough, maybe you could also lift the rest of the planet by 1% and get the exact same benefit. What would it mean to be able to boost people's conscientiousness by a meaningful, population-wide amount? Now I'm sort of dreaming and free-flowing, being a science-fiction writer, because I don't think we actually have a good sense of what it would mean to be able to do that. But if you want to dream about that, then paradoxically you have to be a realist and think: well, that means people are different. They need different things. How do I build tools that give people what they need, when they need it, and never give them what they want just because it's the shallow, easy thing?

That's where my mind went, and what I wanted to ask Vivian. Because you kind of answered that question with a "you" in mind, and I interpreted it as a capital letter on that "you," because it's societal and it's individual. There's the responsibility all of us have to do that, and there's certainly a responsibility that leaders and those in positions of power have. But I wanted to come back to something you alluded to earlier, which is how to robot-proof your company. When we talk about how to robot-proof your company, are the steps you just shared also the answer to that question? Or how would you answer it: how do you robot-proof your company?

Yeah, here are a couple of suggestions, and again, I write about this a bit in the book. Let's look societal first, and I'll come back to company, and family for that matter; I'm kind of all in. One: I think companies should engage in data and algorithm audits. Financial auditing was an industry-led initiative when it first emerged in the world; you just couldn't get people to invest in your company if they didn't know what was in the books. It was invented back in Vanderbilt's day and became a standard. Now it's law, but initially it was just rational behavior by companies. The same rational behavior should get you to be transparent. It doesn't mean you disclose what your algorithm is or the unique data you hold; of course you shouldn't. But have people come in and testify: the algorithm does what it claims; the data is being held in the ways you say it is. That, to me, is just rational. Unfortunately, we live in a consumer-driven world in which individual consumers aren't so rational about how they choose the products they use, but I think investors should be more rational, because the long-term economic consequences of some of this are actually quite negative if we're not thoughtful about it. It'll eat up all your alpha. So that's companies.
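As a concrete sketch of one check an algorithm audit might run, here is a toy demographic-parity test on approval rates. The metric choice and the 0.1 review threshold are illustrative assumptions, not an audit standard from the interview.

```python
# Toy audit check: does the approval rate differ materially across groups?
# Computes a demographic-parity gap; the 0.1 flag threshold is an
# illustrative assumption, not a regulatory standard.
from collections import defaultdict

def parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs. Returns the spread
    between the highest and lowest group approval rates, plus the rates."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # bool counts as 0/1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

decisions = ([("A", True)] * 80 + [("A", False)] * 20 +
             [("B", True)] * 55 + [("B", False)] * 45)
gap, rates = parity_gap(decisions)
print(rates)                          # {'A': 0.8, 'B': 0.55}
print("flag for review:", gap > 0.1)  # True: a 25-point gap
```

An external auditor could run checks like this against the live system and attest to the results without the company ever disclosing the algorithm or the raw data, which is exactly the financial-audit analogy.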
I am also a believer in the value of good regulation. One thing I do not believe is that legislatures and politicians should do direct regulation; how could they possibly understand this stuff? They should empower strong institutions, institutions that love the technology but see, like I do, its strengths and weaknesses, and those institutions should come in and help with carrots and sticks, so that companies make good decisions and play on level playing fields. Right now, as we pull all regulation out of the system, we're ending up in a kind of prisoner's-dilemma world where everyone has to make the trashiest, most disruptive product, disruptive in a negative sense, because if you don't, your competitor will. So I have to be completely short-term in how I build my market space, because I know everyone else is going to be completely short-term as well, and right now, with these growth curves, I'm left behind if I don't. Regulation helps to normalize that. If you're a nerd: regulation adds some momentum to the gradients, so you can search a little less greedily through your possibility spaces.

Another big initiative I engage in through my nonprofit is data trusts. Individual consumers will never be able to do this stuff with any degree of sophistication. But bear with a metaphor: the same way you might put your money together in a credit union rather than a traditional bank and invest in a community together, what if you put your data together in a nonprofit whose sole fiduciary responsibility is to you, and let it go out and collectively negotiate its relationship with data aggregators and the surveillance economy, so it can serve your interests? Right now, there isn't a large-scale data trust like that out in the world. And consumers are naive and willingly shallow about this engagement, and they will remain so without support, so we're in for a bad near term on all of this.

What do you do inside your company, or even inside your family? One: be brutally honest with yourself. Where are your employees with this? Some of them might just need the bleeding-edge, all-guardrails-gone AI to run crazy with and ideate with. I would be very frustrated if I couldn't get straight answers out of Gemini. But as I told you, there's this astonishing, seemingly paradoxical research showing that for most students, and I'm going to argue most employees as well, AIs that never give you answers, that only give you context, actually do better for long-term growth and learning. When you look at students, pre-test and post-test, after they study for a semester with an AI that will just do anything for them, they learn nothing. Even with an AI that won't initially give answers, that makes the student engage first but eventually gives the answer: pre-test, post-test, they learn nothing. There are negative effects to using those AI tutors. Crazily, an AI that never gives answers is the only one that beats no AI whatsoever.

So every year I give this lecture at UC Berkeley where I share this story, first as a prediction and, years later, as an empirical reality: GPS and automated navigation will causally increase cognitive decline, because we know that navigating through space is prophylactic against cognitive decline, and now humans don't have to do it anymore. And I challenge the students, this is an engineering entrepreneurship course: how would you redesign Google Maps (or pick your own project if you want, but this is my default challenge) such that I'm not only better when I'm using it, I'm better than where I started when I reach my destination? I get some amazing ideas, but they all basically boil down to this: it doesn't give you the answers. It only gives you what you need, when you need it.
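As one hedged sketch of what that "no answers" redesign could look like in code: a navigator that stays silent while you find your own way and escalates only when you drift off route, first with a prompt, then with a minimal correction. Every name and type here is hypothetical, not a real mapping API.

```python
# Hypothetical "coach, don't chauffeur" navigator: silence while you're on
# route; a prompt (not an answer) on the first miss; a minimal correction
# only after that. Nothing here is a real mapping API.
from dataclasses import dataclass

@dataclass
class Leg:
    street: str  # the road you should be on during this leg

def coach(planned, current_street, misses):
    """Return feedback only when needed; never the full route."""
    if any(leg.street == current_street for leg in planned):
        return None  # on route: stay silent, the driver is doing the work
    if misses == 0:
        return "Off route. Recall the map you studied: which way were you headed?"
    return f"Off route. Work your way back toward {planned[0].street}."

route = [Leg("Shattuck Ave"), Leg("University Ave"), Leg("6th St")]
print(coach(route, "University Ave", 0))  # None: no help needed
print(coach(route, "Telegraph Ave", 0))   # a nudge, not an answer
print(coach(route, "Telegraph Ave", 1))   # minimal answer after a second miss
```

The design choice mirrors the test above: the tool makes you do the spatial reasoning while you use it, so you are better off when you switch it off.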
I actually have a favorite version of this; I always pull this one out at the end, because sometimes technology isn't the answer, in a sense. Here's what I do when I'm in London, New York, LA, towns I know well but not perfectly, or for that matter even going across town in Berkeley, because who knows what the traffic's like. I spin up Google, I check the map, and then I think: what do I know about this problem, uniquely me, that isn't likely to show up on Google? And I try to take a different route and beat it there without cheating. I don't get to speed, I don't get to run stop signs. Did it tell me to make an unprotected left turn, or, if you're a Brit, a right turn, because you're doing everything wrong? Is there somewhere I know will be terrible today because of the nature of the traffic, there's a football game, it'll be horrible? I know this; it doesn't. I'm actively using my brain.

That takes me to the final, let's call it, rule of thumb. If it's not hard, you're probably not doing it right. If you're not thinking about it, you're not going deep. If you're not going deep, you're not learning. If AI is going to boost our productivity, and that's a good thing, let's invest that productivity gain in ourselves, not in just doing more shallow stuff.

So part of it is cultural. Is this effortful? Am I rewarding people for the productively wrong answer? Am I encouraging people to disagree with their boss, productively? Am I sharing the stories of courageous decision-making inside my organization? And then, be brutally honest. Research I did during COVID on remote work was so clear: the majority of employees, around 80%, depending on the organization, again, very different across organizations, needed extra management support. They were so used to the regular process of going into the office, following a schedule, exiting. They were terrible at managing that when it was all gone, when they weren't getting that structure for free. So those people needed extra support, extra guidelines. They needed the freedom to be off on their lunch hour, to not have to answer emails at two in the morning. But interestingly, the other 20% needed the exact opposite. They were actually hyper-productive during COVID, because finally no one was holding them back. They got to wake up at two in the morning because they had a cool idea and work on it. They got to ignore emails because they felt empowered to do the thing they thought was right. And then people started to manage them again, and it was like, you know, 1700s US: what the hell is going on here? Tea Party, and no, the original one. So they rebelled. If you could give people what they needed, they flourished.

So are you willing to put in the political capital within your company to have, essentially, differentiated management? To say: this person, we're going to give them the unfettered AI; but you, honestly, you need some more constraints. You need the AI that isn't just going to write your marketing copy so you're done, the one that's fine-tuned to give you critical rather than sycophantic feedback. These are decisions that organizations can make. I'm not going to pretend they're easy. In fact, I think that's the real story here: the best decisions will be costly in the near term and pay off in the medium and long term. If you're not willing to pay those near-term costs, which are usually about time, then don't take my advice.
Automate the hell out of everything, leverage a lot of chat functionality, and then eventually realize that no one wants to buy your marketing-slop product, or that no one you employ actually knows how to do anything, and wish you'd made a different decision. Or pay the costly prices now. Because, guess what, the giant companies that are actually building these tools, that's what they are doing, in ways I sometimes admire and sometimes don't. Many of them are really brutal about maintaining company culture, a sort of siege-mentality sense that no one who isn't special is allowed to be here. But at least I appreciate what they're trying to do, and I think on some level they're right. Preserving that sense of a special culture and a special employee, I get it. I just think we could bring it to so many more people if you're willing to actually invest in creating, I'm going to call it engineering, environments for success, and be willing to treat different people differently in a productive way.

Well, that's a theme I hear us coming back to again and again, and "heterogeneous" is a word you used earlier. It sounds like a key theme here, one that works with AI but is not in any way limited to AI, is just treating people as people: what works for them, getting away from the sense that there's a monoculture, that this is what good looks like and everybody has to follow it, and asking instead what individuals need and how we can have everybody flourish.

The only thing I want to be cautious about is that it's so easy to slip into the language of personalization, which, as a generic idea, sure, that is what we're talking about. Except the way that ends up playing out in the real world is this: we tell Gallup that we would absolutely never vote for a politician who supports violence or denigrates their opponents, and then those same politicians tweet out violent language and share memes that portray their enemies as horrible people, and we like it and we upvote it.
We bought Facebook; we are invested in these things. It is the dynamics between the algorithm, the elites, and the consumers in social media that gave us the social media world of today. So let's be clear: the hard decision here about heterogeneity isn't just giving people what they want, but what they need, and that is profoundly morally complicated. But I want to use that language so that we're owning what we're talking about, and so we're respectful of how easy it would be to be paternalistic about it all. The flip side: a recent paper showed that when those same people, these students, had Facebook taken away from them for a semester, not only did their academics and mental health generally improve, but afterwards they were happy with the policy, which they had hated at first. We often use willingness-to-pay measures as a way to value intangible goods: what would you pay for Facebook? Erik Brynjolfsson has some papers out about that, and I really like his work. Except this is where that wave function shows up again, where we're all these different people at the same time: we're the person who wouldn't pay a dime, you'd have to pay me to use Facebook, and we're the person who would pay twice as much to be able to use it. It all depends on the history that brought us to that moment. You, as a business leader: how do you create that history for your employees, such that they are the best version of themselves on the job, and, frankly, such that they're challenging you to be that same person?

I think that's extremely well said. I was going to go to some sort of wrap-up question, but I actually like that note so much, why don't we leave it on that? Vivian, this has been extremely interesting and extremely informative. I've really enjoyed every minute of our conversation, so thanks so much for coming on the program today.

It was a pleasure.

If you work in IT, InfoTech Research Group is a name you need to know. No matter what your needs are, InfoTech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. InfoTech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.