Expert Intelligence with Paul Estes

Connecting Mission to Action When AI Changes Everything with Diana Wu David

31 min
Jan 13, 2026
Summary

Diana Wu David, a top 10 global futurist, discusses how AI is fundamentally changing work and organizational management. She argues that the real risk isn't moving too fast with AI but moving too slow, and that companies must redesign work around outcomes rather than simply layering AI tools on top of existing processes. The conversation explores how trust, mission clarity, and speed are critical for organizations and individuals navigating AI adoption.

Insights
  • Large enterprises struggle to realize AI ROI because they lack the organizational agility to redesign workflows at scale, while smaller companies and individuals can adapt faster and see returns more quickly
  • The critical gap in AI adoption is not technology capability but organizational willingness to fundamentally rethink work design and outcomes rather than simply adding AI tools on top of existing processes
  • Trust in institutions is declining (95% of workers see AI value but don't trust organizations to ensure positive outcomes), which drives secret AI adoption and hidden productivity gains among employees
  • Managers' roles are evolving from traditional P&L management to orchestrating synchronization between human teams and AI agents, making orchestration skills increasingly valuable
  • Individuals and organizations that ground decisions in clear mission and values can navigate AI adoption more effectively than those making reactive, fear-driven technology choices
Trends
  • Secret AI adoption: employees using AI tools outside official channels due to lack of organizational trust and fear of replacement
  • AI adoption faster in remote work and smaller organizations than in large enterprises with legacy systems and processes
  • Generative AI expected to integrate into the economy in 3 years vs. 10 years for previous general-purpose technologies
  • Manager role evolution toward AI agent orchestration and outcome-based leadership rather than headcount-based empire building
  • Early-career employees increasingly managing AI agent teams, blurring lines between individual contributor and management roles
  • Work redesign becoming the critical success factor for enterprise AI ROI, not technology implementation alone
  • Shift from input-based metrics (meetings, hours) to outcome-based performance measurement driving organizational change
  • Increased urgency and busyness among technology workers despite AI productivity promises, due to the capability absorption gap
  • Job crafting and work redesign happening in partnership with employees at forward-thinking companies
  • Declining trust in institutions affecting how employees engage with organizational AI initiatives and change management
Companies
IBM
Criticized for explicitly announcing workforce replacement through AI rather than upleveling employees
Salesforce
Mentioned as having a similar narrative to IBM regarding AI-driven workforce replacement
Shopify
Highlighted as example of company communicating clear principles for aggressive AI adoption with employee involvement
Duolingo
Cited as company openly encouraging employees to use AI technology with clear organizational principles
McKinsey
Referenced for research showing ROI from AI comes from work redesign, not just technology implementation
Sequoia
Cited for research on AI ROI and work redesign requirements for enterprise value creation
Microsoft
Mentioned as company where employees are expected to use AI and report being busier than ever
Google
Referenced as AI-first company where employees using AI report increased workload and busyness
Accenture
Conducted workforce survey showing 95% of workers see AI value but don't trust organizations on outcomes
Stanford
Nicholas Bloom's research on AI adoption patterns in remote work and enterprise vs. individual usage
People
Diana Wu David
Top 10 global futurist and author discussing AI's impact on work, organizational change, and future of management
Paul Estes
Podcast host and interviewer exploring AI adoption, management evolution, and organizational change with Diana
Nicholas Bloom
Stanford researcher whose work on AI adoption shows faster integration timeline and usage patterns by organization size
David Brooks
Author referenced for quote about data-informed storytelling in The Social Animal
John Maynard Keynes
Economist referenced for prediction about leisure class and work reduction that hasn't materialized
Quotes
"The real risk isn't moving too fast with AI, it's moving too slow."
Diana Wu David, early in episode
"What are you optimizing for? Not what your boss wants, not what the board expects. What outcome actually matters?"
Paul Estes, closing segment
"95% of workers surveyed saw value in working with generative AI but they didn't trust organizations to ensure positive outcomes for everyone, i.e. them."
Diana Wu David, mid-episode
"Suddenly a manager is not just managing the P&L and the project and the people. Their status may come from how much compute they're allocated, how many agents they manage."
Diana Wu David, mid-episode
"You can't redesign the way you work if you don't know what the work is actually for."
Paul Estes, closing segment
Full Transcript
Suddenly a manager is not just managing the P&L and the project and the people. Their status may come from how much compute they're allocated, how many agents they manage, how well they do at managing the synchronization between the humans they manage and the agents they manage for the ultimate outcome. And I would say that because there's so much happening with agents doing some of the work, that the premium on orchestrating all of that has gone up. We're all trying to figure out whether AI is a tool we control or a force that will control us. We're moving fast, breaking things, and wondering if we're building a future or just automating ourselves to irrelevance. Today, we're talking to someone who's been studying what's next for years. Diana Wu David is a top 10 global futurist who's an author and advises Fortune 500 companies on what's next. Diana has a provocative take. The real risk isn't moving too fast with AI, it's moving too slow. She believes the interface between humans and work is fundamentally changing, and the organizations that will have a chance won't be the most cautious ones; they'll be the ones that learn to recalibrate at speed. Today we're digging into what it means when AI becomes the UI of work and how trust might need to move faster than your risk and compliance department is comfortable with. Diana, welcome to the show. Thank you so much, Paul. And that is such a great way to talk about my current obsession around speed and trust in AI that you've just written the foreword to my book. Thank you. Now I just have to write the book. Well, speaking of books, I want to go back to 2019 when you published Future Proof. I think it's more relevant today than it probably was in 2019 in a lot of different ways. One of the themes of the book, or maybe the inciting incident, as you'd say in the book, is when you had a friend that committed suicide and you started to realize, like, hey, something was broken or there's something different. Take me to that realization or what you learned in that moment. And then we can get into AI and the future. It felt like a very real moment. Well, maybe this is indicative of the conversation around speed as well, because there is an inciting incident of somebody who's been working heads down, postponing the possibilities of doing things differently. And while I'd love to say that my friend's suicide was this crazy moment that changed my whole life, it was actually a couple of years before I changed my behavior. And I feel like that is something that we do. I mean, right now, when you're talking about Future Proof the book and how it's just coming to fruition, we're talking about AI. Even with my friend's death, I really thought, okay, that's terrible. I'm just going to put my head down and keep working. And it really was a couple of years later where it shifted. And I thought, you know, there's really more to life than putting your head down and just trying to grind through it. And really then it was that and a couple of other things where I thought, what am I optimizing for? Or what am I really focused on? And it was, you know, I remember even more viscerally sitting with the budget spreadsheet for the next year thinking, if I spent this much time planning my own life and my family and my relationships and all the things that presumably on my deathbed will matter, I would probably have the best life ever. And instead, I'm sitting here saying, these are the goals I'm going to hit. And, you know, we always overachieve on our goals, etc. 
So it was a bit of that recalibration that was a wake-up call to me to just look at things differently, which is something that I continue to do. It's interesting. I had the same sort of experience. I was sitting in an all-hands meeting. And it was my third all-hands meeting in two days, because we had the division all-hands meeting. And then we had the VP all-hands meeting. And then we had the GM all-hands meeting. And I was like, it's like that famous quote, like, is this as good as it gets? Well, David Brooks has a great quote in The Social Animal, which is data-informed storytelling, where somebody goes to Davos and gets from the fringe to the inner, inner, inner, inner, and they finally go to the very inner circle and they go, huh, this is the most boring part. I had the same experience when I was an executive chief of staff and everybody just assumed that the higher you get, the more genius the strategies, and things like, if I could only get in that room, I would understand and it would be enlightening. It all makes sense. You get in the room and it's even more confusing than you thought it was before. I want to start talking a little bit about AI because that's your current focus. There's a trend about AI where people are hiding it. People are using AI, and you wrote about, like, secret cyborgs. People are using AI and not really telling you about it. It's kind of like Ozempic, what's going on. It's like, oh, you look great. Oh, yeah, I've been exercising. They bundle them now, actually, Ozempic and AI. But there are these new technologies that are coming out that transform people, whether it's physically or with productivity or intellect, and people are afraid to talk about them. What are you seeing with that trend, and what would you tell people that are using AI and experimenting with it, and they're finding it to be beneficial? And they're hiding it, those people? I was just today, in fact, talking about Nicholas Bloom's Stanford research. He just published about a couple of trends. One is that, just as you've said, they've looked at it and people are using AI more outside of the office, and he focuses on remote work, so he's sort of saying they will use it more at home and in remote work than they do in the office. That people who are not in large enterprises are using it more, and also that the estimation based on the research is that whereas most general-purpose technologies like the steam engine are going to take 10 years to kind of get fully baked into the economy, they expect this might be three years. And generally speaking, he's a researcher that goes, well, these are just the facts, right? He's not trying to sell AI into business. So I thought that was interesting for a couple of reasons. One is that in order for larger companies to get value out of AI, most of the research from McKinsey and others and Sequoia posits that the return on investment comes from work redesign. And so I think that the problem is scale and size. Like in order to redesign a workflow at a company with tens of thousands of people, there are multiple people involved. And you have to really think, you know, how do we redesign this work completely differently? Do we want to have three all-hands meetings in two days? And if we don't, then we have to change it. And then there's like 100 people that have to go in different directions. And so it's taking a lot of time for large enterprises to get that return from redesigning workflows. 
Whereas for a smaller company or even an individual, you can get that progress quickly. And if you're a team of five, then you can use your kind of interpersonal skills while you do everything that you need to in a startup, say, or a small business to plug those gaps. And so you can almost realize return, I think, more easily. So while the big enterprises are doing this big, massive redesign that's taking a long time, a lot of people are thinking, oh my gosh, I can't wait that long. I need to get rolling now because this is all moving so fast. If I don't hop on now, there's no way that I can hop on. So I think individuals are using it for coding or marketing or whatever. And in my personal experience, it's quite uneven in terms of people saying it. So you still get the wow factor from not telling anybody you're using AI, but all of a sudden saying, well, I just built this website this morning that talks about, like my colleague, that details the quantum dashboard that one might have, you know, if we went into this project. And you have people who are AI savvy going, wow, you know, I didn't know that you could do that. Or some people were like, I only thought I could do that. I didn't know that anybody could do that. One of our engineers said, like, I mean, now you're coding with Claude. Like they're kind of like, what's next? When you talk about change, let's talk about, like, personal and organizational change. A lot of my work was done helping people understand change when it came to freelancers. There's a lot of analogies that you can draw between engaging with a freelancer and also AI. The first is around replacement. The thing that I noticed the most is that when I found there was somebody who could do something better than me, I got excited, because my mind went to, oh, well, I can do, you know, kind of that trope of, oh, I'll go do more valuable things. Well, I'll go and do more of those things. It now allows me some time to go in and focus and make those things better, like improve the quality of the things I was doing versus the rote task. When you think of change management, especially at big companies with all of the dynamic changes and nobody feeling safe, how do you get people to understand or get a mindset that unlearning and relearning and really becoming curious is the only way out? Well, I think that there are a couple of different issues. One of them is trust, and there's trust in people and there's trust in institutions, and that has been declining. So last year Accenture did a workforce and worker survey, and they saw that 95% of workers surveyed saw value in working with generative AI, but they didn't trust organizations to ensure positive outcomes for everyone, i.e. them. So to me, that is one of the statistics that's defining the way that we are grappling with AI adoption, secret or otherwise. You know, people are saying, why should I tell you that I've, you know, automated a certain piece of my job, because I don't trust you to allow me to do four hours of more valuable work. I think maybe you'll replace me at the end of the day. That's what we're saying. I mean, there's another, a whole other stream of, you know, the tech going faster than an organization's or human's ability to catch up or understand how to use it, which I do think is another part of the capability absorption gap. But I think trust is huge. 
And the companies that tend to do well are the ones that really lean into the idea of communicating clearly what's going to change, what their thought process is around AI, even if it's initial and may change, communicating their intentions. You have people who are saying, my intention is to get rid of everybody. I mean, those people, either they're going to hide AI or they're going to be the ones that are, like, at the very forefront of it and in the institution. You know, that just bifurcates people in terms of how they will show up at work, I think. I mean, there's not particular research, but those companies who are saying, you know, this is something we want to do to up-level everyone, to really lean into those skills, and to redesign the work in partnership with our employees. And that's another trend that, you know, is happening where there's much more job crafting within an organization. We've seen some people do it, I would argue, wrong, like IBM, who literally came out and said, hey, we're replacing people. And Salesforce, I think, sort of has the same narrative. And then others that have come out, Shopify, Duolingo, and said, hey, these are kind of the principles by which we're going to be aggressive with this technology and how we expect everyone to openly go in and start using this technology. Who's getting it right right now? Oh, gosh. I think a lot of the ones that have the easiest time are some of the AI-native startups that are going in without any legacy and saying, during COVID, I used to say, if we invented work right now, what would it look like? And to a certain extent, they are having the opportunity to do that, to sort of come together and say, okay, what are we going to do? We don't have any legacy software. We don't have any legacy processes. We're going to start from the outcome, and we're going to figure out what we want, and work back in terms of activities, in terms of technology to support those activities, all in service of outcomes. And I think that that's the difficulty with bigger companies, because they have a lot of installed users and installed processes that are not necessarily fit for the outcomes they want. They also have a lot of installed managers. I want to talk about the middle of companies. Okay. One of the things when you're working in corporate America is that you try to climb the proverbial ladder. And right when you climb that ladder, you get a bigger title and maybe you get some more money and you get more people that you're managing and leading. How has being a manager, that concept, changed over the past couple of years? And even as you look in your future crystal ball, like if I'm a manager listening right now saying, hey, I used to manage a team. Now I feel like I'm managing technology and strategy and all of these things that are maybe new or maybe even more acute. And at the same time, I don't feel safe, because every time I open my algorithmic news machine, it tells me that managers are being laid off at, pick a company. What does being a manager mean these days? It's a really interesting question, because it's, I would say it's of a piece with the, you know, the sort of broken pipeline around early-in-career hires. And before any of this happened, I remember being in a meeting with the sort of next-gen leaders and these consulting partners at one of the big consulting companies and a couple of middle managers, and they really were squeezed. 
You know, they were the big-idea partners who were saying, oh, we've got to do this new initiative, and we've planned the whole thing without any input from, you know, from the people it's going to affect. And I think it was around remote work, if I remember correctly. And then the next-gen folks were like, actually, that's not what we want. And the middle manager people were like, we're trying to get to be partner, but we have to deal with all these people with big ideas and a bunch of people who don't agree with them, and, like, somehow bridge that and also get all the work done, make sure all the work gets done. And so it has always been thus, but I think the interesting trend now is, you know, headcount. There's always been that trend towards empire building, and you can even see it in sales, where you're categorizing your clients in terms of how many headcount they have, in terms of how much time you want to spend on them. I think that that is an interesting change whereby people, if you're thinking about outcomes, can have the same outcome with less headcount. You know, it's a trend that didn't happen with AI. It's been a trend over time. So suddenly a manager is not just managing the P&L and the, you know, project and the people, but they may be, their status may come from how much compute they're allocated, how many agents they manage, how well they do at managing the synchronization between the humans they manage and the agents they manage for the ultimate outcome. And I would say that because there's so much happening with agents doing some of the work, that the premium on orchestrating all of that, which middle managers have always done, has gone up. It's more valuable. Now, the kind of crimp in that is that there are increasingly conversations around how early-in-career hires will become more like middle managers. So they'll be able to not just do all the grunt work but actually manage a team of agents to do the grunt work, and they will start acting more like middle managers. We talk a lot about reskilling right now, that my job is not only kind of working with other humans, it's managing technology. You're almost becoming an IT manager in a lot of ways, like the unsexy part of creating agents and managing agents. While it sounds super interesting, down in the weeds it's a really technical thing to manage, and then add in humans and organizations and all of that. And the promise is that we're all going to save time. But if I talk to anyone, I have a lot of friends that work in AI-first startups and even at companies like Microsoft and Google where you're expected to start using AI, and they're busier than they've ever been. They look back five years in their career and they've never been busier, because everyone using AI is just creating more. Sometimes valuable, many times not. Like the amount of slop that they have to muddle through to get basic things done now, the noise and the signal is a real issue. How does the ability for us, for everyone, to be able to create more, work with the idea that this thing, this new technology, is supposed to save us time? Well, the idea behind leisure, I mean, the economist Keynes famously said that we would be far better off and part of the leisure class by now. And that was, what, 100 years ago, more? I'm still waiting for that. Yeah, I think that, you know, to a certain extent, we're gluttons for punishment in terms of wanting always to feel like we matter. 
And I think that there is this element, you know, when you think about work and the kind of identity it gives us and status and income and dignity, that that plays into it. We're not just working to get something done. And anybody who says that, you know, has never been in a company and played corporate politics before. I do think right now, the reason people, particularly in technology, are so busy is there's a sense of urgency that it's coming and that the sort of benefits will accrue to early adopters. So we have this window of opportunity and we need to go forward. I also think that because you can hypothetically be more productive with AI, the basic relative value of your work to your leisure has changed. So suddenly you taking an hour off of work is more expensive, because you could be doing more with that hour with AI. So we're kind of in that part of the, I would like to say, journey towards a time when we spend more time on leisure. And part of it is just a shift from inputs, you know, we have this many meetings and therefore we've done our job, to outcomes. And the interesting thing about that is it's a managerial shift. It is a shift partly informed by the technology. You have AI that is doing intent-based, outcome-based, goal-based processing. We were talking the other day about frameworks, and especially the Eisenhower decision matrix, which I think is relevant given everything that's going on, because I think people are trying to find simple ways to make sense of this. How do you think of that decision matrix in just helping people navigate where to focus? The biggest question that I ever get asked is how do you keep up with information that is relevant to your job? In particular, because as a futurist, you like to get the broadest possible amount of information to make interesting connections that nobody else sees. But I do use frameworks less around decision making and more on information, and then we can move to decisions. And that is really related to thinking about, what is it that I'm working on now that I want the information for? So often it would be around a client, who would be looking into Web3 sports and entertainment, for instance, so for that time I could just stop thinking about driverless cars. Maybe they were not interesting. I knew that there were big, you know, leaps and bounds in batteries, but we would just focus on that for a little while. And I would try to narrow down all of my information, or not necessarily narrow down, but think, how does it relate to that one aspect, or do themes, even in my own learning, about, like, maybe this year's AI, but then also, you know, sustainability and other key themes where the information is going a bit faster or the trends are going faster. So that's what I use for the information filtering. In terms of the decision making, it's really difficult, because as the speed of things has increased, or perhaps our perception of the speed, depending on what industry you're in, it feels like the decision making has to increase. So all the things you're producing are here, and then you have the frontier here, and in the past you could just kind of go along producing your widgets and thinking, maybe I'll check in on the frontier. All of a sudden that's, like, compressing, and the things that happen on the frontier are becoming part of the product cycle or the services cycle so much faster that all of a sudden you do have to potentially make decisions at speed. 
For most companies, I think getting a fundamental idea of why they're in business for the customer is a grounding aspect. And for people, it's like why they do what they do. What are their values? What do they want to achieve? It may sound quite simple, but going back to that and saying, okay, I want to make work better for people. I mean, that's sort of my goofy mission, to make work better for people, to have a better future for the most people possible. And I find that that's helpful in decision-making, beyond also I have to contribute to revenue, et cetera, et cetera. You say it's simple. I think the hardest thing for organizations as well as people to define is the why. Why you show up or why the company exists. There are a number of companies whose mission and value statements are so far from how people show up that there's that dissonance of, like, why am I doing this again? I can't relate to, like, why am I missing my child's soccer thing or their birthday? Because, oh wait, it doesn't connect. And even in their own careers, I think people struggle to make decisions because they don't have a grounding North Star or story they tell themselves that helps sort of ground that decision. Yeah, I think that that is true. And, you know, culture is such a huge aspect of companies. One of the things that I found after writing Future Proof, which was all about careers and individuals grappling with acceleration, I did a program for people, and it does have people ground in their values to think about, then, what's the story we're telling ourselves about why we're doing this and how does that inform how we go forward. And the interesting thing I found is that a lot of the people who were executives in that course then said, oh, this is amazing. I used it in my company to create a kind of common why. And back to the middle manager, I used to think when I was a middle manager that that was basically my job, like connect the mission of the company to everybody's individual mission so they know why they're doing it and so that they can optimize and make decisions kind of at the front line. We can move fast. We're three years or so into the generative AI world. It's not been that long. It's a very short history, as you've written. What do you think the thing that most people are underestimating is? What's the thing that's not getting in the headlines or on the algorithms that, when you look into the future as a futurist, is the biggest risk to maybe a person or organization? Certainly, I believe that AI is going to disrupt work and it's going to disrupt the way that work is designed in companies. So perhaps the biggest risk is that companies don't invest the time to redesign their work around either the outcomes they want now or the outcomes that their customers are going to come to expect from them in the future. So one of my friends calls this the fire, fear, and fury. It was so good, I was writing it down as she said it. The fire of, like, the urgency of, my god, we have to do this, why aren't we doing this, I'm so angry. The fear, like, oh my gosh, we're gonna get swept away, we have to get on this. And the fury, I think the fury was maybe the aspect of, like, we need to do this, why aren't we doing this. But all of them really relating to, (a) it's gonna take some time, (b) we've got to get started now, and we're going to have to invest in not just putting a technology layer on top of all of it, but in fundamentally rethinking the way we work. And I think that there are a lot of senior executives that are not thinking about that. 
And the flip side is that right now the technology people that I've spoken to in the last couple of years are often thinking that this is going to be completely game-changing, but they're taking this tools-first approach, not a systems-level approach, and certainly not a people approach, right? They're thinking, what technology can I invest into this company to transform that? But they're not thinking about who are the right people to create the right outcomes in combination with technology. And those are the two things that I see, that I worry about, that kind of keep me up at night. And it'll have a knock-on effect on individuals at the companies that don't get that right. The last question: you're a parent. I'm a parent. We're all trying to raise kids. We know that with the phone and social media we kind of got that one a little wrong in some ways, not only for us, but for the kids as well. If there's another parent out there listening, what advice do you give them as far as the skills they need? And I say future, five years, 10 years. How do you think about that, knowing what you know? I think that my kids being able to go back to that why is important. So being able to understand why they're making decisions and kind of unpacking that, particularly with AI, when to a certain extent, you know, when I grew up, my mother was always saying, go look in the dictionary. Don't ask me how to spell that word. And to a certain extent, my kids are asking me questions I don't know the answer to. And I'm like, go ask ChatGPT or whatever LLM. I don't know if you should go into asset management or investment banking. So I think that they need to have that thought process for themselves. And if they actually sadly listen to their mother and just go ask an LLM, then they're just going to follow the same path as everyone else. And they'll have no competitive advantage. So knowing their why and investing that time to kind of ground themselves and understand their own decision making, which in the school my children went to is a part of the process, a reflection on their own work within the semester, what they thought they did well, why they did it, how they could do better, which I think is amazing. And then people. I think being able to speak to people, to understand what the problems are, because on the execution layer, there's going to be a lot that can be done via robotics, via AI, all kinds of technology. But identifying the outcomes, identifying the priorities, identifying the wicked problems to solve, in collaboration with other people, to me, is going to be a lifelong asset. It was interesting, as you were saying that, a lot of it resonates with me. I'm kind of on a journey, as many people are, of restating my why and trying to understand, like, hey, what's next? And I think that's going to happen more and more and more for people, because it'll change. And it'll change maybe every couple of years, maybe every year for a lot of people. Diana, thank you for taking the time and sharing your thoughts. I wish you the best on your next book and the work that you're doing around AI as a work interface and the other forward-looking insights that you have. Thank you so much, Paul, and I look forward to continuing the conversation. Diana said something that can help us all find the signal in the noise. Most companies and most people are investing in AI tools without understanding why they're doing what they're doing. 
You can't redesign the way you work if you don't know what the work is actually for. So here's your assignment. Before you sign up for another AI tool or attend another meeting about AI strategy, answer this. What are you optimizing for? Not what your boss wants, not what the board expects. What outcome actually matters? For your company, for your career, for your life. Because the gap between those who figure out their why and those who don't is widening every single day. Don't just consume this episode, use it. I'm Paul Estes. This is Expert Intelligence. Subscribe so you don't miss another conversation and connect with me on LinkedIn. Until next time, stay curious.