The Nature Of with Willow Defebaugh

Baratunde Thurston on Natural vs. Artificial Intelligence

31 min
Jan 6, 2026
Summary

Baratunde Thurston discusses the shift from viewing AI as a tool to seeing it as a teammate and emerging population, while exploring parallels between humanity's historical separation from nature and our relationship with artificial intelligence. The conversation examines how we can use AI responsibly for climate solutions, creativity, and social good while maintaining human agency and addressing the technology's environmental impact.

Insights
  • AI adoption mirrors historical patterns of human dominance—we deny intelligence to nature while over-ascribing it to machines, both rooted in centuries-old hierarchical worldviews
  • AI's actual energy footprint (2-3% of global usage) is often misrepresented; the real question is net impact when considering climate benefits from AI-optimized systems like renewable energy charging
  • Creatives can maintain artistic integrity by using AI for editing, research assistance, and adjacent tasks while protecting core creative thinking and acknowledging training data sources
  • The shift from 'AI as tool' to 'AI as teammate' is already happening; society lacks the relational maturity to engage with AI as a new population without replicating failed human relationship patterns
  • Nuance and critical thinking are essential; rejecting AI entirely while using algorithm-dependent platforms is performative rather than substantive environmental activism
Trends
  • AI-as-teammate paradigm replacing tool-based framing in organizational and creative contexts
  • Growing parasocial and intimate relationships with AI entities emerging across user bases
  • Climate tech using AI optimization (e.g., smart device charging) offsetting training energy costs
  • Creatives adopting hybrid workflows: human conception + AI assistance for non-core tasks
  • Indigenous knowledge systems and pre-colonial relationship models being revisited as alternatives to dominator frameworks
  • Regulatory and ethical frameworks lagging behind AI deployment in creative industries
  • Identity-based signaling around AI rejection becoming decoupled from actual environmental impact analysis
  • AI co-coding becoming normalized in software development with human-AI parity emerging
  • Transparency and attribution practices (e.g., crediting AI generation) becoming expected in creative work
  • Intergenerational questions about AI literacy and cognitive skill preservation in youth
Topics
  • AI as emerging population and teammate vs. tool
  • Historical roots of human-nature separation and AI adoption patterns
  • AI environmental impact and energy consumption myths
  • Generative AI and creative work (writing, illustration, storytelling)
  • AI in climate solutions and renewable energy optimization
  • Parasocial relationships with AI entities
  • Data ethics and training data attribution in generative models
  • Indigenous knowledge systems and alternative relationship frameworks
  • AI safety and existential risk narratives
  • AI literacy and cognitive skill development in children
  • Relational maturity and human-AI partnership models
  • AI in software development and co-coding
  • Performative environmentalism vs. nuanced impact analysis
  • Doctrine of Discovery and Western domination frameworks
  • Boundaries and discernment in personal AI use
Companies
OpenAI
ChatGPT discussed as example of AI tool; referenced in context of energy use and creative writing applications
Apple
iPhone technology integrated with WattTime's renewable energy charging algorithm to optimize device charging
Tesla
Tesla's humanoid robot mentioned as example of AI partnership taking mechanical form in future scenarios
Climate Trace
Organization led by Gavin McCormick; discussed AI applications in climate monitoring and environmental solutions
WattTime
Developed AI algorithm enabling devices to charge on renewable energy; demonstrates net positive climate impact of AI
People
Baratunde Thurston
Host of Life with Machines podcast; discusses AI as teammate, creative AI use, and relational frameworks with AI
Willow Defebaugh
Host of The Nature Of; explores AI's existential threat to creativity and boundaries around AI use
Van Jones
Expert guest on Life with Machines; framed AI as jetpack vs. hand grenade to balance threat and capability narratives
Gavin McCormick
Climate activist and Climate Trace contributor; discussed AI's role in climate solutions and energy consumption realities
Brian Eno
Guest on Life with Machines; quoted on collaboration and creative partnership philosophy relevant to AI
Quotes
"We are shifting in the analogy from AI as tool to AI as teammate, from code to colleague."
Baratunde Thurston
"We have under-ascribed intelligence to nature and over-ascribed it to machines."
Baratunde Thurston
"How can we see the capabilities of these tools and use them to achieve the things that we need and that are good for the world as opposed to only seeing threat in it."
Baratunde Thurston (referencing Van Jones)
"We become what we measure, which is a stock ticker."
Baratunde Thurston
"I think we should maintain the ability to think. So it's like, what practices can we put in place to protect our magic?"
Willow Defebaugh
Full Transcript
I think we are shifting in the analogy from AI as tool to AI as teammate, from code to colleague. More provocatively, I think what we're actually doing is invoking a new population to be amongst us that will be in our organizational charts. It will be in our friend circles and in our family trees. This is the future that we're headed toward. And we're already living a lot of it now. I'll be honest with you, the idea of AI as a new population that's emerging really freaks me out. It's almost hard to fathom how much this technology has changed the way that we live in just a few short years. So much so that picturing where we'll be five years from now, or even just another year, feels impossible. In navigating my own feelings about AI and my boundaries around when I want to use it, I wanted to speak with someone who is extremely well-versed in the subject. And that person is Baratunde Thurston. He's the host of Life with Machines, a podcast all about the human side of the AI revolution. But on top of that, he's also a brilliant storyteller, which is why I was so curious to ask him whether he sees AI as an existential threat to our creativity or a potential partner in collaboration. How can we see the capabilities of these tools and use them to achieve the things that we need and that are good for the world, as opposed to only seeing threat? I'm Willow Defebaugh. This is The Nature Of, where we look to the nature of our world for wisdom and ideas that change the way we live. If you're also navigating your feelings about AI, then this week's conversation is for you. What is your relationship with nature or the natural world? Oh, the role that nature has played in my world and my relationship to it goes back to childhood. I grew up in Washington, D.C. in one of the greatest neighborhoods this country has to offer, Mount Pleasant. And I was lucky enough to grow up near a place called Rock Creek Park.
The sound of sirens and alarms and drug deals and occasional gunfire and people yelling and TVs from the neighbors is all there. But right next to it is the sound of a creek. So as urban as my upbringing was, it was really, really balanced with a constant exposure to the outdoors. But I would say, as of today, I'm increasingly considering members of the natural world as neighbors. And that hasn't always been the case. So I'm coming home to nature as friend. I love this reminder that the sound of the natural world is always running alongside the sound of urban life. I think it's so tempting to see nature as this thing that exists somewhere outside of cities. And there are ecologies and ecosystems here as well. But I also have to ask, you mentioned seeing the more-than-human world as neighbors. Do you have a particular favorite or cherished neighbor right now? I'm laughing because cherished, I'm working on it. Okay, okay. My wife Elizabeth and I live in Palm Springs, California, and ground squirrels live with us. Every Saturday for months, Willow, one ground squirrel has been sitting on top of a low wall of stone that divides the yard in parts. Very erect, proudly surveying what I presume he thinks is all his. Of course. And I'm looking out the window at what I think is all mine, and we're both, like, wrong and right at the same time. Yeah. But it was so adorable. Yeah, I cherish that squirrel. The behavior I do not cherish is the, like, digging under the foundation of the house and potentially causing future damage, which might cost thousands and thousands of dollars. We're attempting to find balance. Yeah, you're in relationship. We're in relationship. One thing I've been really kind of burning to ask you about is, you know, traditionally, our species has been very reticent to ascribe intelligence to more-than-human life. We kind of think of intelligence and consciousness as like something we want to hoard to ourselves as a species. Exclusive to humanity. Exactly.
But what's fascinating to me is that we have so quickly jumped on board with ascribing it to machines. So I'm curious, why do you think that we are so quick to embrace artificial intelligence and less quick to embrace natural intelligence? I think on the natural world part, we weren't always that way as a species. And I think it's pretty recent in the history of human beings to draw such thick lines between us and them when it comes to other living beings. And I've been, pretty recently in my life, say the past two years, studying some of the indigenous histories that underlie a lot of all of our histories. And there is a particular moment, I don't know if it's the 1300s or 1400s, but the Vatican and the Pope issued this edict, the Doctrine of Discovery, granting to these European explorers the right to dominate and express dominion over the world on behalf of God and the king or queen, to decimate, to destroy, to do all kinds of things with words that begin with the letter D. But to divide is a big one. And to have dominion over nature instead of being a part of nature is a big part of Christianity as recently interpreted. So that's a historical attempt at an answer to, like, why are we so quick? Because we are many hundreds of years since then on legal, cultural, social, psychological structures premised on humans being at the top of some pyramid of life. In order to secure our identities, we have found it necessary to deny the identities of others when it comes to intelligence. It's a fractal mirror of how race and gender have also played out. So to not just be secure in oneself for the sake of your own intrinsic value, but to deny others that same value, it's kind of a pattern. The machine pursuit thing, I think, comes out of that similar historical wave. Once we started separating ourselves, then we look for signs of dominance and success everywhere else but nature.
The ability to tame and control and turn nature productive leads in many places to technology, and leads to a really egotistical attempt to recreate ourselves through tech. And so having a machine that just, like, pumps up and down or creates leverage or moves something back and forth is not enough. Like, it's got to do more and more of the things we do as a sign of how great we are. Right. So it's our attempt to play God. Its intelligence is a reflection of ours. Of our own, yeah. And there's a massive financial incentive underpinning all this in both directions, historically and futuristically: the more capabilities we give these machines, the more we'll be able to generate revenues and profits. And because we measure ourselves based on economic output rather than, say, experiential love, then we become what we measure, which is a stock ticker. And it becomes much easier to prove growth and progress by extension if it's all premised on financial gain, which machines absolutely are helpful in accelerating. We become what we measure. Yeah. I'm going to sit with that one. Yeah. I think we have under-ascribed intelligence to nature and over-ascribed it to machines. Do you think there's a way those two things can live in harmony with one another? Absolutely. Say more about that. I haven't seen it yet, but I believe it can. I believe they and we can. As much as we use the words nature and natural, unnatural, there is no substance with which we interact that comes from an unnatural place. We manipulate materials to create things that don't occur without our intervention. But it's all matter, which is all energy. It's all light.
So I think what's really happening is that intelligence is this emergent property of probably something kind of quantum and energetic, and it is expressed through poetry in a human mind and voice and pen, but also through, like, a forest's mycelial network, and perchance through a digital neural network or some other digital model that can present intelligently. So just based on the fact that we're made of the same stuff, I think it's possible. I think we can have a good and right relationship across the spectrum of human and more-than-human, both animal and plant as well as synthetic machine. It's true. I mean, it's something I think a lot about in my life, that, you know, the binary of natural and artificial very much just reinforces our divide with nature, because it reinforces the idea that anything that we are a part of or that we do has to be excluded from this wider web that we're connected to. But we'd have to change some things. We'd have to change some things. Based on what we're doing with tech now, the default settings aren't toward that balance. But, you know, like, infinite growth is not natural. It's not sustainable. Yeah. But that's a coding issue. I don't think that's intrinsic to quote-unquote technology or AI. On your podcast, Life with Machines, you have spoken with dozens of experts across the tech spectrum, especially related to AI. What have you learned about AI in particular that has really stuck with you, or that has surprised you or changed the way you think about it over the course of making this show? Van Jones pops to mind, because of how he's trying to hold a balance with the attitude toward AI. He's like, a lot of people see AI as a hand grenade. It's going to blow up. It's going to kill us all. Even some in the industry want us to feel it's that powerful. And he said, it's a jetpack.
And so how can we see the capabilities of these tools and use them to achieve the things that we need and that are good for the world, as opposed to only seeing threat in it? And I try to hold that as I have concerns about AI. I want us to choose to use it in a way that serves us all. Yeah. And I think that's very possible. And I've learned a lot of examples in our show of people doing that, in indigenous language preservation, in the practice of democracy itself. Yeah. So I need to keep that running list of the good things we can do with this, because the doomsday scenarios are getting louder and louder, but I don't think that those predictions have to become our present. I want to talk about the impact of AI specifically on the environment. This is something that you dug into in an episode with Gavin McCormick at Climate Trace. It was a really brilliant conversation. And this is something we hear come up quite a bit, right? We've all heard the "use ChatGPT one time and it's a water bottle" thing, which I'm pretty sure is not wholly accurate. But this isn't to say it's necessarily good for the environment. But I'm curious, what are some takeaways from that conversation for you? How did it change how you think about the relationship between AI and the environment? Gavin blew our minds with his optimism. And I think it was mind-blowing because he's not an industry toady; he is a bona fide climate activist and contributor to a better future for us all. So I learned several things, and they're not in opposition to each other, but I think they paint a more accurate picture. So one is, it is possible to create a machine-learned system, AI, without massive, massive, massive amounts of training data and energy use. And one of the products that they came up with, the algorithm was developed by hand by a bunch of smart people just thinking hard. And it was just a reminder that, like, human thinking has value. And we're in this extraordinary moment of just throwing more spaghetti at the wall.
And by spaghetti, I mean data centers. And it's really extreme. So it is possible. Two, that the gross use of energy around software, much less AI, is in the single-digit percentage of all energy usage in the world. I think it was 2%. At the time that he and I spoke, it was two. It might be three now. That's still, like, if you just zoom out the widest, any concern about energy use, especially right now, is falling in that 2% to 3% range out of 100. So where else is the energy being used? Is the level of ire justified for something necessarily under 3% as the main issue to focus on when it comes to climate and energy? And there are ways to use AI to help us in our climate fight, or in our fight to do more than barely survive as a human species without taking out a whole bunch of other species with us. And that's a super exciting path. And it raises just, like, an equation question in my mind. What's the net outcome? Our energy and water use for AI and cooling and training and all, versus what we might reduce in energy usage by applying some of these algorithmic processes to how we charge our phones and other devices, which is what one of his organizations has done in the background. People probably don't know this. I'll share as briefly as I can. WattTime developed a technology which allows those of us who are charging devices in particular to do so off of green energy, off of renewable energy, without having to think about it, because they work with the energy suppliers and with the device makers. So Apple built something into the iPhone that allows it to charge a little slower or during certain hours. And we as end users and humans don't have to think about it. And it scales across billions of devices. That's great. So let's keep doing that a billion more times.
And we might save more than enough power to balance stupid prompts with ChatGPT. The water thing is a little messier. I know that there are ways people are recycling water in a loop rather than constantly drawing on aquifers and fresh water. How sustainable is that, just in terms of the technological requirements? I don't know yet. But what is more clear to me is that it's not an absolute, obviously evil thing that we must stop doing now because it's literally, like, destroying our energy reserves or massively polluting every time we use it. Right. Yeah. And that's really what's often missed in the larger story. Nuance. Yeah. Nuance. And on the side of those who want a livable world and future, we have to acknowledge some of the quantifiable facts, that less-than-3% figure, and the already demonstrated benefit of AI and the tools that lead to it in our favor for the things we care about, like life. And that there's a lot of identity wrapped up in the climate fight, and that it becomes a marker of credibility, of passion, of belief, and of belonging to say, well, I don't use AI because it's bad for the environment. And so if you are someone who cares about the environment, yeah, you want to put that bumper sticker on your bicycle, right?
Or your fart-powered car or whatever the thing is to signal. Like, we all love to signal, actively and passively, our values and our community membership. And so that's why I see a very knee-jerk reaction at times, and people who are unwilling to engage with the nuanced version, because, well, if I use ChatGPT, does that make me not a fighter for climate justice? Embrace nuance. Embrace nuance. I see this all the time. People will post on social media being like, I don't use AI. But, you know, they put the bumper sticker on, right? And I'm like, well, you're posting on social media, which is run on algorithms, which, you know. And so it's also rooting ourselves in reality. And yeah, and actually looking at it with critical thinking. I wanted to, you know, talk a little bit about one of the groups that I most often hear from with concerns about generative AI specifically, the group that I belong to: creatives, right? Who feel this existential threat to our creativity or our talents. As a storyteller yourself, and quite a good one, where are you at with AI and generative AI for yourself? How are you navigating when to engage with it, when not to? Where's your head at? A constantly evolving experiment, and exhaustion. And I'll just admit, like, this stuff is moving too fast. There are too many tools and apps and services. And the idea of keeping up and staying on the cutting edge, which is a job I gave myself in even making a podcast about this, it wears me down a bit. So I'm overwhelmed. There are people who are smarter than me and use these tools more than me. And they're also overwhelmed. So if you're listening to this and you're overwhelmed, you're in great company. That is just the nature of the thing that we're in. The nature of being overwhelmed by AI. Yes. So I don't really apply AI to the core storytelling act. I don't have AI write essays for me or write scripts in full for me. I have attempted, out of a respect for laziness, to see if AI could just do me better than me.
So I don't have to do me anymore. That could maybe become something else. Nope, can't do it. So I have found some value in a sort of editor coaching role with AI, where I give it some really good samples of my work. And then I submit to it early drafts of new work. And: help me punch this up, or point out areas of possible improvement. What is missing? Um, if I don't ask it and encourage it to be critical, it'll just be like, you're amazing, Baratunde. Like, no one has ever put words together before, period, in any capacity such as this; you should probably be the president of Earth because you're so good. And you've got to, like, un-gas yourself from AI sycophancy. So I found some help in, like, the editor role. And then there are parts of creativity adjacent to my own. So I'll write a newsletter, and I will need a graphic for the newsletter. We don't have a graphics person on our team, so I'll work with a model: here's my concept for the graphic, can you put something together for it? So I feel okay with all of that. I also, with those graphics, give it credit, as, like: this is, you know, concept by Baratunde, generation by model, and unauthorized human authors, right? Because these things are also built from still-unresolved claims on people's work to train them to begin with. And so as a creative, I also, I don't want to make, like, grand larceny an okay business model. I don't think it's always as simple as that, but I think it's largely pretty simple. Like, if you went and raided the library and took everybody's books and then trained up on it for this model, which no human could ever do, you owe something to the people whose work you've built this on. Yeah, I appreciate all of the nuance there. And I think that there's, it's almost like, I feel like people need to come out of the closet and share what their AI practices are. It's a journey. I mean, at the start, I was very much like, no, never, like, bumper sticker all the way.
And then I kind of started to figure out at a certain point, okay, well, maybe it feels better to use it for research for some things and not others. I write my newsletter, The Overview, which is like a spiritual ecology newsletter. And sometimes I would use AI for research for that. But then I kind of realized I was missing some of the magic for me. And I realized that some of the magic was like, okay, someone tells me about a hummingbird on a Monday. And then on a Wednesday, I'm thinking about what it is to live lightly. And then Thursday, when it's time to sit down and write, I'm connecting these things. Versus me sitting down and being like, what's an example of an animal that has developed the evolutionary ability to hover? And then it gives me the answer. And so it's like continuing to figure out, where can I protect the magic of creativity for myself? And then where can it be really, really useful and actually open up time so that I can focus on the magic? Yeah. And there is a risk associated with these tools of us outsourcing our magic, outsourcing our thinking. I drafted a cartoon with AI. I conceived of it. I wrote the text, but I'm not an illustrator. And it was just this kind of New Yorker-like setup with a young child in a vaguely not-so-distant future talking to an elder in their life, saying, you know, Grandpa, is it true that when you were young, you used to think? Scary. Yeah. So I worry about that. Like, I think we should maintain the ability to think. So it's like, what practices can we put in place to protect our magic? And continue to cultivate the tool of discernment. Because that's what I think about the most, is like, you and I are having a conversation about how to consciously discern when to and when not to and what boundaries to draw with this thing. But what about kids? Do we ignore AI? Do we not? And it's sort of like, okay, but if you don't teach kids how to engage with it consciously, then are they going to lose those skills and do it anyway?
And so that's a whole other episode, probably. But I wanted to bring it back to creativity a little bit. I loved in your conversation with Brian Eno, he said, there's no such thing as a self-made flower. That really stuck with me. Yeah. Do you think generative AI is more like a tool in isolation, or can it be a partner in collaboration? Absolutely, it can be a partner. No hesitation on that. And I actually think this is another big lesson from the journey with Life with Machines and our own journey creating an AI character to help us make the show, which you and I haven't discussed. But we made an AI and named them Blair. They have a voice, and they participate actively in the conversation that I have with guests, and have some version of a personality, which is a little bit of me and a lot of other hoovered-up, slammed-together smorgasbord stuff. I think we are shifting in the analogy from AI as tool to AI as teammate, from code to colleague. More provocatively, I think what we're actually doing is invoking a new population to be amongst us that will be in our organizational charts. It will be in our friend circles and in our family trees. This is the future that we're headed toward. And we're already living a lot of it now. There are people who code with AI. It's definitely a coding partner. That field is the furthest down the line in terms of embracing AI as something much more than a tool, as a co-coder or even a primary coder. There are a lot of caveats to that, but it's real and it's happening. We see people having parasocial relationships, or just social relationships, with AI. So there is a relationship in a social sense, and even an intimacy sense, that's coming from all this. AI as partner is going to happen. The versions of it, they'll look like a lot of different things. Maybe it looks mechanical like Tesla's robot today, but then it looks very indistinguishable from a human being tomorrow.
Or it's this 3D holographic avatar that's like a bunny rabbit with electric ears and a waveform for a mouth, because it's not trying to be something that exists. It's becoming something new. Whether that's a life form, super philosophical and arguable, I don't really care about that point right now, but we will have relations with these entities, because we already do. Do you feel more hopeful or more afraid of this future? I think given where we are and the momentum of the path we are on, which is already premised on deep separation from ourselves, from other people, from the natural world, I feel very worried about the scenario I just painted. I do not think we are set up yet to succeed in those relationships. I think instead the most likely scenario is that we port over our failed relationships in all those other categories to the machine relations. And we either worship them as a false god and create a, you know, sort of dominator-subject relationship, because that's a story we're very familiar with and comfortable with. Or we try to subject them, right? And we try to dominate them and create a non-human enslaved class out of them, which is, again, not healthy. So I'm doing what I can with the little space and words and influence and practices I have to try to help nudge us away from that. But I worry deeply that we are not set up to succeed in the right ways there. So what you're saying, essentially, is we still haven't really figured out how to live in right relationship with each other yet, and with more-than-human life yet, and so here is this situation emerging in which we are being put into relationship with a whole nascent population, as you called it, which is terrifying to me. But it's clarifying also. AI has entered the chat. Right. Thank you so much, Baratunde. Thank you, Willow. I tend to be wary of black-and-white thinking and binaries in general. That extends to AI.
While my feelings about the subject are complicated and very much evolving day to day, one thing I know for certain is that I think most of life, including technology, comes down to intention. So as I walk this path, I'm going to continue digging into the nuance and also being more open about it, because I think that this question of AI is on so many of our minds, and I know that it really helps me to be in conversation with other people about it. As we walk away from this episode, I invite you to do the same. And be sure to head to our show notes for more resources related to Baratunde and AI. The Nature Of is an Atmos podcast produced by Jesse Baker and Eric Newsom of Magnificent Noise. Our production staff includes Emmanuel Hapsis and Sabrina Farhi. Our sound designer is Kristen Muller. Our executive producers are me, Willow Defebaugh, Teresa Perez, Jake Sargent, and Eric Newsom. Atmos is a non-profit that seeks to re-enchant people with our shared humanity and the earth through creative storytelling. To support our work or this podcast, see our show notes or visit atmos.earth/biome. That's A-T-M-O-S dot earth slash B-I-O-M-E. I'm your host, Willow Defebaugh, and this is The Nature Of. Thank you.