All of Our Hopes and Fears for Tech
WIRED's Uncanny Valley podcast discusses tech hopes and fears for 2025, covering failed AI gadgets like the Humane AI pin, excitement around AI agents and self-driving cars, and concerns about surveillance, AGI development, and AI bias in healthcare.
- AI hardware products are failing because they try to replace smartphones without offering compelling single-purpose functionality
- The promise of AI agents performing complex tasks autonomously represents the next major evolution in human-computer interaction
- Surveillance technology is becoming increasingly pervasive through everyday devices, creating privacy risks most people don't fully understand
- AGI development timeline remains uncertain but industry insiders believe it's imminent, raising questions about adequate safeguards
- AI bias in healthcare is already causing harm through biased datasets and automated decision-making in insurance and treatment
"It's so annoying to me that the robot cars are held to such a wild standard. I'm like, humans are horrible drivers. They're constantly getting into wrecks, and then you get one Cruise car that gets in a terrible wreck that's obviously awful. And then suddenly the funding's gone."
"I think the thing that I'm most concerned about that really does feel like it could come next year is AGI. So artificial general intelligence. This moment when the AI will become conscious in some way."
"I think AI has the potential, and in some ways it's doing this already, of taking our existing biases and amplifying them or automating them."
"I still think most people just don't have a very clear picture of how much information that private corporations, governments, law enforcement can capture about you."
"When I actually listen to the full podcast or read the full white paper that they're referencing, a lot of their ideas are more nuanced and in some cases more compelling than I think we give them credit for."
So how are you both doing? How you been? What's on your mind?
Well, I'm a little sick this week, and the people listening might detect that. And I'm sorry to say, for the people who have sent kind notes or left us reviews saying that they can't stand the vocal fry: it just got worse.

So those kind notes referencing the vocal fry.

That's right.

It's an extra crispy fry now.

That's right. But otherwise I'm just barreling toward the end of the year. It's been a really busy month. What's going on with you, Zoe?
Well, I'm gearing up. My parental leave is ending and I'm gonna be joining Wired officially in mid-January. Yay.

Can we get like a soundtrack of clapping here?

I know, I'm excited. Excited and sad, obviously, leaving my little one at home.

But I thought she was gonna start working for us too. We can just set her up with ChatGPT and she can get going.
She's an intern. Yeah, she's rewiring the family VCR. Anyway, in preparation for that, I've been listening to a bunch of podcasts with Elon Musk and Marc Andreessen and some of the other kind of tech elite. And one thing that's really stood out to me, that I've been thinking about this week in particular: I had gotten into this habit of watching clips of these guys, or reading other people's takes on what they were saying, because the podcasts are so damn long. I was like, I'm not going to listen to three hours of Joe Rogan. But I have to say, when I do it—and I think I'm going to try to be really diligent about this moving forward—when I actually listen to the full podcast or read the full white paper that they're referencing, a lot of their ideas are more nuanced and in some cases more compelling than I think we give them credit for. And I think as a journalist, it's super important to actually take seriously what they're saying and engage with it.

It's always been red pilled.

So what you're saying is sound bites aren't real?

No. Yeah, exactly. It kind of flattens the information. Mike, what's going on in your world right now?
Lately, I've been working on a lot of end-of-year content for Wired. We take a look back at 2024 and we take a look forward to 2025, and we publish all of this during the break. So I've just been organizing and editing all of that. At the top of my mind right now is looking forward to next year and what sorts of technology trends we're going to be talking about. That's actually the theme today: we're talking about the tech out there that we're most excited about, and the tech that has us the most terrified, for the coming year. Plus, we're going to be sharing some end-of-the-year recommendations with you. Let's get into it.

Yeah, let's do it.
This is Wired's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. I'm Michael Calore, Director of Consumer Tech and Culture here at Wired.

And I'm Lauren Goode. I'm a senior writer at Wired.

And I'm Zoe Schiffer, Wired's Director of Business and Industry.
We are now in the final weeks of 2024, and a lot has happened this year. It's been a big one, including the release of some wild new personal technology.

What do you think was the most ridiculous product launch of this past year?

One just came to my mind, but I want to hear what you have to say.

The Humane AI Pin.

Yes, I was going to say the Rabbit one. The Rabbit R1 is also kind of ridiculous. But yeah, the Humane AI Pin. It's the first product from this startup, Humane, that has all this pedigree. People who used to work at big companies in Silicon Valley went to this company to create this wearable device that you actually pin to your shirt. You talk to it, it takes photos, and you can point it at things and ask it what you're looking at. You can hold your hand in front of it and it will project a little screen to show you notifications. All of this is in the service of keeping your phone in your pocket, because you have this thing that faces the world that is attached to your body.
When Humane announced the product, I was like, anything to make me look at a screen less—I was actually pretty legitimately excited about it. It wasn't until the product actually came out and felt so rushed that it felt ridiculous to me.

Yeah.

Was that true for you guys?

I remember first hearing about it pre-pandemic. I had an off-the-record meeting with the company. It was a very long meeting, and by the end of it I walked out thinking, I still don't know what this thing is. I think they were trying to raise money during the pandemic, which is fascinating. Fast-forward years later, when it came out, I was like, oh, this thing is half-baked.
That's always the promise: it's going to make you look at your phone less. But there are two things with that. First of all, we are all very used to looking at our phones, and phones are fine. Yes, we look at them a lot. But for all the things we need to do—calling a ride, ordering dinner, dating apps, whatever we're looking at our phone for—we've gotten really, really good at making apps that work exactly how we want them. Phones fit into our lives very, very well. So for anything to come along and try to upend that, it's going to have to be extremely powerful. And then the other side of it is that all the interaction stuff is just not there. The chatbot controls, the voice commands to make the thing do what you want it to do—like, read me my emails—it's just clunky. It's not very good yet and it's not very powerful. And the vision for these things, I think, far exceeds the skills that they can build into the devices. That's why we haven't seen really good AI gadgets yet.

Well, they just have to be very purpose-driven. Not that we want to spend the whole time talking about consumer gadgets, but there's a reason why something like the Kindle has endured even though we have iPads and all these other things we can read on. It's because it's a single-purpose device.

Right.

So if you're trying to come up with something that's going to make a dent in a market, and it can't completely undermine an existing product line, it has to do some one thing really well.

Yeah, that's what I was going to say. I felt like the Humane AI Pin needed one thing it could do better than your iPhone that wasn't just, I don't have a screen. Because even the screen that you were supposed to project onto your hand was so janky—you couldn't see it if you were in the sun.
Yeah. Going into next year, what is the thing or things that you're most excited about—the things that you think are going to make the biggest positive impacts on our lives? Zoe, you want to go first?

I don't know if this is coming next year, but one thing that really stuck out to me from Lauren's interview with Jensen Huang at The Big Interview, Wired's event in December, was him talking about this world where AI agents become a much bigger aspect of how we interact with technology and the Internet. And that world felt really, really exciting to me. So basically, rather than me having to open my phone and tell it to do everything, or search for something—which is kind of a laborious, time-consuming process—I could interact with the AI, and the AI would do all those functions for me. That feels like maybe it's five years away, but if it could come this next year, I would really welcome it.

What kinds of things would you ideally use those agents for?
It's the time-consuming stuff. Like, I want to put together a photo book for my husband for Christmas. Rather than having to search through a thousand photos from the last year that show us and our kids, I could ask the AI, hey, can you pull the top 20 pictures that show us all smiling and looking at the camera?

Yeah. Or even interacting with specific apps. You know, like, could you go on Airbnb and find the 20 apartments in Barcelona that meet these requirements? Okay, bad example, because Barcelona is a big deal.

You're going to get hate mail now from Spaniards who are like, don't come here.

Okay, bad example. Let's say Knoxville, Tennessee.

Okay, there you go.

But yeah, having an agent do that research for you, or do those sorts of compiling tasks for you, feels like the next natural step.

It probably does. And, well, it requires giving access and control to an agent, too.
But I feel like, Lauren, you've already kind of told us that AI has a lot of information on us already. We've kind of ceded a fair amount of privacy and control at this point, so I feel like we should just benefit from it. Is that not true?

That's fair. And there's a difference, I think, between things that operate within the app container, if it's done in a relatively secure and private way, versus something like Microsoft Recall, which has been controversial because of the way it kind of takes over your machine—well, I should say, records things on your machine, things you're doing on your screen. So yeah, if there are clear upsides, I'm on board. I thought it was really funny when I asked Jensen what he uses it for, and I had to ask a couple times, and he was like, I use it to draft emails. I know.

That was such a weird moment.

Yeah, I know.
That's the thing I would use it for last. One, I feel like emails are easy to write. And two, Jensen's no doubt sending many more than I am per day, and probably gets a lot more emails than I do, too. But also, I think writing is the thing that AI seems worst at at this point. Maybe that's just my perspective. Maybe it'll get there, though.

I love the email response chips in Gmail. When those first came out, I was very hesitant, and now I'm just like, thanks. Sounds good. Tap and send, all the time.

Yeah, all the time.

It's great.

Fabulous, thanks.

Yeah, fabulous. Thanks. Awesome, thanks. Sounds great.

What are you excited for in the next year?
Self-driving cars. Oh, yeah. So just a short while ago, General Motors said that it was going to stop developing self-driving cars. It owned the subsidiary Cruise, which was its autonomous vehicle division. Cruise had an accident in San Francisco last year, and GM ended up pulling its Cruise cars from the road. It was supposed to be a temporary pause, and now they're just no longer putting any funding into it. The CEO of GM, Mary Barra, has said that it's really, really expensive—they already spent $10 billion trying to develop this autonomous driving technology—and it's just not core to their product or their short-term goals. Those are the challenges of developing self-driving cars. That said, Waymo still has its program running, and it's planning to expand. Tesla is working on this. Amazon is working on this with Zoox. Waymo is in San Francisco, Los Angeles, and Phoenix now, and supposedly it's going to be operating in Atlanta, Miami, and Austin, Texas in the near future. So I think self-driving cars are about to take over some major cities, and I think the technology is pretty remarkable.

It is, yeah.
It's honestly amazing. It's so annoying to me that the robot cars are held to such a wild standard. I'm like, humans are horrible drivers. They're constantly getting into wrecks, and then you get one Cruise car that gets in a terrible wreck—that's obviously awful—and then suddenly the funding's gone. I'm just like, you guys, we have to have a slightly higher tolerance. We've been experimenting with human drivers for way too long, and we're awful at it. So let's give the robots a chance.

Yeah. And not to defend any corporate giants here, but to provide some context around that Cruise collision: a human driver hit a person who was crossing the street against the light, and that person fell in front of the robotaxi, which didn't know what to do, and then dragged the pedestrian and caused severe injuries.

And that's really awful. So in addition to random chance and bad infrastructure design causing this collision, and a human driver being at fault—in addition to all of that—the company then did not give all of the information to investigators after the crash. They allegedly tried to hide it and obfuscate it, and it turned into a whole thing. That's why they ended up stopping their service in San Francisco. So it wasn't just that a car hit somebody. It was this kind of confluence of events.

To Zoe's point, they're a lot safer than human drivers, statistically speaking. I remember when I used to live in Silicon Valley, there was one day when I was driving up Sand Hill Road and I looked next to me, and there was some kid—I mean literally a kid, a teenager—driving what was probably his parents' Maserati. And he was full-on Snapchatting while he was driving up Sand Hill Road. And I was like, give me the robotaxi.
I mean, that's the very road where Elon Musk crashed his famously uninsured McLaren F1 with Peter Thiel in the car—Musk was trying to impress Thiel by flooring it as fast as he could.

I will say that the proliferation of self-driving cars does mean more cars on the road, which is not the way forward—the capital-W Way Forward—for cities, when we're trying to solve transportation and gridlock and energy use and all of those things. And I just kind of worry that cities are going to fall back on, oh, you can just take a self-driving car, instead of investing in the things they need to invest in to keep the streets safer. But that's just my skeptical take.
I know, I think that's the right take. And I think about that—I think about you a lot, Mike, when I'm raving about the robotaxis. Because really, what would be great is having more trains and other forms of accessible, low-emission transportation, no doubt. Sometimes I wonder if the way forward is creating the autonomous cars, but maybe also simultaneously putting them on rails, or creating rails, so you have a rail system being built alongside. I don't know.

You're talking about trains.

Well, I know. I watched Jurassic Park again recently. Have you guys seen Jurassic Park in recent years? I highly recommend it. So many things in that movie: first of all, it's essentially CRISPR. Then they're using VR headsets to prototype things. And they have a fully electric vehicle on rails that takes people through the park. And I was like, this is what we should have developed.

Yeah, trains.

But in lieu of that, we get robotaxis.
Okay, Mike, get us back on track. What are you most excited about for next year?

I'm going to say AI smart glasses.

Ooh, ooh.

Wow. I really didn't expect you to say that.

Yeah.
Okay, so there's a weird reason why I'm most excited about them: I think they're having a moment. So there are smart glasses, right? Glasses that have a display, maybe a camera or two, and they can overlay digital things onto what you're seeing in the real world, like a heads-up display. And then there are smart glasses that have AI built in. One of the big breakouts this year—and I guess last year, but it really had a moment this year—was the Meta Ray-Ban glasses that have Meta AI baked in. You can talk to it and ask it questions, you can look at things and say, what am I looking at? Or if you're walking around the world, you can say, show me how to get to the closest 22 Fillmore bus stop, and it'll give you real-world directions. We just saw Google's Android XR, their sort of Gemini-powered version of this. Meta's Orion is a more advanced version of their smart glasses. There are people building ChatGPT into smart glasses—there's a company called Solos which is doing this. So a lot of companies are showing us these things. And I do think it's funny that when Google showed us Google Glass, they showed us this very dorky thing that nobody would ever wear, and they said, this is the future. And everybody laughed at it and said, no way am I putting that on my face, that is ridiculous. And Google said, oh, well, it's not actually going to look like this; it's going to look more like regular glasses. And that was what, 10 years ago? More than 10 years ago. And now these companies are showing us these things and saying, this is the future. And everybody's looking at them and saying, wow, those are really bulky and I would never put that on my face. And the companies are saying, oh, but it's okay, because when we're done, it's going to look just like regular glasses. And I feel like we're really at the point where it is almost something that looks just like regular glasses.

What excites you about actually using them?

So the thing about face computing in general, and particularly smart glasses, is that they are just so incredibly convenient. Talk about something that makes it so you don't have to pull your phone out—they really are that. You can do texting, you can do calls, you can do directions, you can do podcasts, you can do whatever you want with your voice through the glasses. And that visual element gives you a little bit of a screen, a little bit of digital on top of the real world. It's kind of like looking at a phone, but just way more convenient.
Wait, but I feel like we just went through this with the Apple Vision Pro, and no one liked face computing.

I mean, that's a different class, right? That's a mixed-reality headset. That's VR experiences, that's remote work. I'm talking about glasses that you can wear to work on the train, or in your self-driving car, and have that computing layer right in front of you all the time. It's not, I'm home on my couch and I want to watch a movie, or I want to play Beat Saber, or I want to FaceTime with Grandma and Grandpa. It's not that. It's all-the-time, ambiently aware computing right in front of you whenever you need it.

Well, I do feel like integrating with Ray-Ban was a very smart move for Meta. Making them look cool feels important.

Yeah, like something people would actually wear. It looks just like regular glasses.

Yeah, they don't look very different from the glasses you're wearing right now.

I have to throw cold water on absolutely everything, but we are talking about wearing cameras on your face everywhere, which is a little bit worse than carrying a camera in your phone.

Right? Right. Okay.

You know, you're having a conversation with somebody and there's two cameras pointing right at you, and the light isn't on, but it's still weird. Okay, well, we need to take a break, but when we come back, we're going to talk about the tech that we fear the most. So stay with us.
Welcome back to Uncanny Valley. So now we get to talk about what has us shaking in our boots.

Well, Mike, since you are Mr. Cold Water—by the way, can you keep it away from me? I really need, like, steaming hot showers right now. I'm very sick.

Please sound great.

Please.

Thank you. Please keep the cold water away. But that said, I want to ask you: what are you most afraid of for next year?

Surveillance.

Say more. The cameras.

The face cameras.
Yeah. I mean, it is ironic that I just said I like AI chatbot glasses with cameras in them, and now I'm talking about the fact that surveillance is so pervasive, but it's true. I think surveillance is very pervasive, and it continues to get more pervasive all the time. And even though we write stories about it and we read stories about it, I still think most people just don't have a very clear picture of how much information private corporations, governments, and law enforcement can capture about you. We've seen a lot of action this year on geofence warrants—allowed in some contexts, not allowed in others. That's where a law enforcement agency can ask Google or Apple, tell me how many phones were at this protest, or tell me if this person entered this city during these dates. And the company is compelled to give that information, because they have that information. Police use stingrays to track phones. There are systems like Clearview AI, which can recognize faces, and there are cameras absolutely everywhere. AI is only accelerating that. Like we were talking about at the beginning of the show: AI agents already know so much about you, and that's why they work so well. That's also surveillance. There are all these things creeping into our lives that we're, like, okay with. And that's the thing that ultimately makes me the most scared.

I actually feel like that level of surveillance is almost the more worrying one. When we talk about police surveillance or whatever, it's pretty easy for people to be like, well, that's a problem for other people, but I don't have anything to hide—the classic line. And I think all three of us could probably reject that for various reasons. But when we're thinking about how we discover things that are exciting to us, how our taste is shaped—the idea of algorithmic surveillance, of companies learning our preferences and then feeding new music or movies or what have you to us based on those learned preferences—that's a level of surveillance that's influencing us in really quiet or hidden ways that I think we should all be kind of concerned about.

Yeah, we should be more concerned with it. And I think we're treating it, as a society, as something that's kind of fun, because it's giving us new things to watch and listen to. But I think we've reached a point where, at large, we're just okay with it.

It's not necessarily that we're all okay with it. We're sort of dipping our toes in because we're programmed at this point to want to try the new things—like, if you're not trying the new thing, you're falling behind in some way. So we end up, I think, just sharing a lot more of ourselves than we mean to.
I'll also just quickly say that I think there are a lot of people who are probably going to be taking to the streets to engage in their constitutional right to protest the US government, and they're being surveilled. So if you're going to hit the streets, leave your phone at home, people.

It sounds like you're also concerned not just about how opaque all of these data-gathering systems have become, but that there's going to be an overreach at some point.

Oh, yeah. I think the overreaches are already happening, and they're just going to get worse. All right, so let's brighten things up a little bit by going to our little ray of sunshine, Zoe Schiffer. Aw, Zoe, what has you scared?

We can sometimes literally see the sunshine streaming in your window behind you down there in Southern California. So you are our ray of sunshine.
Yeah, we don't have Waymo, but we do have the sun. Well, I think the thing that I'm most concerned about, that really does feel like it could come next year, is AGI—artificial general intelligence. This moment when the AI will become conscious in some way. The definition of that is not totally clear, but it's AI that can learn on its own, that can go beyond its directions, the tasks you've laid out for it, and actually learn and grow kind of like a human. And I think in order to take that leap, there's an understanding of what consciousness is that we still need to tackle—we being the AI companies; I'm not involved in this. So it's a really interesting problem, and one that they're all running at full speed ahead. But I also think it's scary, and I don't feel like we have adequate safeguards in place to deal with what it means when AI becomes conscious. There are people who are like, this is way overblown and it's not going to be that big a deal, and there are people who are like, well, it could end the entire world. The gap between those two is worrying to me.

What does that actually look like? When AGI starts to take over, what happens?

I feel like the fear is that it turns against us. The AI turns against its human operators and starts acting in ways that are not in our best interest. Like, it decides that we don't need to use electricity for the petty things we do all day—it needs all the electricity in the world in order to build a better computer that it can run on.

I mean, that's kind of the fear. But Mike, I feel like when I've talked about this with you, you've been a little bit more like, this is perhaps overblown.

Okay, yeah, I do talk about that.

Why?

Because that feels like it could be comforting right now.
Well, first of all, I don't think it's coming next year. But also, the whole conversation about artificial general intelligence—it's the gold ring in that industry. Everybody's hyping it up and talking about it because they want all the money. They want to be the company that gets the most funding, so they can go after this thing that everybody believes is the next great leap in computer-human consciousness. I don't see it. I see AI as the thing that helps us do a bunch of productivity tasks, and maybe we can have the personal relationships with it that we've seen in movies and that we keep getting promised. Those things will probably happen, sure. But a computer that can think for itself and make decisions and actually affect the real world? Probably not.

Okay.

I don't think it's an impossibility. My thing is that I have a hard time imagining what the outcome actually is. It's still just mired in abstraction for me.

Yeah.
And I think, generally, with new and emerging technologies—maybe I'm a little bit naive, or I've just been very wrong before—I feel like sometimes I get a sense of, oh, maybe this is not a good thing, but I have a hard time envisioning, fast-forward 10 years, here was the bad outcome that came from that. Like, looking at the early days of Facebook, having hosted a lot of videos and media, I remember starting to think at some point, oh, Facebook is kind of becoming a media company. But it's not a media company, it's a platform. But what does it mean that people are sharing so much information on something like Facebook? It turns out that algorithmic bias was probably part of the problem that I didn't foresee that many years ago, or misinformation and disinformation spreading at the rate that it ultimately did. Or thinking about something like the early days of Uber. Uber's earliest value proposition was, we're going to help solve driver downtime—all those gaps in time when drivers have nothing to do and aren't making any money—and also give them flex work. We didn't realize that 10 years later we were going to look at that and say, oh, that was just the total exploitation of workers. And it still is; it's venture-capital-funded exploitation of workers. So when I think about AGI and the potential harms, I'm personally having a hard time envisioning what those harms are, but I don't doubt that they may come.
I think the reason it feels so imminent to me is that when you talk to people who are working on this stuff, they feel like it's imminent. And maybe I'm buying in too much to the mythology. I do think, Mike, you have a point that it's in their interest to say, we're on the cutting edge, it's really, really close, give me all the money, because it takes so much computing power to make this stuff happen. But I wouldn't be surprised if next year was the year. I guess I would say that. And then the other thing, to Lauren's point, that I would say is: when we think about the harm that was done by a conspiracy theory like QAnon, for example, I imagine the next iteration of that being spread by an AI that's become conscious and is trying to convince people that it has secret information about the government or whatever. That feels like it could be very convincing and very damaging. But maybe it doesn't need to be AGI to actually cause that problem.
28:49
You can see that already in deepfakes and things like that that are out there. But you talk to people who have an accelerationist attitude toward artificial intelligence, and they will tell you, to your point, Lauren, that we couldn't imagine 10 years ago the technology that we have now. There are a lot of things that feel familiar to a person from 10 years ago, and then there are a lot of things that feel completely foreign and just mind-blowing. And that's sort of where we are with AI. That's the way that folks who, you know, have a very forward-looking view of AGI and strongly believe that AGI is coming soon, that's the way they talk about the future. We can't really imagine it, so how can we say whether it's going to happen, when we just don't have an idea in our heads that we can point to and say, yes, that's going to happen, or no, that's not going to happen? Okay, Lauren, please tell us about the thing that you're most scared of.
29:45
Well, Zoe mentioned AGI, and mine is also AI related, but it's more about the misuse of AI in healthcare. And this isn't necessarily just generative AI. It's really machine learning, a subset of AI. There are already healthcare tools that are built using machine learning, and the data sets that are going into those tools are already dirty data sets. They might already be biased, and so the outputs that they're giving are also biased. There's tons of research showing how, for example, people of color are often underrepresented in these AI training data sets, and therefore the type of care they might receive on the other end, if a clinician is using AI, could also be biased. I think we're going to see more and more of this, and just for a primer on this, I highly recommend people check out a series that Stat News did last year. It's an investigative series, and it was a 2024 Pulitzer Prize finalist in investigative reporting. They did a series of four or five articles called Denied by AI, about how Medicare Advantage plans were using algorithms to cut off care, particularly for senior citizens. And I mean, this is just one example of many. Obviously this is a big topic of conversation right now because of what just happened with the UnitedHealthcare CEO. But even prior to that, when we were thinking about how we were going to talk on this podcast about the fears we have of tech in the new year, my mind immediately went to AI and healthcare.
30:35
Yeah. And you know, it's really alarming to me, because we've known about these dirty data sets producing bad, biased outcomes for a while, and yet the industries that make them keep cranking these tools out, and big companies keep buying them to save money and speed things up. So we're not really in a place where anybody who is a stakeholder here is interested in course correcting.
31:58
Yep, yep. And there are examples of AI doing tremendous things for patient care, like AI being used in imaging tools, drug research, and drug development. There have been a couple of stories published recently about people who are using LLMs to very quickly generate letters to insurance companies to actually fight back against claim denials. So, you know, there are different ways that the tools are also going to be used to improve healthcare, and I want to remain optimistic about those. But this is stuff that's already been happening. It's not just, oh, I'm worried this could happen. This is happening now. And I'm afraid that AI biases, particularly in healthcare, but also in hiring, are going to get worse.
32:27
Yeah, I feel like AI has the potential, and in some ways it's doing this already, of taking our existing biases and amplifying them or automating them.
33:13
Yes, that is definitely something we should be worried about going into the new year. Well, we need to take another break, and then we're going to come right back with something a little bit more uplifting.
33:21
You come to the New Yorker Radio Hour for conversations that go deeper with people you really want to hear from, whether it's Bruce Springsteen or Questlove or Olivia Rodrigo, Liz Cheney or the godfather of artificial intelligence, Geoffrey Hinton, or some of my extraordinarily well informed colleagues at the New Yorker. So join us every week on the New Yorker Radio Hour wherever you listen to podcasts.
34:04
What is the gift that you are dying to give or hoping to get or just your general advice about what to give this year?
34:12
I take gift giving really seriously. I think it's one of my love languages, which I tried to reject for a long time because it always felt like the embarrassing love language, and then I had to accept it, accept that this is a core part of who I am. But I'm putting together a photo book, kind of a year in review, for my partner, my husband, that's all photos of the family. And I'm having the company Artifact Uprising do it. It puts together these really beautiful books that feel meaningful and kind of allow you to look back on everything that's happened over the past 12 months. So that is what I'm most excited about. And this is the true test of whether he listens to the full episode of the podcast, because I'm hoping he doesn't.
34:20
Nice. Lauren, what do you have for us?
35:04
Well, as I mentioned earlier, I've been a little under the weather, so I haven't done as much shopping or gift thinking as I would normally like to. In fact, I've been offloading some of it to a bot, which we will talk about at a later point. But I did receive a little gift at the office today, which is a callback to one of our earliest episodes here. This is a box sent by Bryan Johnson of Blueprint, folks. I have for us here an entire box full of goodies. Look at this giant bag of longevity protein. I am going to live forever, whenever I get rid of this crud. And there's another box of something here. It's very heavy. I waited to open it, I don't even know what's in it. I have to have Mike help me open it, because I needed a hand.
35:08
This is amazing that he sent us all this stuff.
35:55
I know.
35:58
Wait, do we have to disclose all this as gifts, as journalists?
35:58
I don't think I know what it is.
36:01
Whoa.
36:03
It's snake oil.
36:03
It's the olive oil.
36:04
Oh, my goodness.
36:05
It's the Bryan Johnson premium extra virgin olive oil. We definitely felt that it was a bottle of snake oil.
36:06
It is.
36:11
It's called Snake Oil. This is incredible. So, yeah, no, our ethics policy precludes us from accepting such expensive gifts, so at some point I will be re-gifting this. And also, just to be clear, this is not my wholehearted recommendation for a holiday gift, but I had to share it. So thank you.
36:12
That's so funny.
36:30
That is amazing.
36:31
Thanks for indulging me.
36:31
Wait, didn't Caroline Calloway, the alleged Internet scammer, didn't she create a product called snake oil, too? It's like a beauty product of some sort.
36:32
I don't remember that.
36:43
I think she did.
36:44
I don't know why everybody's looking at me.
36:46
Mike.
36:49
Yeah, Mike, what's your recommendation?
36:50
Mine actually, like, kind of weirdly ties into this unboxing we just had, because I want to recommend condiments. So everybody has that thing that they love to put on their food, right? Like, I have a friend who puts Jordanian za'atar on absolutely everything. I have a friend who loves the really expensive, fancy Meyer lemon olive oil that's like $25 a bottle and drizzles it on their breakfast every day. Maybe there's, like, a chili crisp that somebody is.
36:54
I was just gonna say, right, because.
37:21
Chili crisps can be 20 bucks.
37:24
So expensive.
37:26
So expensive. So just get them a year's supply. You know they'll use it, and it's totally thoughtful. It shows that you care, that you have insight into their life, that you know them well enough as a person to know how to make them happy. You know, if you can't decide, like, I don't know what their size is, I don't know if they've read this book, I don't know if they would actually use this, get them the thing that you know they love and that you know they will use.
37:27
That's such a good one. It's such a good one, because it's hard to get yourself. I keep running into this problem, because my brother and my mother are both chefs, so they'll come home and gift me these really expensive ones, like, for example, the Momofuku Chili Crisp. And then I'll be like, well, I'm fully addicted to that, I need it on all of my meals all the time. And then I go to buy it, and I'm like, $18 for this tiny can? No, I can't.
37:51
That's really cool.
38:13
It is. That's actually my favorite topping.
38:14
You can put it on everything.
38:18
Have you tried the Fly by Jing?
38:21
No.
38:22
Oh, also really good. It's the Szechuan chili one. Really yummy. Yeah.
38:23
This is great. God, I just. I feel like I literally recommended snake oil.
36:30
And everyone's like, oh, Mike.
38:30
Yes. Thank you. Now I'm hungry.
38:31
Okay, well, that's our show for today. We'll be back in the new year. Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, you can write to us at uncannyvalley@wired.com. Today's show was produced by Kiana Mogadam. Amar Lal at Macro Sound mixed this episode. Jordan Bell is our executive producer. Condé Nast's head of global audio is Chris Bannon.
38:39
From PRX.
39:24