Bruce Schneier discusses his book 'Rewiring Democracy' and explores how AI can enhance democratic processes across elections, legislation, government administration, courts, and citizen engagement. The conversation covers both positive applications of AI in democracy worldwide and addresses concerns about power concentration, job displacement, and the need for public AI alternatives to corporate models.
- AI is a power-enhancing technology that amplifies existing intentions - it can make democracy better for those who support it, or worse for those who oppose it
- The dominance of large tech corporations in AI is not inevitable but rather a result of market choices and policy decisions that could be changed
- Public AI models like Switzerland's demonstrate that competitive AI can be built without corporate funding or profit motives
- AI's impact on democracy extends far beyond deepfakes to include practical applications in elections, legislation, government administration, courts, and citizen engagement
- The disruption from AI will be comparable to the industrial revolution, particularly affecting high-skilled professions and requiring fundamental changes to how society structures work and value
"AI is a power enhancing technology. If you like democracy, AI will help you make democracy better. If you hate democracy, AI will help you make democracy worse. It doesn't have an intentional stance, it takes the stance of the people who wield it."
"When you say, is it good enough? The question is, compared to what?"
"This tech does not have to be concentrated in the hands of, you know, five monopolies. In the United States, we just chose it that way."
"It's not that AI can do your job, it's that AI can convince your boss that it can do your job."
"AI doesn't cause the problems, but AI takes our existing problems and makes them worse."
Welcome to the Practical AI Podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm. Now onto the show.
0:04
Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How you doing, Chris?
0:48
Hey, doing great today, Daniel. How's it going?
1:03
It's going good. You know, lots of interesting things to talk about as we head into the new year, and especially, you know, a lot of people thinking about how technology, and AI especially, is impacting both our daily lives and geopolitics and all sorts of things. And we're really privileged today to have with us Bruce Schneier to talk about some of these things. Bruce is a fellow at the Berkman Klein Center for Internet & Society at Harvard University, and we'll also be discussing a little bit of his new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Welcome, Bruce. Great to have you on the show.
1:06
Thanks for having me.
1:49
Yeah, of course, as I mentioned, a lot of these topics are on people's minds going into this year. Just, you know, looking at things that are happening in the news, how AI is factoring into those, how there are various disruptive things happening across our world, how AI might factor into those. Maybe you could just open us up by setting the stage for why this combination of things was important for you to think deeply about and write about, this intersection of AI, democracy, citizenship.
1:50
You know, I've been writing about AI and technology for a while, and it felt important to talk about AI not just in a corporate context or a financial context, but in a democracy context. I mean, AI is going to affect kind of every aspect of society, because society is about people, and AI is in some ways artificial people, you know, of varying quality and capability. We can think about them in terms of companies and consumers and workers, but we also think of them in terms of citizens. And I wanted to look at AI and democracy, how the tools interact, how AI will affect democracy. My co-author is Nathan Sanders. He and I have been writing about AI and democracy, and someone very smart once told me that you should think about writing a book when you start having book-length ideas. And when our essays sort of turned into something more in our heads, we thought about writing a book, because there's a lot going on here. I mean, you know, everyone thinks about deepfakes and they stop. But to me that is the least interesting thing about AI and democracy. There's so much more that's interesting. We tried to cover all of that in the book, and I think we did a good job. It was a lot of fun to write.
2:29
It feels like we're at a very particular point, in that there are several key elements coming together in all of our lives right now. We obviously have AI disrupting lives, changing the way we live and work and what security means. But we also have these geopolitical events happening in the world in terms of what democracy is, both here in the United States and abroad. I'm curious: as you're looking at all these big world events that are impacting everybody on the planet, how do you see them coming together in the big picture? It feels like your book is very relevant right now. Can you talk a little bit about the book in the context of all these things that are happening in our lives and the news right now?
3:53
Yeah, so it's interesting. There's a lot the book doesn't say about what's happening in the news. The book is really about how AI can make democracy better. Now, AI in my mind is a power-enhancing technology. If you like democracy, AI will help you make democracy better. If you hate democracy, AI will help you make democracy worse. It doesn't have an intentional stance; it takes the stance of the people who wield it. Now, I want to start by laying out the breadth of what I'm talking about. So again, more than deepfakes. The book is divided into five basic parts. In the first part we look at AI and elections. That's everything from authorized AI avatars that are used in Japan and Brazil and other countries to interact with voters, to AI being used for different aspects of campaigning, setting up websites, doing messaging, AI in polling, AI helping with get-out-the-vote door-to-door knocking, sort of all of AI in politics. The second part is AI and legislation, how AI is helping write, amend, debate, and pass laws. And this includes a French AI model that lets legislators write better law, to a Chilean model that looks at legal interactions, sort of all the different parts of AI in legislating. In part three, we talk about government administration, ways that we can use AI to make government more responsive. Now, sure, Elon Musk can use AI to make government less responsive if he wants. But there are ways to use AI to make government more responsive: to figure out benefits, to audit contracts or compliance documents, to help the Patent Office look for prior art, all sorts of things. Part four is AI in the court system, different ways AI can make the courts more efficient. And this ranges from Brazil using AI to help schedule judges and cases to make their courts run more efficiently, to judges using AI to maybe query the plain meaning of a term. Our fifth part is citizens.
How AI can help citizens organize, make their voices heard, advocate, figure out who to vote for, who to support, sort of be better citizens. So that's the breadth. We have examples from all over the world. This is not a US book. This is very much a "here are cool things happening in so many different countries," and we're largely looking at the good things. We don't talk about how AI can be used to save democracy or destroy democracy. This is really: if you have a democracy and you like it, here are ways AI will change what's going on, some of them positive, some of them negative. But it really is AI in a functioning democracy. We wrote it mostly in 2024 and then into 2025. So, you know, it's not really about what's going on in the United States right now. It's more about democracy in general.
4:42
Yeah, I really like how you're framing this. Number one, it looks at this across these different areas or sectors or geographies, what have you. And I love what you said: AI is more than deepfakes. There are two elements that this triggers in my mind. One is that AI itself is quite a broad term and covers many things. And secondly, the ways that it can be applied might, to your point, be positive or negative, or there might be security-related issues. If we tackle the first piece of that: some people might hear "AI as applied within the courts or within legislation" and think about their own personal experiences interacting with what is most visible as AI, these general-purpose chat systems, and, understanding that they make mistakes and all of this stuff, be worried: well, is AI good enough? Is it a destabilizing force for those systems? Is it a stabilizing force? And is that even the type of AI we're talking about? Do you have any thoughts for those people out there who are coming at it from that perspective of generative AI only, wondering whether it's good enough for these kinds of contexts?
8:12
It depends what kind of contexts there are, right? So people's most common interactions with AI are probably their mapping app on their phone, Google Maps or Apple Maps, and the algorithmic feed of their social media. Those are probably the two places people interact with AI the most. Notice most people don't recognize those as AI. And notice that neither of those is a chatbot, right? Neither of those is producing text. They are both predictive AI: which video will keep you on the platform longer? Is turning left or right likely to get you to your destination faster? When people look at chatbots, I think largely their opinions were formed at a moment in time, probably two years ago. Maybe they used it once and it made a mistake and they said, this thing is stupid. These technologies very broadly are changing all the time. When you say, is it good enough? The question is, compared to what? Here's a non-democracy application. I was at a conference in Toronto about a month ago, about medical uses of AI, and there was an experiment with AI in an emergency room. So in an emergency room in Canada, and it might be true in the US, I don't know, after a case comes in, a bleeding person, leg severed, whatever it is, they do whatever they do, and then they have to write up what they did and pass that on with the patient to wherever they're going next, whatever part of the hospital. Turns out doctors are terrible at this. They leave stuff out, they make mistakes, they forget things, they do an awful job. So the hospital experimented with having an AI that passively listened in the emergency room to all of the noise and the chaos and everyone screaming and talking and saying things, and wrote up this after-event report. It's probably got a name. And then the doctors would look at it, they would correct it, they would approve it, and it would go on. It was orders of magnitude better. The doctors loved it. And they would say, oh my God, I forgot that we did do that.
It made fewer mistakes. So there's an example where the AI is way better than a human. There are going to be other examples where the human is much better. You probably don't want an AI as a doctor, because a human doctor is better. But again, compared to what? Fast forward to some parts of this planet where there are no doctors, and your choice isn't an AI doctor or a human doctor. Your choice is an AI doctor or nobody. There you might say, well, give me the AI. It's going to make mistakes, but, you know, my brother is even worse. He'll make more mistakes. He won't know what to do at all. So it's very context-dependent and depends on what you're doing. The AI that's feeding you your TikTok feed, I mean, who cares if it makes a mistake? If it's an AI that's, you know, putting policemen on the street, yeah, mistakes matter more there. So there's never one answer to that. It always depends.
9:41
It seems to me that those are great examples. And tying back to the book, you make the point that democracy can be understood as an information system, and AI is operating on that system. As you just pointed out, these capabilities, agents and chat capabilities, can capture what's happened much better than humans will and not miss things. As you're looking at things like free speech, how would you take that emergency-room monitoring and apply it, with AI agents, to the context of free speech? How does it change it? How does it amplify or tamp down on it? What does it mean for us?
12:54
I don't know if it affects it that much, basically because it's already so bad. We're already living in a country where money equals speech, and the more money you have, the more speech you have, the more power you have. The social media algorithms, even pre-AI, very much affected how you were heard and what you heard. I don't know if AI makes much of a difference. Astroturfing, the notion that a company is going to fake a grassroots movement, is way older than AI. I mean, they can use AI to do that, but that is not an AI problem. A lot of times in our book we come across this: we come across problems that AI exacerbates but doesn't cause. In all those cases, we know the solutions. They're not hard technically; they're just hard politically. Again, depending on the country, you will do different things. So I'm going to pull an example from Germany. Germany has a lot of political parties, and it's actually kind of confusing to know what they stand for. And for decades they have had a system where some government agency would summarize the political parties, and voters can go to this nonpartisan voter guide and figure out what the parties stand for. Last year they experimented with a chatbot to do the same thing. So instead of going to a static web page, you would have an interactive conversation with an AI that would tell you kind of what the parties stood for and likely who you'd support. Younger voters liked that. Now, there's something where it's not going to make mistakes; it's working on a very confined data set. We're learning that if you constrain these systems, they don't go wildly off script, because the script is very narrow. So we are now seeing that, you know, you don't want a massive AI model. You want an AI that's a good travel agent, right? Or a good investment counselor, or a good something else. So this is all changing. The hallucinations, the mistakes they make, we talk about mistakes and what they mean.
And again, it always depends: compared to what? You must ask that again and again, compared to what? Right? AIs aren't great drivers. Humans are terrible drivers, because we drink alcohol and we get tired, and AIs don't.
13:40
Well, friends, Perplexity builds AI that can answer almost any question. Miro powers visual collaboration for millions. And Mixpanel processes billions of analytics events every single day. And none of them use engineers to update their marketing websites. Think about that for a second. The companies pushing the boundaries of what software can do decided the smartest move for their dot-com was to get engineering out of the loop entirely. Well, that's Framer. It's a website builder that works just like your team's favorite design tool: real-time collaboration, a robust CMS built for SEO, integrated A/B testing, the works. Changes go live in seconds with one click. No ticket, no sprint planning, no "yeah, we'll get to it." And this isn't some startup compromise. We're talking about enterprise-grade security, premium hosting, 99.99% uptime SLAs. That's four nines. The infrastructure is serious. The workflow is just fast. Learn how you can get more out of your dot-com from a Framer specialist, or get started building for free today at framer.com/PracticalAI for 30% off a Framer Pro annual plan. That's framer.com/PracticalAI for thirty percent off. framer.com/PracticalAI. Rules and restrictions may apply.
16:23
Well, Bruce, one of the things that I think about immediately when I hear words like democracy or citizenship alongside AI is questions of power. You even, I think, talked about a force multiplier or power multiplier. And AI seems to have this tendency to in some ways centralize power, in the sense that if you have the data centers, if you have access to the larger GPU cards, if you have a certain scale, you can maybe do things others can't. Do you think that's an inevitability of the technology? And the other element I'm thinking of here is that the drivers of research on the AI side are really the corporations, right? The larger tech companies are driving cutting-edge research, versus traditionally the academic side. Not that the academic side is not doing good research, but you see a lot of this good, cutting-edge research coming out of industry, maybe where there's more access to compute or other things like that. So any thoughts on this dynamic of the centralization of power in relation to AI, and how that affects the question of democracy and citizenship, one way or the other?
17:41
So it's 100% not a function of the tech. It's a function of the way we choose to use the tech. It's a function of the market. This tech does not have to be concentrated in the hands of, you know, five monopolies. In the United States, we just chose it that way. And the monopolies are powerful. And of course in the United States, money equals policy. If you're rich, you get the policy you want. So that is being perpetuated, but it's changing naturally. DeepSeek taught us that you don't need the cutting-edge chips and all the money to make a competitive core model. In the book we advocate something called public AI, which is AI models not built by corporations, not built on the profit motive. In our book it's largely theoretical; we talk about how this could be done. The neat thing is, a few months ago it happened. Switzerland, ETH Zurich, their supercomputing center, funding from the government, another organization, I forget, produced a core model, Apertus. It was not corporate funded at all, not built on the profit motive, no illegally stolen training material, no poorly paid third-world labor for fine-tuning. It is free to use; you can go online and use it right now. It is competitive with the best models of last year. It's a little bit behind, but you're not going to notice it. And here's an example of doing it much cheaper, and my guess is we're gonna have 20 of those in a year or so. I think the dominance of the big corporations, these hundreds of millions of dollars for core models, we're gonna laugh at that in a few years. It's turning out to be much cheaper. You don't need to spend all this money, all this compute; you can be smarter. And especially, we're going to need models that are more specific. We're going to need a model that is sort of a good physics teacher. We're going to need a model that is a good restaurant chooser for me. Right?
You know, something that will be, you know, my agent. And, you know, my butler model is going to call any of these dozen or two dozen specific models, whichever ones we want. And in this world, the Claudes and the GPTs and all these massive models become archaic. I don't know if this is true; that's my guess. But there's nothing in the tech. These are corporate decisions. Just like the models don't have to be overconfident, they don't have to be obsequious. They can say, I don't know. The tech allows them to say, I don't know. The tech doesn't require them to produce, you know, child porn on demand. Those are choices by corporations. And, you know, we can make other choices. It's hard, because we're living in a world where corporations run the planet, but it is technically possible.
19:00
I'm curious. I think that's a really interesting idea. And I think we have a bias where we automatically think of the good option as being open source. But open source and open weight models are still, you know, trained on corporate dollars and trained on corporate information access, as you just pointed out. And so, as you have planted the seed in my own mind, what's playing in the back of my mind is: how would such an ecosystem come to fruition, especially in the political climate that we have right now? There are many places in the world, but as we sit here in the United States, there are forces of influence being applied in all sorts of different ways. How would you create such an ecosystem to equal what the Swiss have done, in the United States or a similar nation that doesn't get those fingers of power applied to them? It may not be corporate dollars, but there are so many other ways. How do you envision that coming about? I think it's a wonderful dream, but I will confess I'm struggling to see how we get from here to there.
22:07
Yeah, the US doesn't do this. Of course not. We can't even fund a government weather service anymore. The US is done. Just assume that the US government will just decay and not do any of this. It probably couldn't have before, just because we believe corporations should do all the things. Other countries don't have that bias. But you in the United States could use the Swiss model, right? So, I mean, the ecosystem is going to come from the ground up. I know people who are using DeepSeek on their phones. It's not fast, but it's on their phones. So I think the environmental concerns will largely disappear as these models get smaller and easier to run. We saw this with search. Search used to be environmentally expensive. Now it's very cheap. I think the US will be dominated by the tech giants, by the monopolies. That's not going to go away. But other countries are trying to divest themselves. Europe's trying to build an entire tech stack. Singapore is working on a public model, because Southeast Asian languages get short shrift in these American models. They have a model called SEA-LION, Southeast Asian Languages in One Network, something that is optimized for their region, and that's what they're doing. France built a model, I mentioned it before, that is optimized on French law. Taiwan's having a problem because the models in Chinese largely are trained on Chinese data, which is often translated from Russian, and they use the wrong words for democracy in these models. So there's a lot of politics here. You go on Hugging Face right now, there are two and a half million publicly available models. Now, a lot of them are people personalizing Llama, and a lot of them are small and toy models. But there's good stuff there, and that's going to continue. So to me, the hope is this happens from the ground up, that as this becomes cheaper, more organizations can do this. Switzerland is not a big country. I am spending the year at the University of Toronto; I'm normally at Harvard. Canada could do this.
They pledged a couple of billion for AI infrastructure. They could build their own model. OpenAI is offering to house an instance of their model here. Say no to that. You could build a Canadian model. You don't have to rely on Llama or Claude or any of the US models. So I don't know, but my hope is this happens naturally as things get cheaper. The normal way I interact with these models is Perplexity, and that just gives me a choice of models, seven or eight I can choose from. I can imagine there being a hundred I can choose from. I can imagine there being a Perplexity scheduler that looks at my query and decides which model is best for me. I mean, there are some I've paid for. We can make up how this works. Apple, you know, is trying to figure out how to put models on people's devices. I mean, they are the privacy-preserving one of the big tech monopolies, right? Sort of unique among all of them. They don't make money spying on you; they make money selling you overpriced electronics. So they want something for that overpriced electronics to do, and putting the AI model on the object will be that. So I think a lot of things are happening. We just had a few years where only a few could do it because it was very expensive, and I think that is naturally going to change.
23:26
And I'm wondering, from your perspective, as you've seen the industry evolve: we talked about one level of evolution, which is the model, which is evolving in the ways that you've mentioned, and different types of models are coming out and that sort of thing. But there's this element outside of the model, as we've seen these agentic systems develop, which is all the things around the periphery of the model. That could be data sources, which brings up a kind of access-to-data element. It could be other systems, right? Like, I want to analyze my tax return, and so I need access to some, you know, maybe IRS system or something like that. And then there are interactions with non-AI computer code or rule-based systems that are tied into these agentic things. How do you see that element developing? Because there's this infrastructure or access side of things, where if you have access to a good model, it might not mean you have access to the right systems, which are kind of their own force multiplier for the model.
27:22
So I think there's a lot here. I mean, you know, there's a lot of software between what you type in the chat window and what the model receives, and a lot more software between what the model produces and what you see. And all of that middleware is where alignment rules. I mean, a lot of things get applied there, where the system figures out what the person wants. You know, AI doesn't get math wrong because the math is grabbed outside the AI and just done, so the AI doesn't mess that up. So I think there's gonna be a lot there. Access really feels important here. I think we're going to have haves and have-nots: people who have access to good models and people who don't. But, you know, remember that a lot of AI is going to be thrust on people. They're not going to be choosing to use it. I mean, just like when you go on Facebook, you don't choose to use AI, you're forced to use AI. Microsoft is doing its best to force you to use AI whenever you use a Microsoft app. Google, you know, when I do a Google search, I get an AI answer whether I want it or not. There's no way to turn it off. And, you know, lots of AI is predictive AI. We are seeing health insurers now using AI to approve or deny claims. I mean, I didn't choose that, I didn't decide that, but that is happening. I think there's a complex ecosystem. I think the AI interacting with the non-AI parts of the system is really important. Who has access to what is really important. I think AI in medicine will be really interesting. Largely you won't get to touch it; that'll be researchers using AI. But we had an AI win a Nobel Prize last year, basically for protein folding, right? Not a chatbot. This is an AI that is really solving complex math problems. And, you know, this has to do with the fact that an AI can keep more variables in mind than a human. It can do more complex things. It might not be better, but it just might, you know, do more complex things.
Other examples: AI is now looking at your email and finding spam. Right now, it turns out, trained humans are way better at that than AIs. But if you get a million emails a second, a trained human is just not an option. So there is another sort of system where AI is going to work even though it's not better than a person, and that only works in conjunction with your email program. I think we're going to see AI in various forms integrated in all sorts of systems in all sorts of ways. But it's really important, I think, for anybody listening to think of it as more than chatbots. Chatbots make the news, but it's the non-chat AI that, I think, you know, does more things.
28:43
So here's the thing about AI strategy: everyone has one. Decks get presented, pilots get proposed, and then nothing ships. Meanwhile, 300 million AI tasks have already been automated through Zapier. Not talked-about automated, actually running right now.
31:53
That's the gap Zapier closes.
32:09
It's an AI orchestration platform that connects models like ChatGPT and Claude to the tools your team already uses, so you can use AI exactly where you need it. AI-powered workflows, autonomous agents, customer chatbots, whatever you're trying to build, you can orchestrate it with Zapier. And here's the part that might surprise you: Zapier is for everyone, tech expert, developer, or not. You don't need to know ML or AI, or be an engineer, to wire up workflows that actually work. That's kind of the point. Personally, what I like most about Zapier is that it's automation infrastructure. I don't have to babysit it. There's no worrying about uptime, there's no managing servers. I create Zaps as I find a need, test them, and then walk away and get back to doing what I do. That's the dream, right? So if you're tired of talking about AI and ready to put it to work for real, Zapier is how you break the hype cycle. Join millions of businesses transforming how they work with Zapier and AI. Get started for free today by visiting zapier.com/practicalai. That's Z-A-P-I-E-R dot com slash practicalai.
32:09
Well, Bruce, you talked in the book, and in the research as you developed it, about looking across what is happening in different things that intersect with democracy, whether that be the courts or government agencies or political campaigns, across the world. I'm wondering, on the positive side of things, in terms of the change we might be about to see within government, policy, democracy: what are some tangible examples that stand out to you? Either aspirational ones, or ones you've actually seen and researched, that could be that force multiplier on the positive side, the "I am pro-democracy, I'm going to use this as a multiplier" side. Maybe just a couple of standouts from your perspective?
33:17
I'm going to start with two examples. One is from Japan. So there was basically a kid, Takahiro Anno, and a couple of years ago he was running for governor of Tokyo. He's kind of a young engineer, and he comes in fifth out of 50. Crazy. And what he did was he built an AI avatar of himself that answered questions on YouTube. That's how he became known. I mean, a really neat idea of using AI to interact with the voters. That would be a weird anecdote in history, but last year he won an election, and he is now a member of the upper chamber of their legislature. He has a new political party called Team Mirai, which is like Team Future. And he is using AI to interact with his constituents. They can discuss legislation and priorities, the stuff is synthesized, he learns about it, he talks to his constituents. He is building tech tools for all of the Japanese parliament to use AI to interact with voters. It's an amazing story of AI being used to make democracy better. That's one. Second story, I'm gonna go to California. There is a group called CalMatters, and they are a political watchdog organization. You can find them on the Internet. What they do is they collect every public utterance of California elected officials: every floor speech, every campaign email, every tweet, everything. And they make it available, and you can search to find out what your politician is saying. Something they added last year was an AI feature called Tip Sheet. What the AI does is it goes through all that information, which also includes the voting records and who pays them their campaign contributions, and finds anomalies, things that are weird. But it doesn't publish them. It makes them available for human journalists to look at. So a journalist can go to this tip sheet. It's not available normally on the website. You can't go to it. I can't go to it. Journalists can, and they find things. The AI says, hey, look at this.
And then the human researches it and sees if there's a story there. A really great way that AI is assisting human journalism. All right, I'm actually going to do one more, a third. This one is from Brazil. Brazil is an incredibly litigious society, even more so than the United States. The country spends something like 1% of its GDP on litigation against the government. A couple of years ago, the courts started using AI not to make decisions, but to manage the courts: to assign judges to cases, to move documents along, to do all of that stuff. It turns out it made the courts much more efficient. It used to take two or three years to get judgments; now it's faster. The flip side is that the attorneys are using AI to file more cases. So cases are being dealt with more efficiently, but more cases are being filed. Still, that's a story of more justice, more democracy. So those are just three; I could do more. I mean, we mentioned Germany, France, Chile, Taiwan. There are lots of examples from around the world.
34:12
As I'm listening to you, those are very inspirational. I really like those examples. One of the things I'd like to do for a moment is step slightly outside of the book. Over your career, you've had... I think I mentioned to you in the pre-show communications that I had learned cryptography from you, and I think of you as someone who can provide great guidance, so I want to put you in the moment. We're looking at this world where AI is everywhere: we have so many agents and so many capabilities, and we've been talking about some of those. In a lot of cases AI is applied through coercion, through products where you may not want it but it's nonetheless there. In other cases, we have huge parts of populations, and this isn't just a US thing, this is a global thing, where you have the haves and have-nots. You have people who are rejecting AI. It's impacting jobs and therefore livelihoods. And you have the power concentration that we talked about earlier in the show. Look not at people like me and Dan, who are steeped in AI everything, all the time, but at people out there who are trying to navigate this really rapid evolution, and who may express that in various ways, maybe politically in certain ways. There's a lot of anger, a lot of resentment, a lot of shame, this notion that the world is leaving me behind. What can people in those situations do to start? How do they start thinking about AI if they want to move from where they're at, which feels like being left behind, to a world where they can still integrate into it even with these changes evolving? I think it's one of the big questions of the time. So if you want to take a swag at it, I'd love that.
37:48
So I won't be overly optimistic. This is going to be bad, okay? This is going to be on par with the industrial revolution. Careers are going to disappear, and it's going to be highly paid careers, all the apprenticeship professions: doctor, accountant, lawyer, architect, investment advisor. Those are all predicated on: you go to school, learn how to do the thing, become a junior doer of the thing, and work your way up to a senior doer of the thing. But if all the junior doers of the thing are AI, how does that pyramid even work anymore? Nobody knows. So I think this will be extraordinarily disruptive. And it'll hit jobs we don't expect. It'll be like: we only need 10% of the lawyers now, so what do all the law schools do? They're churning out a lot of lawyers. So I don't want to minimize this. I think this is really going to be disruptive. And that's why people who are actually thinking are thinking about ways to untether life from employment, whether it's something radical like universal basic income or some no-brainer like tying your health coverage to your citizenship rather than to your employment. A lot of things are going to have to change. What can individuals do? It depends who the individual is. We're going to have to really think about job retraining at a massive level. We might be living in a world where we just don't need people to work in the way that we used to. It used to be that you needed to work to survive; now maybe you don't. And this gets back to some kind of UBI, right? If the machines are doing all the work, why are the hundred billionaires in the country benefiting and not everybody else? So I don't know. These are questions way bigger than me, and I don't have a good handle on them. Some of it is that we don't know how much this will happen or how fast it'll happen.
I mean, right now, as Cory Doctorow points out, it's not that AI can do your job, it's that AI can convince your boss that it can do your job. So the AI gets you fired, your job gets done lousily, and now no one knows what to do. That's going to happen. So separate out the tech from the politics. We live in a world with huge income inequalities, in ways that make society unstable, and this is going to exacerbate that. So again, we're in a world where AI doesn't cause the problems, but AI takes our existing problems and makes them worse. We have to solve our existing problems, and then we have to get to climate change, right? These are the precursors to solving the actual planetary problems. It's nutty that we can't solve the humanity problems because we're stuck with democracy collapsing around the globe and income inequality running amok, making the ability to solve the other problems untenable.
39:38
Yeah. Throughout all of human history, civilization has been built around the basic premise that we measure human value by productivity in some form or fashion.
42:50
You know, but not really. That's very much a European, Protestant, post-industrial-revolution way of thinking. Okay, it is, right? There are lots of other ways we can do it. It's just that the money won, and we in the United States in the early 21st century can't conceive of anything else. But any attempt to go from here to anyplace else is going to be fought by the money. This is, I think, a much bigger but more inherent problem: you don't get change at the societal level, ever in history, without serious bloodshed, because those in power like power and want to keep power. I'm hoping it'll be different; I don't know.
43:01
I think that's what I was getting at: the notion that that has to change going forward, I guess.
43:50
Out of that spirit of what needs to change and what problems we need to address, let's hone in as we come to a close here. We have a lot of listeners who are builders, practitioners in the industry in one way or another, whether that be a software developer or an AI person or whoever. As we close out, how would you leave us thinking about the things we are building and the ways we are contributing to and shaping this industry, as we come into this new year and participate in this rapidly changing environment?
43:58
Well, we have a lot of power. Go back 15, 20 years: tech workers in general had a lot of power over the companies they worked for, because they could always leave and get a new job in 10 minutes. And so what they said, what they wanted to work on, what they refused to work on, mattered. That has changed, largely, in big tech. There have been a lot of layoffs at these tech companies, a lot of people looking for jobs. It is no longer a seller's market in the way it was. But it still is in AI: people who are AI researchers and AI engineers have a lot of power inside the corporations. And I want us to be more of a moral compass. I want us to say, no, we're not going to do this. If you remember, however many years ago, when Google employees staged a walkout over a project they did for the Department of Defense, that was kind of the peak of tech worker power.
44:43
Maven.
45:41
Yeah, Project Maven, that's right. So I want us to do that. I want us to be more involved in the effects of what we do. It's hard, and I know that. And this is a very general technology. I said in the beginning that this is a power-enhancing technology: the technology doesn't know what power it's enhancing. It doesn't have a moral compass. It will do more of what you tell it to do. You could be a good person or an evil person, and it'll do more of whatever it is. But these changes are coming. I think they're going to come both slowly and then quickly. Right now we're at kind of a plateau: the new models aren't much better than last year's models. We're not seeing improvement in ways that matter in these large language models. I think we're still seeing lots of improvement in the predictive models, the non-generative models. But there are all sorts of new paradigms being researched. It's unlikely that transformers are the pinnacle of AI architecture from now until the end of time; that seems implausible, right? So there will be other things. Let's try to make them benefit humanity rather than, you know, a bunch of white male tech billionaires in Silicon Valley.
45:42
Well, really appreciate you guiding us to those thoughts at the end here. This has been an extremely interesting discussion for me. Appreciate you putting in the work on this book, exploring these ideas, doing the research, and taking time out of that research to have this conversation with us. It's been a pleasure, and we really appreciate you joining.
47:03
It's no fun doing research if nobody reads it or listens to it or pays attention to it. So that's why we have these conversations. Yeah.
47:25
Well, thank you, Bruce. Look forward to having you back on the show in the future. Thank you very much.
47:33
Excellent. Thank you.
47:37
All right, that's our show for this week. If you haven't checked out our website, head to PracticalAI.FM, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show; check them out at predictionguard.com. Also thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
47:46