Is AI a Threat to Democracy? Bruce Schneier Explains What Comes Next
60 min
Feb 9, 2026

Summary
Bruce Schneier discusses AI's impact on democracy, arguing it is a power-magnifying technology that can either strengthen or undermine democratic institutions depending on how it's deployed. He outlines four key strategies: reforming the AI ecosystem to reduce monopolistic control, resisting harmful uses, responsibly deploying AI where beneficial, and renovating democracy itself to withstand AI's effects.
Insights
- AI is fundamentally a power-magnifying technology that amplifies whatever goals those in control want to achieve—whether democratic or authoritarian
- The concentration of AI power in a few Western tech monopolies is a capitalism problem, not an AI problem, and antitrust action is the most critical intervention needed
- AI's impact on different professions varies dramatically; it may democratize access to justice while concentrating power among elite attorneys, depending on specific application design
- The 'AI arms race' narrative between US and China is largely marketing used by tech monopolies to justify less regulation and more investment
- Humans making mistakes in systems is manageable through established processes (audits, checklists); AI making different types of mistakes requires new frameworks we're still developing
Trends
- AI-assisted government services expanding globally (voter guides in Germany, legislative drafting in France, court administration in Brazil)
- Automation of low-cost tasks creating arms races on both supply and demand sides (job applications, academic submissions, constituent comments)
- Shift toward AI as augmentation tool rather than replacement in high-stakes domains (medical diagnosis assistance, emergency room documentation)
- Growing political mobilization around AI regulation as a key democratic issue, with executive orders attempting to preempt state-level regulation
- Defenders gaining long-term advantage over attackers in cybersecurity through real-time patching and vulnerability detection, despite near-term attacker gains
- Emergence of smaller, specialized AI models (China's DeepSeek) challenging the Western tech monopoly through innovation under constraints
- Increasing use of AI for regulatory compliance and audit assistance (environmental compliance in China, tax audit suggestions)
- Public backlash against AI framed as job-destroying technology, with corporations potentially amplifying these narratives
- Blockchain's failure as a decentralization technology serving as a cautionary tale for AI hype cycles and unfulfilled promises
- Second-order effects of AI adoption becoming harder to predict as first-order changes interact across multiple sectors simultaneously
Topics
- AI and Democratic Governance
- Tech Monopoly Power and Antitrust
- AI Regulation and Policy
- AI in Government Administration
- AI in Legislative Processes
- AI in Judicial Systems
- AI and Job Displacement
- AI Augmentation vs. Replacement
- Trustworthiness and Transparency in AI
- AI in Cybersecurity
- AI Ethics and Design Choices
- Power Concentration and Technology
- AI in Healthcare and Medical Diagnosis
- AI-Generated Content and Misinformation
- Regulatory Compliance and AI
Companies
Google
Mentioned as example of AI being forced upon users through search results and Maps turn-by-turn directions
Anthropic
Referenced for threat report on criminals using AI to automate cyber attacks and exploit vulnerabilities
OpenAI
Implied reference through discussion of early ChatGPT being used to write legislation in Brazil
Infotech Research Group
Podcast sponsor offering AI strategy, disaster recovery, and vendor negotiation services for IT professionals
People
Bruce Schneier
Fellow at Harvard's Berkman Klein Center, security expert, and author of 'Rewiring Democracy' discussing AI's impact ...
Geoff Nielson
Host of Digital Disruption podcast conducting interview with Bruce Schneier on AI and democracy
Takahiro Anno
30-something Japanese software engineer elected to upper house of Diet using AI tools to engage constituents on policy
President Trump
Referenced for signing executive order attempting to ban state-level AI regulation
Quotes
"AI is fundamentally a power magnifying technology. It's the technology that replaces human cognition in many ways."
Bruce Schneier
"If the government's core things are arresting people and suppressing dissent, right? And being an autocracy, AI will help. If the government's things are democratic, AI will help there as well."
Bruce Schneier
"This isn't an AI problem. This is a capitalism problem. Yes, all the things that we need to do to reduce the power of the monopolists, we need to do."
Bruce Schneier
"The worst is the AI arms race. This metaphor of this arms race between US and China, that is complete and total bullshit. We're not the 1960s."
Bruce Schneier
"It's easy to predict the first-order changes. What's much harder are the second-order changes, and then when the first-order changes interact with each other to make further changes."
Bruce Schneier
Full Transcript
Hey everyone, I'm super excited to be sitting down with globally renowned technologist and security guru Bruce Schneier. He's a fellow at Harvard's Berkman Klein Center for Internet and Society and a New York Times best-selling author of 17 books, including Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. What I love about Bruce is that for over three decades he's been on a crusade to make sure technology works for the public interest and as a tool for the distribution, not the concentration, of power. I want to ask him if he sees AI as an existential threat to our democracy, or if there are reasons to be optimistic that it can actually make our society stronger. And what role do we play in keeping its risks in check and making sure that it's a force for good? Let's find out.

Well, thanks so much for being here today, Bruce. Really excited to jump into it with you. Maybe we can start by talking a little bit about AI and democracy, which I know is an area you've been leaning into recently. With that in mind, as you look out at the landscape of democracy and the ways AI can impact it, what do you see as the biggest risks and the biggest positives?

So I think the positives are going to be everywhere. I mean, AI is fundamentally a power-magnifying technology. It's the technology that replaces human cognition in many ways. And there are just so many areas where we don't have enough humans to do the thing properly. So when I wrote my book, Rewiring Democracy, I looked at five very different areas of democracy. I looked at AI in politics and all the ways the technology will be used in the political process, running for office.
AI in legislating, the writing and debating and passing of laws; AI in government administration, meaning all the things government does, like giving benefits, doing the stuff of what the laws say; AI in the court system; and then finally, AI and citizens. There are cool things everywhere, and there are risks everywhere. It's just too big a question to answer in general, because the technology is going to change so many things.

Thematically, it sounds like you have this sort of optimistic vision of it, if I can call it supercharging the public service, right? Making government more effective at its core responsibilities for its citizens. Is that a fair summation, or would you add some flair to that?

Well, it's not necessarily optimistic, right? You're right that AI will help government do its core things. If the government's core things are arresting people and suppressing dissent, right? And being an autocracy, AI will help. If the government's things are democratic, AI will help there as well. So the technology isn't, out of the box, going to help government be more democratic. It'll help the humans who are in charge of government do the things they want to do. Now, it's a little more complicated. By the end of our book, we talk about ways that we can help steer the technology towards more democracy versus less democracy. But really, think of AI as a technology that will empower individuals and groups to do whatever they want to do, better, faster, stronger, more intense.

So I want to talk about that steering piece, because, to use your words, it's power-magnifying. And that's the part that I sometimes get concerned about, and I think people following this get concerned about: if you've got these billionaires or oligarchs who own the platforms, or you've got people in power looking to accumulate more power.
How do we steer that? What do we have to get right to make sure that this is a tool for the greater good?

So primarily, right, this isn't an AI problem. This is a capitalism problem. So yes, all the things that we need to do to reduce the power of the monopolists, we need to do. This doesn't change that. And in a sense, it sort of exacerbates it, because now the powerful are getting even more powerful. And the fact that we're building society for the near-term financial benefit of a bunch of white male tech billionaires is just plain stupid. So yes, there are a lot of things we need to do there. Antitrust is the single most important thing we can do. There are lots of others.

At the end of our book, we list four things, and this is the first one: to reform the AI ecosystem. Right now, it is a few companies with all the power, and that's bad. Now, I think some of the reform will happen naturally as the cost of building these models goes down. I hope we talk about that later. That's the first thing. The second is to resist harmful uses of AI. Think about the things DOGE was trying to do to replace humans with poorly thought-out tech. There are a lot of bad applications. You need to resist those. The third is to responsibly use AI where it makes a good difference. There are lots of places today where it does. I have been in Canada this year; I'm visiting the University of Toronto for a year. And I saw a document a few months ago of hundreds of AI pilots in Canada that are designed to make democracy better, not worse. So we want to lean into those. And finally, we need to renovate democracy. A lot of the problems with AI and democracy are existing problems with democracy that AI exacerbates. So all the solutions we have to make democracy better in this century, we need to do that. Those are the four things that we think about in how to make this technology good for democracy.
So it's, right: to reform the AI ecosystem, to resist harmful uses, to responsibly use it where you can, and to renovate democracy to make it okay when AI affects every aspect of it.

So let's follow the positive side for a minute, because we've been dwelling a little bit on the negative.

And the media is full of the negative. The positive stories just don't get enough airplay.

I know. And it's a sad reality that we live in right now, where the outrage and the fear tend to be what takes the oxygen out of the room. So on that note, Bruce, you said you've seen a bunch of use cases that have excited you or given you that feeling of positivity. What are the ones that either filled you with the most awe or left the most meaningful impression?

Oh, wow. I can give a few different stories. Let me start with Germany. This is an easy one. Germany has a complex political system, lots of parties, and they have long had a nonpartisan voter guide, a document that voters can read to figure out what the parties stand for and who to vote for. Just this year, they have experimented with a chatbot version of that. So instead of reading static documents, you can engage in a conversation. It turns out to be really popular. It's really easy to do. It's accurate. It doesn't make stuff up, because it's kind of a narrow slice of expertise. That's just an easy way that AI is enabling citizens to understand their politics better.

The second story is from Japan. This is a wacky story. Takahiro Anno is a 30-something-year-old software engineer. A few years ago, he ran for governor of Tokyo. No political experience. What he did is he produced an avatar of himself that he put on YouTube, answering questions from voters in this two-and-a-half-week marathon. He came in fifth out of 50, which was crazy, just because he engaged AI to talk to voters. So that's a cool story.
But a few months ago, he got elected to the upper house of the Japanese Diet. So he is now a representative. His party is Team Mirai, Team Future. He is the only member of this party, and he is using AI tools, because when you're a party in Japan, you get money from the government. So he's using these tools to build AI systems for sensemaking, to allow his constituents to engage with draft legislation and policy ideas and to modify them in real time. Super cool. He's trying to figure out what politics looks like in the AI era. And he's an engineer, not a politician, so he might actually do it.

For my third story, let's go to South America. Brazil. Brazil is even more litigious than the United States, by a lot. They spend like one percent of their GDP on lawsuits. The government spends an additional one percent of its GDP paying out on lawsuits against the government. So a couple of years ago, the courts started using AI not to make decisions, but to do all the administrative work around the courts, assigning attorneys to cases and just doing all that stuff. It turned out that decreased the backlog of cases dramatically. You can see the graph. So here's AI making the courts more efficient. There is a little revenge part to that story: attorneys are using AI to file cases, so the number of cases is going up. I think this is actually a good democracy story, because we want there to be more cases, more decisions. That's good for democracy. So AI increasing the efficiency of both sides feels like a good thing.

And I could go on. There are stories all over the planet. Now, largely, they're not generative AI stories. They're not chatbot stories. Some are, but a lot aren't. Actually, I'm going to do one more story, a US story. CalMatters is a watchdog organization in California.
They basically collect every public utterance by California elected officials: every floor speech, campaign email, tweet, everything, along with their voting record and who funds them. And you can go on that site right now and look up any public official you want. So they've added AI in an interesting way. They have a program for journalists, just for journalists, called Tip Sheet. What this is, is the AI looks through all of that data and finds anomalies, things that are weird, right? Voting records that change with contributions, rhetoric that doesn't match what they do. But it doesn't publish that. It makes it available to journalists. Journalists can look at the items on this tip sheet, decide themselves if something is worth pursuing, and then, if it is, do the human work of journalism and write the story. It's a really good example of AI assisting a human process and not taking it over. All right, I can keep going for the rest of the hour, but you should probably ask another question.

Well, thank you for sharing those. I love the breadth of them, how they use the technology in very different ways, and how they highlight some of the use cases there. That last one was really interesting to me because, as you said, it empowers people. It's easy to imagine a world where the technology does it itself, where it just says, well, I'm publishing the article, or I'm sharing this. But it deliberately doesn't. And so I'm curious.

And that's what I want. I want AI to assist the humans. So France built an AI model to assist legislators in writing legislation. Right? That sounds great. Instead of lobbyists writing legislation, this is a more nonpartisan tool. Will it make mistakes? Of course. But as long as the draft bill goes into the human process of reviewing and submitting and debating and voting, that's great. There is a story from Brazil.
The first known law written entirely by AI was in 2023, in the city of Porto Alegre. A city council person wanted to write a bill about water meters. Super mundane. Fired up a chatbot, in the early months of ChatGPT, put in a prompt, got legislative text, and submitted it as a bill. It was debated, voted on, and it passed unchanged. So again, right, the human is using an AI as an assistant. The result goes into a human process. This is great. I think this is a wonderful use of this assistive tech.

If you work in IT, Infotech Research Group is a name you need to know. No matter what your needs are, Infotech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Infotech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.

There's this idea that, well, in six months or 12 months or 18 months, it will be at that level where you don't necessarily need a human reviewing it. And I see your visual reaction to that. So I wanted to ask you, is that coming? Is keeping humans in the loop a technological decision? Is it a human and societal decision? How is this going to unfold in the next couple of years?

It depends a lot on the application. There's no single answer. For example, a trained human is better at detecting email spam than an AI, but you get a million emails a second. It doesn't matter how good the human is, right? The AI is going to be used because it is faster. We know that people are better at writing fraud emails to get you to click on a link than an AI is. But an AI is going to send those phishing emails out at a much greater scale than a human can. So it isn't just better. Right now, we know there are judges that are using AI to help draft decisions. Is it good enough for that?
I don't know, but only the judge has to decide. And as long as the judge has a process of reviewing it, I'm fine with it. The judge is going to have a legal clerk. And are the results biased? Of course they are. I guarantee you that legal clerk is biased. So the human process is also biased. So there are a lot of applications where just the user has to trust it. If a candidate wants help writing a speech, or if a legislator wants to use a grammar checker, they just get to decide.

There are some applications where we all have to decide. There have been AI systems used to make parole recommendations or bail recommendations that have been shown to be horribly racist. There, society has to decide. Or imagine AI being used for benefits. Right now, insurers are using AIs to look at claims, and they do a lot of denials of claims. It's not the AI's fault. That's another blame-capitalism problem. That's the problem of an economic system that incents insurance companies to deny your claim. AI is just doing it faster. But if we want to use it for some kind of benefits decision that a government makes, whether it's broadly trusted matters a lot here.

But again, it's design. In our book, we talk about this possibility. The example we use is Social Security disability benefits. There's a two-year backlog for approval. People die waiting for their benefits. AI can help here. But AI could also make mistakes and be harmful. So here's a proposition: the AI is only allowed to say yes. The AI looks at these applications and it approves the easy ones. That's all it does. The humans who are still there, their job is now to look at the hard ones, to look at the ones that are probably no. The AI is not allowed to say no. It's only allowed to say yes.
So here I've built a design in where I'm okay with maybe a higher error rate, because it's worth it to clear the backlog, but I'm ensuring that nobody is inadvertently penalized because we're using AI. And all applications have to be thought about individually in that way. Is it good enough? Is it accurate enough, compared to what? AI is being used to serve you ads on social media. Is it good? Who cares? It's good enough.

You mentioned earlier this notion of AI being a tool that can increase speed and increase scale. And the implication of that is that there are a lot of areas, and you've talked about benefits, you've talked about the judiciary, where the price of these services, or the effort to deliver them, falls almost to zero. If you look at the current state and what's suddenly unlocked by this, aside from those two, are there any other key areas where you see this price falling and new societal value being unlocked?

Yeah, it's interesting. And a price falling is somebody's good and somebody's bad. We see this in the various arms races. Lots of examples. People are using AI to write their resumes and apply for jobs, which means they're applying for a lot more jobs. The cost of applying for a job drops to close to zero. You can apply for many more jobs. On the receiving end, employers are now inundated with resumes, so they're forced to use AI to process them. This is true for submissions to academic journals. AIs are writing them, submissions are up, and the journals don't know what to do. They probably have to use AI tools to screen articles. I was in Washington, D.C. a couple of weeks ago talking to legislative staffers who are seeing an uptick in AI-written comments from constituents or from astroturf campaigns.
And they are looking at AI tools to sort them all. So the explosion on one end forces a response on the other. College applications, same way. I know someone who runs a literary journal who basically turned off submissions, because he just kept getting AI-written short stories, and they were terrible. And he had to use humans to read them and decide they were terrible. He didn't know what to do. So there are a bunch of examples here where the automation, as you say, bringing the cost of doing the thing down, making it cheaper, is going to force some response on the other end. And what the new equilibrium is, we don't know.

For job applications, people are using not only AI tools to write their resumes and submit applications; when they're on Zoom job interviews, there is a product you can buy that will listen to the conversation and suggest things you should say. It's an overlay on your screen. I mean, this is like super dystopian. So people are going to job interviews and faking their way through them. What does the employer do now? Do they say, look, we're only accepting job applications from trusted introducers? If you want to apply for the job, you need to know somebody who's already at this company and apply through them, so we know you're a legit human being. Is there another answer? I don't know yet. But a bunch of these things are happening. The Brazil example is really a story of that, right? A lot more court cases are being filed. Now the courts need to use more AI to deal with them.

I want to come back to that job application piece, because AI and work and jobs, this is one of the key intersections where there are a lot of questions and a lot of concern. And it got me thinking: if someone is willing to use this tool for the job application, what's to stop them from using it on the job? I mean, we have people faking their jobs until they get caught.
So is this a problem? Is this a remote work problem? Is it that we don't really know how to supervise people in an AI era?

I'm not sure what the answer is, but yes.

Let me maybe ask a more controversial question, which is: if somebody is doing this, if they're in meetings remotely like this and they're using prompts to figure out what they're going to say next, and maybe AI is responding to their emails, is it bad? Like, is it necessarily bad? And what does it mean for the employee-employer relationship if suddenly your employees are hybrid AI and human?

I mean, it's bad to the extent that it's fraud. If you claim that you're doing the work and you're not, that's fraud. That's bad. That's probably actually illegal. You are lying about your qualifications. You're lying about what you're doing. If you were an employer, it's like, what the hell? If I wanted an AI to do this work, I would have got an AI. I hired a human because I wanted a human. You claimed you were human and qualified, and you're not. Now, certainly in the future, we can imagine people doing this and being sort of legit. And that's okay. But it's the fraud that you should be worried about.

So in your mind, what is the right way to use AI at work?

It depends on your job. There's no single answer to any of this. For you as a podcaster, what's the right way for you to use it? I don't know. Maybe it is to put in my profile and put in the book and say, suggest some questions. Maybe it is, at the end, to do audio editing. I'm not sure what the capabilities are, but there's no single answer here. AIs are trying to replace fast food workers. So when you go to the drive-thru, instead of talking to a human, you talk to an AI. It's mediocre at best. You'd think that would be easy. Turns out it's not. Drafting legislation, AI is really good at. Being an attorney depends on the task.
A lot depends on the task. There's someone in the US who collects all the examples of AI making mistakes in legal documents. It turns out most of them are from people who represent themselves. So that's not an AI problem, that's an access-to-justice problem. But an AI doing a draft of a legal brief might be great.

There's sort of an implication there that I want to follow up on, which is this sense, again coming back to this theme of concentration of power, that the best are going to get better with AI faster than the average, or call it the C player.

Again, this depends. This is not at all a given. So when I think about this in the context of attorneys: there's an enormous number of startups doing AI legal services. There's this huge amount of money in court cases. And it's AIs that will look through the millions of discovery documents that happen in these big cases, that will search for legal precedent. I know of a system that watches jurors' facial expressions to see when they're paying attention. So the broad question to ask is, will these on the whole make the best attorneys better, or will they on the whole make the average attorneys better? To the extent they do the former, it's more concentration of wealth and power, bad for democracy. To the extent they do the latter, we have broader access to justice, and that's good for democracy. Different ones go different ways. AI programming assistants are really interesting. They seem to help the better programmers more than the average programmer, because a better programmer can make better use of the tool. AIs to assist customer service representatives are the exact opposite: the best don't need them, but the average get a lot better using the assistant tool. It'll be very specific to the application.

Well, you kind of hit the nail on the head about why I was asking the question: is this going to reinforce a concentration of power?
Or is it going to democratize it? And it sounds like the answer is, it's complicated.

It's super complicated. And it might change. These technologies are moving so fast that what's true today might not be true in six months or a year. We've seen that with programming assistants. They've improved so much, but improved in certain ways and not in other ways. So my caution is, don't assume that what you knew a year ago is true today. Now, I think about AI, and we talk about this in the book, doing regulatory compliance. There's an experiment that was done, weirdly, in China, having AIs look at environmental compliance documents and suggest to the human auditors which environmental plans to audit, who to go visit. Now, compared to what? Probably compared to random, or compared to somebody looking at it for 30 seconds. It's a really good story of AI assisting a human process. I would love to see AI audit everybody's tax return. Right? Not to issue final rulings, but to suggest to the humans, you should actually audit these. Right now we have computers doing that, but they're very primitive. AI can be much more sophisticated. Now, of course, both of these applications assume a government that, one, wants environmental compliance and, two, wants people to pay their taxes. So it doesn't solve a political problem.

And there's another problem that's tied up in this as well, which we talked about earlier: the trustworthiness of the AI in the minds of both the users and whoever is being impacted by its decisions. So I wanted to ask you, how do we make it trustworthy? Or is that the wrong framing? Is this more a mindset piece?

It is the right framing. It's a very hard question. This might be my next book. That's why I'm laughing. And it depends a lot on the application. Is it making a decision? An AI that's self-driving a car is in a very different trust situation than an AI that is recommending what restaurant I go to tonight.
The cost of getting it wrong is very different. AI that is generating text is different than AI that is predicting whether a chest X-ray is cancerous or not. In a lot of those predictive systems, we just look at the output. We can measure how good an AI system is at diagnosing chest X-rays. We can compare it to human radiologists. We can decide which is better. AI driving cars, we have records of crashes and near misses. AI producing text, it kind of depends on what it's for. I mean, there's an AI toy that was pulled from the market last month because it was recommending BDSM to children. We said, that's bad. We can imagine not wanting to do that. But if it's in an adult process that humans vet, maybe some mistakes are okay.

What's interesting is, humans make mistakes all the time. And we have centuries, millennia, of systems to deal with human mistakes. Double-entry bookkeeping. Checklists of surgical instruments to make sure that nothing is left in your body after we sew it up. Audits. All of these things deal with the inevitable human mistakes. If AIs made human-like mistakes, we'd be good. We know how to deal with that. It's the fact that AIs make different sorts of mistakes that is problematic. And we're still trying to figure out how AIs make mistakes, what they look like, how to ameliorate them, how to detect them, maybe how to prevent them. We're still learning. It's all very complicated. But everything here depends on the application. The AI system that's going to clean up this audio is very different than the AI system that is going to try to write legislation that gets passed in the legislature.

So with that in mind, one of the themes you talked about earlier is the responsibility on each of us to encourage the right types of AI applications and to resist, if I can call it, the wrong or harmful types of AI applications.

Right. This is me in a democracy saying it. A lot of AI gets forced upon you.
When AI is being used to evaluate your medical benefits, or when you do a Google search and there's an AI result on top of the screen, whether you want it or not, a lot of AI is being shoved on us. And we have no choice about whether to use it. We use Google Maps. AI is producing those turn-by-turn directions. Sometimes we do get a choice. And when I think about AI in a democratic sense, it's mostly in a market sense. That's where I think we need to really act collectively as citizens to try to ensure that they are doing the right thing. Well, I'm glad you went there, because that's exactly where I wanted to go, which is: in the world that we live in, it's more complicated than democracy in any sort of pure way. It's market driven, and we have these market forces and corporations and investors foisting AI on us at every turn. And so for us, you and me and anyone listening now, what are the best tactics for steering toward the world that we want here? So in our book, we talk about four things. I'll list them and then I'll talk a little bit about tactics. The first is to resist harmful uses of AI in government. There are a bunch of those. When you think about uses that try to replace humans with AI, those are harmful. And we need to resist those. But at the same time, we want to responsibly use AI where it can help. There are lots of places where AI will help democracy. And we don't want to throw those out as we resist the harmful uses. An obvious one is language translation. Language translation is really good with AI. And now there is no excuse for any government on the planet not to offer all government services in every language of their citizens. We can do that. And we should do that.
The third is to reform the AI ecosystem. The fact that this technology is in the hands of a few large monopolistic Western companies is bad, and the more we can do to change that, the better. And the fourth is, we call it, to renovate democracy. A lot of the problems with AI and democracy are less AI problems and more democracy problems that are exacerbated by AI. So the more we can do to make democracy better, the better it will be with AI. Now, that's very broad and general. What we want to happen in the near term is for this to become a political issue. Just recently, President Trump signed an executive order trying to ban state-level AI regulation. A complete, I mean, nutty thing to do. Bad, super bad. This will go through the courts. Who knows what's going to happen? But the notion that government steps away is actually harmful. We need serious regulation, which means we need to make this a political issue. We need to talk about this political issue, talk about what we want in society. And that's things like harmful uses of AI. That's job loss. Sort of all the things. I mean, this matters a lot. It does. And it's one of the things that feels especially challenging, because there's just so much noise around it right now: from the media, from everybody trying to capitalize on fear, from corporations trying to push their own agenda. I'm curious, Bruce, just on the noise in your world, what you're absorbing around AI. Is there any particular, if I can call it bullshit, that stands out to you, a message that people are getting hammered with that you wish they would just ignore? I think the worst is the AI arms race. This metaphor of an arms race between the US and China, that is complete and total bullshit. We're not in the 1960s. This isn't the way science works today. Everything is open. Everything is collaborative.
I mean, think of China producing the DeepSeek model because they were denied access to the best chips. They were forced to use other chips. And because they didn't have all the money, they had to actually innovate. It's a lot of cool science. That's all public. We all get to learn from that. And that is the way this technology is being developed. I think the arms race model serves the tech monopolies, because it allows them to say, don't regulate us or you'll lose the arms race. So I don't like that at all. But that is probably the worst thing that is happening now. Yeah, well, I'm inclined to agree with you. And it's interesting to me that the arms race is always a justification, as you said, for less regulation, or for "give us a big bag of money, because we need even more investment, because it's winner take all." So I'm curious, then. Let's flip it on its head a bit, because we've talked about the biggest corporations, and we've talked about the ultimate consumers of this. If you're a business leader, or a leader within government, if you're running some sort of organization and you're looking at how your organization can best make use of AI, I'm sure there's going to be a flavor of your answer that it depends on the application, based on a lot of the conversation we've had so far. But are there principles or lenses that you would answer that question with if you were in that position? I mean, there are a couple of things to think about. It's something we've danced around. And augment versus replace is, I think, a really important lens to look at this technology through. How much does it augment your humans? How much does it replace your humans? And here, power matters a lot. So, a lot of use of AI in the medical profession, the idea being that an AI can do diagnosis.
And there are a few things that matter here. AI as doctor: now, it's not good enough. But there are many parts of the world that have no doctors. So an AI doctor is way better, way better than no doctor. So here you can imagine it'll replace doctors, because there are no doctors to replace. So there it is valuable. Here in Canada, I was at an AI and medicine conference a few weeks ago. And one hospital here is using AI in the emergency room. I didn't know this: emergency room patients come in, and after the crisis, the doctors have to document what they did and what they found out. And that goes with the patients to wherever they head after the emergency room. Doctors are terrible at that, right? They forget things. They're sloppy, because, you know, it's a crisis. So they are experimenting with an AI that listens passively in the emergency room and automatically produces these reports. The human doctors review them and sign off, and they go with the patient. This is way better. It's so much better. Doctors love it. They can do what they do best, and the AI helps with the record keeping and transcription. Right, so, you know, again, it depends on the application. But experiment, right? Depending on what you're doing. A lot of the negative things I hear about AI come from people who never used it, or don't use it, or used it once when it came out, said "oh, this is terrible," and then went away. This stuff is changing all the time. Any decision you make about its suitability will be obsolete in six months. Just know that. It'll be different. Maybe good, maybe bad, but it'll be different. So really pay attention. And this notion of replace versus augment matters. Accuracy: is it accurate enough, compared to what? Trust: is it trustworthy enough, compared to what?
And lastly, power: how does it affect power? No, I like that. I like all of those, and I think they're all really good and really different lenses that can help leaders or organizations figure out: is this going to be a suitable use of this? Is it going to put us ahead or behind? And to your point on replacing, I'm worried a lot about the replacing, because I'm worried that it leads to, A, worse outcomes, and B, you know, one of the themes that we're seeing, and I'm sure you're seeing it too, is this sort of public backlash against AI as this net negative that takes our jobs or creates slop or makes everything worse. And I'm a bit worried that corporations are fanning these flames. Yes, I mean, I think there's a lot of that. A lot of this is capitalism. A lot of this is the system we're in. There is going to be job loss. This is going to be a technology that will change things. Sometimes you're okay with that. If an AI is better than a human radiologist at reading my chest X-ray, I'm totally in. I'm done with human radiologists. I want the most accurate chest X-ray diagnosis. That's all I care about. Real different than an adjudicator, a negotiator, right? We can imagine AI doing this, but maybe we want the human for a bunch of reasons. This will change over time. This will be different across cultures. So never say never, but this is very transitionary right now. I wanted to take us in a slightly different direction. Capitalism has come up a few times. And I'm curious, Bruce, and I know this is an area you've talked about in the past: to me, the last big technology that promised it was going to subvert capitalism and democratize everything and decentralize power... This program does not promise that. To be fair. Sure, sure, sure. All right. He's making sure. I agree. I agree. It probably won't.
But the last technology that had a lot of hype around it was the blockchain. That was blockchain: oh, it's going to decentralize power, it's going to have these decentralized ledgers, everything's going to be democratized, it's going to be fantastic. It hasn't really panned out that way. There's still lots of blockchain. I don't know if you know my position on blockchain. Do you or not? Oh, I sure do. I sure do. I'm going to keep saying that blockchain is the stupidest thing in the history of ever. Okay, good. I know. Oh, I know. Yes. So I know that, Bruce. And so, I mean, two-part question, I guess. The first one is, for listeners who don't know your position, can you share a little bit about your, like... skepticism doesn't even seem like a strong enough word... your position on blockchain? And then, are there any sort of lessons from the blockchain bust that apply to how we talk about and think about AI? Yeah. I mean, I guess the lesson is libertarians and morons, but we knew that already. I mean, honestly, type "Bruce Schneier blockchain" into Google; there's a long essay. It explains why it is not anonymous, it's not decentralized, it's not any of the things it claims to be, and why adding it makes security worse in every way. And pretty much all blockchain applications out there are what I call blockchain for marketing purposes only. They're there, but they don't actually do anything, which is the only way to really use blockchain. It does nothing good. So I don't want to spend the next half hour explaining it, but we'll post the link, and that is where to go. It doesn't mean Bitcoin is going away. It exists. It's bad. It's like the notion of wildcat banks and frauds and money laundering. All the bad things it does are bad. They're not going away, because it's a regulatable technology in some ways but not in other ways. The U.S. was going to regulate it, but now Trump's here.
And the amount of graft it allows is so ginormous, it kind of can't be ignored. But ideally, we strongly regulate the interface between cryptocurrency and the conventional money stream. That's not likely to happen in the U.S., so it's going to be bad. I mean, a lot of people are going to lose a lot of money because of this. Crime will get worse. So I'm not happy about the way things are going, but I get it. I mean, it is so useful for fraud that, if fraud is what you want, you're in favor of it. And, for what it's worth, that's my position as well. It's actually not controversial. I mean, those of us who know cybersecurity have been saying this since the beginning. Well, that's the piece that concerns me: we had this technology that was insanely hyped up. There were a lot of marketers saying you have to use this, it's the best, it's going to change everything. And I completely agree with you. To me, it's very difficult to find a legitimate, if I can call it that, use case for Bitcoin and for blockchain. Unfortunately. Yeah. I mean, I wish there was. There isn't. It's a fraud vehicle. And so with that in mind, and given that we just went through this cycle, are there lessons learned for what we're seeing with AI hype and how we choose to engage with it? Because there is a lot of AI hype. These companies are burning money at a rate never before seen on the planet, and they desperately want some kind of revenue stream. There is a huge amount of hype, and I think companies will go bankrupt; the hype can't be sustained. But there is a there there underneath it. And this is going to be more like the dot-com bubble, where the initial companies burst, but the real value was there and it came slower. But it did happen. So let me ask you a question. Just sticking near blockchain for a minute: cybersecurity. You're, I think, at heart, if I can call you this, a cybersecurity guy, a cryptography guy.
And AI certainly has implications for this, right? It's helping defenders. It's helping attackers. I know that there's obviously some nuance there in terms of who it helps more. But let me maybe frame the question this way. If we look at cybersecurity teams, if we look at IT teams, do you think that AI is a technology that, in the next handful of years, is going to shrink them or grow them? Probably neither. There's a lot of work being done. If you go to, like, the RSA conference, every company has an AI strategy. What we're seeing over the past six months, really recently, is AI doing remarkably well at automating cyberattacks: finding vulnerabilities and exploiting them. Lots of papers, lots of research. Attackers are using them. Like, Anthropic, in a threat report, talks about criminals they've barred from their system. It is just getting way better at it, faster than anybody thought. And so that is making the threat landscape worse. It's likely to continue for a while. What it does is make these attacks available to a greater number of people; script kiddies can use them. At the same time, AI is being used on defense to find and kick out attackers. There, I think the defense will get the overall advantage, because you're already being attacked at computer speeds. Defending at computer speeds is enormous. So that will be good. AI vulnerability finding: they're getting better at finding vulnerabilities in source code. Again, both the attacker and defender use them. But when the defender uses them, the vulnerability gets patched and it disappears. So there also, the defender will have the advantage in the end. In the midterm, it's bad, because everything existing gets attacked all the time. We're seeing AI being used in all that patching. So I think the network of the future is some organic, continually evolving system that gets patched in real time, sort of at the network level. That will require rewriting your license agreement.
So we don't have that yet. A lot of things are coming. But near term, the attackers are using it to their advantage. Long term, I think the defender will benefit more. Well, that's a really interesting one, Bruce. And one of the implications there may be, and tell me if you see otherwise, is that if it's benefiting the defenders, and we're patching in real time, and we can secure these systems better than ever before... I don't know if we can make them vulnerability-proof or something close to it. I mean, I think we can't, but we really can make them a lot safer. And I think AI has the potential to do that. Right. And so what I'm getting at is, in that world, does that sort of torque attackers even more heavily toward human vulnerabilities versus system vulnerabilities? Yeah, we are seeing AIs being used to write phishing emails. And of course, they don't have to be better; they just have to get, you know, better scale. But I know one company that is trying to build an AI system that sits on your computer or your phone, watches everything, and warns you when you're being scammed. Or, really, warns your parents when they're being scammed, because you probably have a good bullshit detector and they probably don't. That is a great idea, right? If you can do that in a way that preserves privacy, or keeps the information on the device, and you probably can, because it's a very simple model. You don't need a large language model. You could use a small language model, a medium language model. That would be great. So here we have the same technology empowering both sides. The question you're asking, who comes out on top in the end, is really hard to know. Well, and coming back to something I asked earlier about IT teams, you sort of said, no, they're not going to get bigger, they're not going to get smaller.
It sounds like, if I'm reading between the lines, they're going to be augmented in terms of what they can deliver. Augmented, and in some cases replaced. I think some of the automatic patching might happen without human intervention. Right. Right. And largely that's true on my computer today, right? I mean, I set it up, Patch Tuesday happens, it patches, and then I have to reboot. It's different in a corporate environment than a personal environment. But I think you're going to see more automation, where a lot of the drudge work is replaced by AI. But I think in other areas, the AI augments the humans, who are doing maybe the investigation work, doing different kinds of work. So if you were talking to someone who was getting into cybersecurity in 2026, I mean, first of all, would you steer them clear of the field, or would you say cyber is still important? It's great fun. I mean, I love this field. I would never steer someone clear of it. This is what I like to do. Come join us. It's great. So what advice would you give them in this sort of AI-fueled world of cybersecurity now? Learn the tools. These tools will be here for the rest of your career. You need to learn how they work. But that's true for pretty much every profession. That's what I tell my students, regardless of what they're doing. Is there anything on that note that you think AI is not going to touch in the next few years? What time frame? Call it five years. Oh, lots of... most everything. Five years is pretty short. I mean, I don't know. Let's back up. Touch is a very low bar, right? So it'll touch a lot of things, but in five years it's not going to transform society. People say ever, like, ever? I mean, from now to the heat death of the universe, who the hell knows. Well, and touch was a deliberate word there, and touch and transform are very different. Touch is affect. Transform is change. That's right.
And so, I'm thinking still with my student hat on: is there any field where I shouldn't be learning more about AI? I don't think so. It's funny. I was at a conference here at the University of Toronto where a classicist talked about how AI was being used in his field. I was surprised. But interpreting ancient texts, AI has a place there. And a lot of this is big data analysis; there's a lot of that happening. So just coming back to that sort of theme, Bruce, if I can sum it up: it sounds like when you pull out your crystal ball and look at the next few years, AI is going to be touching an awful lot of things, but transforming a handful of things, maybe farther afield, maybe never. What's sort of your top-level outlook for what we're going to see? So, never is a long time. Remember that. I think we're going to see a lot of things changing as people use the technology. And this is important: it's individuals using it. If a student wants to use it to do their homework, there's nothing I can do about it. It's their education; they can throw it away if they like. If a legislator wants to use it to help write a bill, that's great. So people will be using it for individual assistance in all areas of society. And then, slowly, it'll do more group things, like self-driving cars, which really require new laws and new ways to think about them. And it's already happening in many cities of the United States; there's an AI system that is driving cars around. I mean, turn-by-turn directions have changed the way we interact with each other, and that is largely AI-driven. So it's the second-order effects that are hard. It's easy to predict the first-order changes. What's much harder are the second-order changes, and then the first-order changes interacting with each other to make further changes. Because everything is changing a little bit, there are a huge number of interactions, which means there'll be lots of follow-on changes. So, Bruce, one last question for you. We're trying to keep things positive here.
I know there are so many things that excite you on the horizon. Is there one thing that keeps you up at night around this technology, or around the world in 2026 and beyond? The power of the tech monopolies is very worrisome. I mean, money in politics is a huge problem in the United States, and it's probably the cause of a lot of the income inequality that we have. But specifically here, the power of the tech monopolies: they have too much power, and they're using it for their own profits, not for the benefit of society. Now, this is not an AI problem. This is a society problem, but it is the problem. And it's really important, when you think about the technology, to separate the problems with AI as a science from the problems with the market-based AI that we are using because of choices that these companies make. I think that's very well said. And it's interesting, because it wraps up one of the themes that I'm butting my head up against in the show, which is: this is not meant to be a political podcast, it's meant to be a tech podcast, but inevitably we find there are arrows that lead back into the political ramifications. The tech is based on the political and economic environment in which it's built. So you can't separate the two. They're intertwined. But it's important, when you think about AI, to know where the problem is in each instance. The fact that AI is obsequious, the fact that AI will make up an answer rather than saying "I don't know," those are deliberate design choices by corporations. That is not the tech; that is the market. Well said. And I think that's a fantastic note to end on for us all to reflect on. Bruce, I wanted to say a big thank you for joining us today. This has been really interesting and really insightful. So thank you. Thank you. If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered.
Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.