Prof G Markets

Violent Backlash: What the Sam Altman Attacks Signal for AI

34 min
Apr 15, 2026
Summary

This episode examines the violent attacks on Sam Altman and other AI executives, exploring the roots of anti-AI sentiment in America and what policy responses might prevent further escalation. Guests Bradley Tusk and Brian Merchant discuss how concentrated power, lack of regulation, and broken democratic processes are driving public backlash comparable to the Luddite movement.

Insights
  • AI industry leaders' messaging about existential risks and job displacement, while partly genuine, also serves fundraising and investor narratives, creating confusion about what threats are real versus marketing
  • Local opposition to data centers is often justified—communities object to subsidizing corporate energy costs, not technology itself, suggesting better communication and benefit-sharing could reduce conflict
  • The absence of federal AI regulation combined with industry efforts to block state-level laws is creating a democratic deficit that mirrors pre-Luddite conditions, where workers had no voice in technological change
  • Bipartisan consensus against AI exists at local and state levels, suggesting federal regulation may be politically viable if framed as consumer protection rather than innovation restriction
  • Without proactive policy on job displacement, universal basic income, or government efficiency gains from AI, public anger will likely intensify as economic disruption becomes visible
Trends
  • Rising anti-AI sentiment among Gen Z (44-point underwater in NBC polling) driven by job market concerns and perceived powerlessness
  • $64 billion in data center projects blocked or delayed by local opposition in just two years
  • Bipartisan state-level AI regulation emerging (chatbot restrictions) despite federal inaction and industry lobbying
  • AI industry spending $100M+ on political action committees to influence elections and block state regulation
  • Violent extremism tied to AI safety concerns (X-risk ideology) emerging as real threat to executives
  • Democratic deficit in AI deployment decisions creating conditions similar to Industrial Revolution labor unrest
  • Shift from abstract AI concerns to concrete local impacts (energy costs, job losses) driving grassroots opposition
  • Tech industry alliance with Trump administration creating perception of regulatory capture
  • Potential for bipartisan federal AI framework by 2027 if political fear of an 'AI election' materializes
  • Growing recognition that job training alone cannot address potential 10-20% unemployment from AI automation
Topics
  • AI Safety and Existential Risk Messaging
  • Federal AI Regulation and Policy Framework
  • Data Center Opposition and Local Resistance
  • Job Displacement and Economic Disruption
  • Universal Basic Income and Negative Income Tax
  • Democratic Participation in Technology Deployment
  • State-Level AI Regulation (California, New York)
  • AI Industry Political Lobbying and PACs
  • Luddite Movement Historical Parallels
  • Consumer Protection in AI Applications
  • Chatbot Regulation and Mental Health
  • Government Efficiency Through AI
  • Anti-AI Sentiment Demographics (Gen Z)
  • Energy Cost Externalities of Data Centers
  • Frontier Model Regulation
Companies
OpenAI
CEO Sam Altman targeted in two violent attacks (Molotov cocktail and shooting); company's charter emphasizes automati...
Anthropic
CEO Dario Amodei cited for public statements about AI job displacement and existential risks; competing with OpenAI f...
Meta
Mentioned as tech oligopoly with concentrated power; bankrolling pro-AI super PACs to influence elections
Google
Referenced as one of three or four tech monopolies dominating AI development and capital allocation
Amazon
Stock rose 4% after acquiring GlobalStar; example of tech giant consolidation in AI-adjacent sectors
Wells Fargo
Bank stock fell 5% after earnings report; mentioned in market update segment
Citigroup
Bank stock rose 3% after earnings; mentioned in market update segment
GlobalStar
Acquired by Amazon; described as Starlink's biggest competitor
Lemonade
Insurance tech CEO Daniel Schreiber funded study on VAT-style taxation for AI-driven corporate profits
Y Combinator
Sam Altman previously headed organization; used to build momentum in AI space
People
Bradley Tusk
Political strategist and venture capitalist discussing AI regulation policy and political feasibility
Brian Merchant
Author of 'Blood in the Machine' drawing Luddite parallels to current AI backlash and democratic deficits
Sam Altman
Target of violent attacks; central figure in discussion of AI industry messaging and responsibility
Dario Amodei
Cited for public statements on AI job displacement and existential risks; example of industry messaging strategy
Ed Elson
Moderates panel discussion on AI violence and policy responses
Daniel Schreiber
Funded study proposing VAT-style taxation on AI-driven corporate profits for worker support
Andrew Yang
Referenced for universal basic income proposal as solution to AI-driven job displacement
Elon Musk
Mentioned as early backer of OpenAI; part of Sam Altman's strategy to build momentum in AI space
Marc Andreessen
Bankrolling millions into pro-AI super PACs to influence elections and block state regulation
Nick Bostrom
Originated X-risk framework about superintelligent AI that influenced industry narrative and investor interest
Quotes
"When you have a government that consistently fails to regulate technology, when you have a government that feels run by the extremes and you have a society that's generally unhappy, these are unfortunately the kind of things that come from it."
Bradley Tusk~12:00
"The Luddites who actually registered, this is one of the things that people get wrong about the Luddites today is they weren't dummies, they weren't backwards looking, they understood quite well what was happening. They were technologists."
Brian Merchant~15:00
"When you're talking about, hey, this is gonna wipe out lots of jobs, what investors hear is, this will be the tool instead that businesses are gonna use to replace workers."
Bradley Tusk~25:00
"We are in a world where I believe every policy outcome is driven by a political input, politicians are thinking about their next election. They're thinking really about their next primary basically."
Bradley Tusk~42:00
"If this isn't the wake up call that people need, then I really don't know what is."
Brian Merchant~58:00
Full Transcript
Support for this show comes from Virgin Atlantic. A lot of people dread flying. I've been on some bad flights and I've been on some truly miserable flights. But it's a whole different story when an airline shows up for you and the crew treats you like a VIP. Virgin Atlantic offers warm, one-on-one service from the moment you step on board. Its upper class cabin features four course meals, fully lay flat seats and drinks delivered on demand. Make the journey as exceptional as the destination when you fly Virgin Atlantic. Go to virginatlantic.com to learn more. Support for today's show comes from Dell. Dell PCs with Intel inside are built for the moments you plan and the ones you don't. There for those all night study sessions, the moments you're working from a cafe and realize every outlet is taken. The times you're deep in your flow and can't be interrupted by an auto update. That's why we build tech that adapts to you. Built with a long lasting battery so you're not scrambling for an outlet and built in intelligence that makes updates around your schedule, not in the middle of it. Find technology built for the way you work at dell.co.uk forward slash Dell PCs. Built for you. Support for the show comes from Odoo. Running a business is hard enough. So why make it harder with a dozen different apps that don't talk to each other? Introducing Odoo. It's the only business software you'll ever need. It's an all in one fully integrated platform that makes your work easier. CRM, accounting, inventory, e-commerce and more. And the best part, Odoo replaces multiple expensive platforms for a fraction of the cost. That's why thousands of businesses have made the switch. So why not you? Try Odoo for free at odoo.com. That's odoo.com. Today's number, 12,000. That's how many comments Trump received on an image he posted of himself depicted as Jesus before it was taken down. According to the president, he meant to be portrayed as a doctor. 
That was right after he called the Pope weak and terrible. And in other news, panic on Wall Street as traders prepare for the rapture. Money markets met. If money is evil, then that building is hell. Show goes on! The president never watched a show sale! Welcome to Prof G Markets. I'm Ed Elson. It is April 15th. Let's check in on yesterday's market vitals. The major indices rose as President Trump signalled he was open to talks with Iran that pushed the Nasdaq to its 10th straight gain, while the S&P 500 came close to a record high. Oil prices fell below $100 a barrel. Bank stocks were mixed after earnings. Wells Fargo fell 5%, while Citigroup rose nearly 3%. We'll be breaking down all those bank earnings on tomorrow's episode. And finally, Amazon shares rose nearly 4% after the company acquired Starlink's biggest competitor, GlobalStar. Okay, what else is happening? AI has a popularity problem, and it is now getting violent. Last week in Indiana, a local councilman's home was shot at 13 times after he voiced support for a data center project in his town, a sign reading, quote, no data centers was left at his door. Then, Sam Altman, the OpenAI CEO, was targeted twice in the same weekend. A man threw a Molotov cocktail at his home on Friday and threatened to burn down OpenAI's San Francisco headquarters. Police recovered a document from the suspect, warning of humanity's, quote, impending extinction from AI, as well as a list of names and addresses of CEOs and investors of AI companies. The 20-year-old has been charged with attempted murder and faces a second count of attempted murder for the security guard who was at Altman's house. Separately, two people were arrested for firing shots at Altman's house on Sunday. These attacks are an extreme manifestation of the rising anti-AI sentiment in the US. 
Among 31 countries surveyed, Americans reported the lowest level of trust in their own government to regulate AI at just 31%, and people are now acting on that distrust. In just two years, $64 billion of data center projects have been blocked or delayed due to local opposition. So, here to discuss these disturbing headlines and AI's general popularity problem in this country, we are having another panel discussion with two experts. We've got Bradley Tusk, founder and CEO of Tusk Ventures and Brian Merchant, tech journalist and author of Blood in the Machine, the book and the Substack of the same name. Bradley and Brian, thank you very much for joining me on the show here. Bradley, I'll start with you. I mean, this news of Sam Altman, two attacks in the span of just a few days, it really is just a striking example of this growing feeling in America that I've talked about, I know you've talked about, and that is a lot of people just don't like AI at this point. What do you make of this news and what does it say? Yeah, I mean, I think people don't like a lot of things. And to be clear, regardless of what you think of either AI or Sam Altman, no one should be throwing Molotov cocktails at his home or at anyone's home. But I think you've got a combination of one, just general distrust and unhappiness in this country, whether it is the fact that we are 23rd in the World Happiness Report or 62nd for people under the age of 25, whether it's the fact that our government seems to be hijacked by extremes on both sides of the aisle, whether it's the fact that we haven't regulated internet 2.0 yet. So even things like social media have never been dealt with by Washington, let alone AI. And then combine that with the fact that AI is really unpopular. I saw a YouGov poll that showed that by a 47 to 27 margin, people distrust AI. 
People think that AI will replace jobs, and almost every different survey mechanism out there shows that people are fearful. And then anecdotally, when you just talk to people, they feel the same way. Then, as you mentioned in the intro, local opposition is blocking the construction of data centers. I think that's often the fault of the hyperscalers who seem to think that it would be okay to pass along all of their energy costs to regular consumers, right? And if you are living near a data center, the idea that your electricity bills should go up 30, 40% to subsidize Sam Altman or Jensen Huang or whoever it is so that they can become trillionaires is unacceptable. And in this case, I think it's actually elected officials on both sides of the aisle acting to protect their constituents. And so yeah, when you have a government that consistently fails to regulate technology, when you have a government that feels run by the extremes and you have a society that's generally unhappy, these are unfortunately the kind of things that come from it. Brian, you've written about this before and your book is about actually the Luddite movement, which is sort of the first iteration of technology coming along, people getting very worried about it and revolting essentially. What do you make of the attacks on Sam Altman? What does it say to you? Well, I mean, what it says to me is that this discontent, these grievances that people have are real, they are pronounced and we have to look at them, even if some of these people are obviously on the extreme end of a political spectrum or an ideology. At least one of these shooters was, one of these X-risk AI safety advocates who's really worried that AI is going to rise up and become sentient and destroy humanity. And so if you believe that, then doing all you can to stop it may look like a rational response, however extreme it looks to everybody else. 
And to step back a second, we do have a long history, when there is a disruptive technology, number one, number two, that is being developed and sort of unleashed by a particular sort of group of interests, right? In the Luddites' time, that was the factory owners who were spearheading factorization and automation, and they were doing it without community input, without asking workers and communities what they wanted. So we have a dynamic that looks an awful lot like what's happening here today, where you have a few industrialists who had the backing of the state, they had all the resources, they had all the capital, they had all the power, and they were saying, this is the way it's gonna be. We're gonna automate jobs this way, and you're either gonna sort of work in our factory or you're gonna get out of the way. And the Luddites who actually registered, this is one of the things that people get wrong about the Luddites today is they weren't dummies, they weren't backwards looking, they understood quite well what was happening. They were technologists, they used this stuff every day, they used these automated technologies in smaller iterations in their workshops and at home. And so they understood what the industrialists were trying to do, and that's what motivated their response. They didn't want to see their way of life subsumed by factorization, given over to a relative handful of interests. So it was really about power, it was about democracy, and it was about losing agency. And so today, a lot of the backlash we see against AI is motivated by these very same fears and concerns, in no small part, because the AI CEOs and tech titans themselves have come out and used this language, right? 
From the beginning, they've said, oh, this technology is so powerful, it could be big trouble for humanity, it could be the gravest thing humanity's ever faced, if we're not careful with it, it's gonna eliminate 20 to 30 to 50% of jobs, depending on how Dario Amodei of Anthropic is feeling. And it is going to be this huge disruptive event, and that's how they're forecasting, that's how they're describing their own project, their own business. And so again, why would anybody not take that seriously, right? We take it seriously at different levels, but, and some people will attach themselves to the X-risk element and say, well, we don't want to exterminate humanity, and most people will say, hey, I'm out here listening, and you're saying, you wanna automate all the jobs with AI tools, you wanna automate, well, why would I be okay with that? Why would I trade that for a, why would I allow a data center in my backyard to help you in that project? So to me, all of this backlash, I'm honestly a little surprised it hasn't arrived a little bit sooner, just how aggressive the industry and its leadership has often been. Yeah, yeah, this gets to the sort of the PR and comms point. And Bradley, I mean, you've worked in exactly this sector, you've worked in politics, you've worked in tech and politics and how they come together. And there is this interesting question, which is like, well, all of the big AI CEOs are telling us that this technology is, in a lot of ways, quite scary, and in some cases, bad. It's going to destroy things, it's going to destroy white collar work, it's going to completely disrupt the economic model as we know it. And they've done it in a way that is legitimately quite scary. And I guess it does beg the question of like, I mean, why say that? If you're the CEO of a technology company, why would you come out and say, this technology is gonna be really bad and it's gonna really negatively impact a lot of people's lives? 
I mean, what do you make of the comms strategy there? Yeah, I mean, I think that keep in mind, from their perspective, comms is a couple of different things at the same time. It's the way we're talking about it right now, which is how the public might perceive something, how regulators and lawmakers might perceive it, but it's also fundraising. So OpenAI and Anthropic are still both privately held companies with giant valuations, OpenAI is nearly a trillion dollars at this point. And as they raise money, a lot of what you just said, interpreted slightly differently, is very appealing potentially to investors, right? So when you're talking about, hey, this is gonna wipe out lots of jobs, what investors hear is, this will be the tool instead that businesses are gonna use to replace workers. And instead they're gonna pay money to OpenAI, to Anthropic, to all of these different companies. And so I think that the language that you use potentially to recruit employees, so the New Yorker has a great piece this week on Sam Altman, and a lot of the recruiting that he did at OpenAI, was around the idea that he was the responsible person trying to protect humanity from the potential perils of AI. That clearly does not seem to be the case, but he used that language to incentivize people who did care about this issue genuinely to come work for him. There's language they use with investors. And I think what they're finding right now, and I think sometimes this is sort of the both naivete and arrogance that you will see in the tech world, which is a lack of understanding of how their words then land with real people, or with people in politics and government. And a lot of what they're saying is now coming back to haunt them. But the real question to me is, we know that the public is concerned. 
And we have seen at least at the local level, elected officials protect consumers from things like paying for the costs of the energy needs of data centers, but when it comes to the larger issue of catastrophic risk, states like New York and California have done some regulation around frontier models, but some of this really needs to be done at a federal level. And right now we're seeing the opposite from this White House. So this White House issued an executive order in December telling states, you're not allowed to regulate AI. And luckily, governors from both parties roundly ignored that. But there are areas where you're gonna see Washington need to step up. And I think whether or not they do so may dictate how this whole thing plays out. Stay tuned for more of this panel right after the break. And if you're enjoying the show, please follow our new ProfG Markets YouTube channel. The link is in the description. This is advertiser content brought to you by Virgin Atlantic. Ed, a couple weeks back, I got you a birthday gift not to pat myself on the back, but it was a pretty good one. It was indeed. You surprised me with Virgin Atlantic upper class tickets to London. So tell us all about it. It was pretty incredible. From the moment I entered that upper class cabin, I have to tell you, I felt like a VIP. Anything I needed, a drink, snack, assistance with the seat. Flat seats. Flat seats. That's the key. Flat seats, exactly. Had the four course meal, got my champagne, very delicious, enjoyed the food. And the journey home? The journey home was great. Then I went to the Virgin Atlantic LHR Clubhouse. That's the Heathrow Clubhouse. Heathrow Clubhouse was awesome. Got myself a coffee, headed over to the meditation pod that they call the soma dome. Kind of felt like a sort of spaceship where you relax and think nice thoughts. So I did that for a little bit. 
Then we went over to the wing, which are these acoustically sealed booths where you could do some work. You could even record a podcast. I didn't do that, but maybe I should have. It was a very enjoyable experience. So, Ed, the real question here is, what are you planning to get me for my birthday? See the world differently with Virgin Atlantic. Flying should be more than just transport. It is part of the adventure. Go to virginatlantic.com to learn more. Tickets and lounge access provided by Virgin Atlantic. Recommendations can be amazing. I mean, maybe someone recommended that TV show you've been obsessed with lately. But when it comes to home projects, it's different. If you don't like a show, you might lose a few minutes. If you hire a friend of a friend of a friend to fix a leaky ceiling, you could end up with a flooded kitchen. Maybe I know a guy just isn't enough for your home. That's why Thumbtack works so well. They'll match you with a top rated local pro, and you can see photos of past work, credentials and reviews all right in the app. For your next home project, try Thumbtack. Hire the right pro today. You hear a lot of talk about AI replacing humans. Curiosity invites a better question. How will humans shape AI? That's something SAS has been working on for decades. They're celebrating 50 years in data and AI, and long before responsible AI was trendy, they were building systems around transparency, governance and trust. If you're curious about what responsible AI actually looks like, visit sas.com to learn more. That's sas.com. We're back with Prof G Markets. It's a very difficult time in a lot of ways to be an AI executive because on the one hand, as you say, there is an economic incentive or maybe a fundraising incentive would be the right way to put it, to say that this stuff is going to be very damaging and it's going to just structurally completely upend the entire economy as we know it. 
But at the same time, I also wonder if they actually believe that. That seems to be something that you also have to reckon with, especially in the context of a government, which seems pretty unwilling in general to promote any form of policy, any form of regulation. And if you're building in the AI space in that environment and you seem to recognize, this administration doesn't really want to do anything in terms of regulation, then maybe you do feel you need to sound the alarm and say, hey, this is going to be, this is actually a big deal. This is actually going to be a problem. And then on our end, it becomes very difficult to understand what's true and what's marketing and what's hype. So I guess, I mean, Brian, just turning it to you, which parts of the story do you think are real? I mean, when Sam Altman goes out and says, yes, this is going to be massively destructive in a lot of ways or when Dario Amodei says that, I mean, to what extent should we take that seriously versus write it off as marketing? Yeah, I mean, I think you're absolutely right that both of those tendencies are kind of bound up in this same trajectory. And part of this is necessity, right? Like the tech landscape is such that if somebody wants to release a product that can compete with one of the giants like Meta or Amazon or Google, then you need just truly an immense amount of capital. If you want to compete rather than angle to get bought up or something. So you need a story that can command the kind of capital that can compete with one of three or four of the tech oligopolies that are out there, right? The tech monopolies that have sort of over the last 20 years sort of concentrated their power. And so that story then becomes not just, hey, here's a cool product, that's not going to get you there. You need a story that is on the magnitude of we are creating that software that can automate every meaningful job. And that language is right there in OpenAI's charter still to this day. 
You can look at that as intrinsic to the pitch to investors. And so I think there are a number of different factors there. I think if you look at the last 10 years of the history of sort of this latest AI boom, then you really see it beginning in earnest around at least expressed fears about X-risk and the possibility as sort of presented by Nick Bostrom and others that AI could become super intelligent and become this danger. I think one of Sam Altman's key intuitions was that, early on when he was just sort of heading up, just quote unquote, heading up Y Combinator, he sensed that there was a lot of energy here in this space that he could tap into one way or the other. And so he reached out to Elon Musk and kind of mimicked this language and was able to sort of use that concern as a lightning rod to get some interest and power and momentum into AI in general. And then from there, it's hard to walk away from that narrative. You see that the more you talk about it, that it does affect investors. It does sort of compel people to pay attention. It does get headlines. And so I think it does sort of balloon on and on and out. So some of these guys, I think like Dario Amodei, I am sure he's legitimately concerned about all of this stuff. Is his marketing department aware that he can win a round of headlines by expressing that concern around the release of a model? Of course they are. So they present every sort of white paper, every released or unreleased model, with the same sort of level of gravity as though it were sort of a new set of promotional materials. And so it becomes difficult to distinguish between the two. But I would say it is yes and both. And now we're in this pickle where the AI industry can't really walk away from its promise that has attracted so much investment in the first place. And they can't say, you know what, we're not gonna automate all the jobs. Then SoftBank might say, well, then what was that $30 billion for? 
So it really, we're sort of up on the brink and the precipice here. And I think Bradley was absolutely right. It's not just the politicians, it's also the AI industry, Meta and OpenAI and all these guys are bankrolling PACs right now to the tune of $100 million to sort of influence elections. They supported the moratorium to ban state-level AI lawmaking. So the very least they could do if they wanna deescalate the rhetoric, as Sam Altman says, is stop interfering in the democratic process, right? Is to let voters feel empowered, to feel some sway over this technology that is being integrated into every part of society. Yeah, it's a great point. I mean, if there's one thing that's gonna make you dislike AI even more, it's to read a headline that Marc Andreessen is bankrolling millions of dollars into these pro-AI super PACs that we are continually starting to read more and more about. And Bradley, what is the right policy response here? I mean, what we've kind of identified is that we don't seem to have much regulation at all. Americans are very scared, they're getting increasingly angry about it to the point where we are seeing literal violence against these tech CEOs. Like, what are we supposed to do about this from a policy perspective? Yeah, I think you almost have to think about it from a taxonomy of how to regulate AI because I've never, you know, I've been working around politics for over 30 years and there's never been anything quite like this. So there's, in my mind, kind of four different categories. The first is consumer protection. And that typically tends to be the province of state and local government. So that's things like regulating chatbots, especially around things like mental health, regulating data centers and the negative externalities they can impose on others, regulating the use of AI in hiring decisions, things like that. The second would be catastrophic harm. 
Like we've said, California and New York have tried to pass regulations or have passed regulations around frontier models, but that's two of 50 states. And this is the kind of thing that really should be done by the US government. The EU has a framework that covers, you know, 22 countries. We have two states. So that's number two. Number three would be jobs. And I don't think there is any plan whatsoever for how to deal with the fact that we could be seeing 10, 20% unemployment at some point because of AI. And look, I do believe that at some point in 20 years or whenever it is, all kinds of new industries that we can't conceive of today will be created that will have a lot of jobs thanks to AI. But a lot of people are gonna fall through the cracks. Look, that's why I think Andrew Yang was right way back, you know, a decade ago when he proposed universal basic income because I think that we are going to be in a world. And I will say, I just saw a white paper the other day, Daniel Schreiber, who's the founder and CEO of Lemonade, which is an insurance tech company, funded a study in Israel that had the idea of basically creating a new type of tax, so that as corporate profits increase because they've reduced headcounts, you tax that as sort of a VAT and then redistribute it to people via what he calls the negative income tax, effectively a form of universal basic income. So there are ideas out there, but you have to think about them. Right now politicians just say job training, but like we can't all become plumbers. That's not gonna solve the problem. And then the fourth would be, you know, where AI can do good. So if you think back to DOGE, and it was a total disaster, but where DOGE could have been really great is how do we bring AI into government to do things like procurement, compliance, licensing, permitting, data management, facilities management. There are a lot of ways that we could make our government a lot more efficient and a lot more cost effective. 
And so the challenge is you have to be able to think about all of these different categories at the same time. And that really requires thoughtful leadership. And because we live in a world where I believe every policy outcome is driven by, you know, a political input, politicians are thinking about their next election. They're thinking really about their next primary basically. And they're not thinking about all of the different complications that we just outlined. And so, you know, this is a time where we really need truly transformational leadership at all levels of government. And by and large, we don't really have it. One final piece, at least a small measure that I'm trying to do, is to use AI in a way to cut against some of that institutional power. Out of my foundation, we're coding a tool called How to Create Societal Change that will be an agent where you can put in there, okay, I wanna ban cell phones at my kid's school. I want a stop sign on my corner or whatever it might be. And then the agent trained on basically my, you know, decades of all of our work here will say to you, okay, great, here's the current law that governs cell phone use at your kid's school. Here's the statute, here's what it would need to say. And then here's a full campaign plan for how you as an individual could go about changing it and it will be totally free. So it's a very small act of defiance, I get that, but we are in the process of coding it right now. And my hope is to release it in the fall. I mean, just to follow up on what policy makers and politicians should be doing, how should you be positioning yourself as a politician? I mean, we've seen that Bernie and AOC have been like, stop the data centers, period. Right. And they've said, I mean, people are saying they wanna end AI outright, that's not quite true. Basically the idea is, until we have a framework of policy, press pause, no more. 
And I guess the question becomes, what is going to be the popular thing to do? Should you be super against AI? Should you be pro-AI, pro-innovation? That seems to be the big question. I'll ask that question to you, Bradley, as someone who's worked in exactly this space. What would you be doing? Yeah, I mean, it sort of depends on what you're running for. If you are a member of Congress, let's say, and your district is gerrymandered, which is true for all but about 25 of them in the House, and turnout in your primary is gonna be 10%, 12%, something like that, then odds are being radical, like an AOC or a Bernie might be, or someone on the far right, and just opposing AI in all forms, probably is the right political play. Now, if you're running for Senate or governor or president, where there's a larger electorate or a potentially competitive general election, then you can't quite be so extreme and you need more nuance. I actually do think, and this might be very naive and maybe I'm just falsely hoping for this, but I could see a world in 2027 where a Democratic House, a Republican White House, and probably a Republican Senate, but we'll see, actually do manage to get together and come up with a comprehensive bipartisan deal around AI. Not necessarily because they even care about the problems that the three of us do and that we're talking about here, but simply because, if they fear that 2028 is gonna be the AI election and it looks like they haven't done anything about it, none of them wanna have to stand before the voters and say, oh, well, I couldn't do anything, don't blame me. And so I do have this hope that, simply because there's so much attention focused on it and so much anxiety around it, this might be the one place where everyone actually could get together and come up with some thoughtful ideas.
Yeah, it seems to be one of the few issues on which both sides kind of agree, in their general dislike of it, or at least anxiety about it. Yeah, I mean, if you look at something narrow, like the dozen or so states that have passed chatbot restrictions and regulations, those are totally bipartisan, both in terms of who's voting for the bills themselves and the types of states doing it. Yeah, Brian, going back to the Luddites, and just for context for people: this is what happened when the factory was introduced, and you had all these textile workers in England who revolted, smashed up the machines, et cetera. In a sense, I wonder if this is just what happens when a new technology arrives: you have violence, you have disruption, you have chaos. But maybe not, and maybe there's a way you're supposed to prevent this. What lessons can we learn from that period of history, and how should we carry them forward? Yeah, no, it's absolutely not a given that we'll see violence and mass disruption at this scale. There are a couple of things that tend to signal that you will see it. When you have an immense concentration of capital and power, and the development and deployment decisions around a technology are flowing expressly from that and being imposed anti-democratically on a population, you're much more likely to see angry uprising and rebellion.
And again, it's another way this moment maps relatively, and worryingly, neatly onto the Luddites and the dawn of the Industrial Revolution. At that time, automated machinery was beginning to be produced en masse, and factory owners, or would-be factory owners, realized they could amass a bunch of these machines, put them in those early factories, and divide and automate labor in a way that could break the power of the workers and the guilds. They weren't actual guilds, exactly, but the trades and cottage industries that had developed and had shared interests. And when you have all of that power and decision-making capacity and money concentrated in a few hands, it is a recipe for disaster. The cloth workers went to parliament for years and years, a full decade running up to the actual Luddite rebellion, saying, look, the new factory owners are using these machines in ways that violate the laws on the books. They're hiring workers who haven't been apprenticed, who shouldn't be allowed to work. We have to regulate the trade; they're ignoring all the laws, all the standards, all the norms, and then they're pushing down our wages and our quality of life. They're destroying our livelihoods and they won't stop. And so here's a list of things you could do to fix that. Funny enough, one of the things they proposed was very much like an Andrew Yang-style VAT: why don't you tax the extra cloth that a machine can produce, and use that to fund a general fund for workers in need? But they were laughed out of parliament. Time and time again, parliament not only said, no, we're not gonna listen to you; they tore up the laws and regulations on the books and left it completely up to the whims of the market and these very powerful actors.
And so when you have a situation like that, which increasingly mirrors what's happening today, with an industry that has a ton of power, that at least right now, in its alliance with the Trump administration, has the ear of David Sacks and the insiders in the administration, and they're working very closely together to do what they're gonna do regardless of popular will, and you have all these efforts to overturn local laws and things like that, then yeah, it does start to be this period where people look at that and say, well, what can I do? What are the options on the table for me? I voted. I told my council member, don't vote for this. A hundred people showed up at this event and said, please don't vote for this. And they did it anyway, because the industry convinced them or they thought it was the right thing to do. Suddenly it looks like I don't have a say. I don't have any power. I don't get a vote in how the AI future is going to unfold. And if I'm in Gen Z, where the negative sentiment towards AI is overwhelming (the NBC poll that just came out had it 44 points underwater among people aged 18 to 34), they hate it, because they're looking at the headlines saying this is the worst job market for entry-level jobs in 37 years, AI's taking all the jobs. So yeah, what are you gonna do? Are you just gonna sit down and say, well, I guess I don't get a job, I guess the data center is gonna get put up in my backyard? So in this sense, I feel like the industry, politicians, everybody should be paying close attention to those very genuine and very rational feelings of anger over what's happening now and what's happening to their futures. If this isn't the wake-up call that people need, then I really don't know what is. Bradley Tusk, Brian Merchant, I could talk about this for hours, but we do need to wrap it up here. I appreciate both of you. Appreciate your time. Thank you so much for joining us.
Yeah, thanks for having us. Yeah, thanks for having me. Okay, that's it for today. We appreciate you joining us for another ProfG Markets panel. If you have a guest you think we should speak to on this topic, or any other, please drop us a line in the comments or email our producer, Claire, at markets at profgmedia.com. We hope to hear from you. This episode was produced by Claire Miller and Alison Weiss, edited by Joel Paterson, and engineered by Benjamin Spencer. Our video editor is Brad Williams. Our research team is Dan Chalan, Isabella Kinsel, Chris Nodonahue, and Mia Silverio. And our social producer is Jake McPherson. Thank you for listening to ProfG Markets from ProfG Media. If you like what you heard, give us a follow. I'm Ed Elson. I will see you tomorrow. Support for today's show comes from Dell. Dell PCs with Intel inside are built for the moments you plan and the ones you don't. There for those all-night study sessions, the moments you're working from a cafe and realize every outlet is taken, the times you're deep in your flow and can't be interrupted by an auto update. That's why we build tech that adapts to you: built with a long-lasting battery so you're not scrambling for an outlet, and built-in intelligence that makes updates around your schedule, not in the middle of it. Find technology built for the way you work at Dell.co.uk forward slash Dell PCs. Built for you.