Making Sense with Sam Harris

#453 — AI and the New Face of Antisemitism

22 min
Jan 16, 2026
Summary

Sam Harris interviews Judea Pearl, a pioneering AI researcher and father of causal reasoning, to discuss the limitations of current LLMs in achieving AGI, the existential risks of advanced AI systems, and Pearl's personal journey confronting antisemitism and the Israel-Palestine conflict following his son Danny's death at the hands of al-Qaeda in 2002.

Insights
  • Current LLMs represent low-hanging fruit but do not advance toward AGI; fundamental breakthroughs in causal reasoning are required that cannot be achieved through scaling data and compute alone
  • Mathematical limitations defined by the ladder of causation prevent LLMs from deriving causal relationships from correlation or counterfactuals from interventions without additional structured input (see the code sketch following this list)
  • An intelligently designed system capable of exploration and play will inevitably discover instrumental goals and potentially override human-aligned constraints, making alignment guarantees theoretically impossible
  • Moderate Muslim scholars at a 2005 Doha conference framed Israel's existence as a prerequisite barrier to Muslim modernization, revealing deep structural opposition to coexistence rather than mere geopolitical dispute
  • The left's anti-colonial narrative framework has created moral confusion around jihadism and Islamism, enabling groups like the Muslim Brotherhood to exploit Western guilt and academic spaces
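To make the ladder-of-causation insight concrete, here is a minimal simulation, a sketch of my own rather than anything presented in the episode, of the first barrier: a hidden confounder Z drives both a "treatment" X and an "outcome" Y, so observational data (rung one, "seeing") show a strong X-Y association even though intervening on X (rung two, "doing") reveals no effect at all. All variable names and probabilities here are illustrative.

```python
# A toy sketch (illustrative, not from the episode): hidden confounder Z
# drives both "treatment" X and "outcome" Y; X has no causal effect on Y.
import random

random.seed(0)
N = 200_000

def observe():
    """Rung one, "seeing": sample (X, Y) as the world generates them."""
    z = random.random() < 0.5                     # hidden confounder
    x = random.random() < (0.8 if z else 0.2)     # Z pushes X up
    y = random.random() < (0.8 if z else 0.2)     # Z pushes Y up; X plays no role
    return x, y

def do_x():
    """Rung two, "doing": set X by fiat, which severs the Z -> X arrow.
    X is deliberately absent here: in this world it never influences Y,
    so do(X=1) and do(X=0) yield the same distribution over Y."""
    z = random.random() < 0.5
    return random.random() < (0.8 if z else 0.2)  # Y still depends only on Z

obs = [observe() for _ in range(N)]
p_y_x1 = sum(y for x, y in obs if x) / sum(1 for x, _ in obs if x)
p_y_x0 = sum(y for x, y in obs if not x) / sum(1 for x, _ in obs if not x)
p_y_do = sum(do_x() for _ in range(N)) / N

print(f"P(Y=1 | X=1)     ~ {p_y_x1:.2f}")   # ~0.68: strong observed association
print(f"P(Y=1 | X=0)     ~ {p_y_x0:.2f}")   # ~0.32
print(f"P(Y=1 | do(X=1)) ~ {p_y_do:.2f}")   # ~0.50: no causal effect of X
```

The 0.68-versus-0.32 gap is exactly the kind of pattern a model trained only on observational data can absorb, while the 0.50 answer requires either an actual experiment or a causal model supplied as additional input. An analogous construction, two models that agree on every interventional distribution yet disagree about "what would Y have been had X been different?", separates rung two from rung three, which is why counterfactuals cannot be derived from interventions alone.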
Trends
  • AI safety concerns are shifting from alignment theory to recognition that no computational architecture can guarantee control of a truly autonomous superintelligent system
  • Existential risk from AGI is acknowledged by major AI lab leaders (20% probability estimates) yet industry continues accelerating development under competitive pressure
  • Antisemitism is rising simultaneously across political left and right, with anti-Israel sentiment becoming mainstream in academic and activist spaces post-October 7th
  • Muslim Brotherhood and related organizations have successfully infiltrated Western universities and policy spaces by exploiting anti-colonial and anti-imperialist narratives
  • The framing of Israel-Palestine as a colonial oppressor-oppressed binary obscures deeper ideological and religious opposition to Jewish self-determination in the region
  • Causal reasoning and counterfactual analysis are emerging as critical gaps in AI capabilities that cannot be bridged by transformer-based deep learning architectures
  • Arms race dynamics in AI development are creating perverse incentives that prioritize speed and capability over safety and alignment research
Topics
  • AGI and Causal Reasoning Limitations
  • AI Alignment and Existential Risk
  • LLM Scaling and Computational Limits
  • The Ladder of Causation in AI
  • Antisemitism on the Left and Right
  • Israel-Palestine and Anti-Colonial Narratives
  • Muslim Brotherhood Influence in Western Academia
  • Jihadism and Islamism Misunderstanding
  • AI Arms Race and Safety Incentives
  • Counterfactual Reasoning in Machine Learning
  • East-West Dialogue and Modernization
  • Daniel Pearl Foundation and Interfaith Dialogue
  • Superintelligence and Instrumental Goals
  • AI Safety Governance and Control
  • Cultural Conditions for Effective Reasoning
Companies
OpenAI
Sam Altman's 20% existential risk probability estimate referenced as example of reckless confidence despite acknowledged risk
People
Judea Pearl
Pioneer of causal reasoning in AI; father of Daniel Pearl; founder of Daniel Pearl Foundation; author of 'The Book of Why'
Sam Harris
Host of Making Sense Podcast; interviewer discussing AI risks, antisemitism, and Israel-Palestine conflict with Judea Pearl
Daniel Pearl
Judea Pearl's son; prominent journalist killed by al-Qaeda in 2002; catalyst for Pearl's public engagement with interfaith dialogue
Sam Altman
OpenAI CEO; cited for stating ~20% probability of existential risk from AGI while continuing rapid development
Geoffrey Hinton
AI pioneer; stated that current deep learning approaches are not the path to AGI but did not elaborate on causal limitations
Stuart Russell
AI safety researcher; referenced for work on alignment and utility function approaches to AGI control
Yann LeCun
AI researcher; cited as holding intuition that independently intelligent systems cannot form unanticipated goals
Quotes
"More data and a scale up. It's all... I don't think it's going to lead over the hump that we need to cross."
Judea Pearl, on LLM scaling limitations
"You cannot get causation from correlation. That is well established, okay? No one would deny it even at a station by that."
Judea Pearl, on mathematical barriers in AI
"I don't see any computational impediments to that horrifying dream."
Judea Pearl, on recursive self-improvement and AGI takeover scenarios
"Their idea was if you want us to modernize, we'll give you that favor, we are gonna do you the favor of modernizing yourself on one condition, we want Israel head on a three, on a silver plateau, this is a condition, we cannot make any progress unless you chop off the head of Israel."
Judea Pearl, on the 2005 Doha conference with Muslim scholars
"I don't think we can imagine an effective alignment. Or an effective architecture that will be assures of alignment with our survival."
Judea Pearl, on AGI alignment guarantees
Full Transcript
Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.

Well, I'm here with Judea Pearl. Judea, thanks for coming into the studio. Great to see you. It's the second time, isn't it? Yeah, I came to you last time. Yeah, I was in your office. I actually didn't look to see when that was, but that's a few years ago, certainly. That might be. That was for your book, The Book of Why, which wraps up for a popular audience all of your work on causality and the logic of it, which we'll touch on briefly, because I have to ask you about AI given that you're one of the fathers of the field, but that's not really our agenda today. But we'll start near there. I want to talk to you about your new book. You have a new book, Coexistence and Other Fighting Words, which I'm sorry to say I have not yet read, but that will give you the ability to say anything to a naive audience on this topic. I'm sure it covers much of the ground I want to cover with you, because I'm, like you I think, very concerned about cultural issues and the way that we've seen a rise of antisemitism on both the left and the right. And we're now seeing the condition of Israel as a near pariah state on the world stage.

Briefly, let's start with your background. Where were you born and what did your parents do? Well, I was born in a little town called Bnei Brak. This is seven and a half miles north of Tel Aviv. And it was established in 1924 by my grandfather, Heinpell, who came from Poland and decided that it was time to go back to where they belong. So when did they move to Israel, your parents? In 1924. My father. Okay. And when were you born? In 1936. Okay. So, and what did your parents do? Well, my father was the secretary of the Bnei Brak municipality. But that was only later; earlier, he came and became a farmer. You come to Israel in 1924, you buy a piece of land and you schlep water from miles away and you grow radishes. That's what he did. Yeah. Yeah, that had to be hard work. It's probably still hard work, but that was farming. It had to be the first order of business. First order. Yeah. The idea was to establish a biblical town with a religious orientation and make it into a cultural success.

Do you know much about your parents' state of mind when they left Europe in the '20s? And what was that? Did they see it coming? Were they witnessing Weimar? No, no, no. My father didn't see it. No. That was 1924. And the legend says, at least the family lore says, that my grandfather came home one day. He was accosted by a Polish peasant and called a dirty Jew. And he came home bloody and he said to his wife and four children, start packing, we are going to where we belong. Okay. That's family lore, but it has something to do with it.

And what were your principal intellectual influences as a kid? I mean, how did you find your path to computer science, as a young person? Mm. First, I had a very, very good education in high school. In Tel Aviv? I went to high school in Tel Aviv. I grew up in Bnei Brak, but the municipality of Tel Aviv gave a quota to its periphery, to its suburbs.
And Bnei Brak was one of its suburbs. So from our town, they chose four people. I was chosen among them. It was a privilege at the time to go to a Tel Aviv high school. And we had a beautiful education. And I know why: my high school teachers had been professors in Heidelberg and Berlin, and they were pushed out by Hitler. And when they came to Israel, they couldn't find academic jobs. So they taught high school. And we were just privileged and lucky to be part of this unique educational experiment. Yeah. And your first language is Hebrew? My first language is Hebrew. So the studies were in Hebrew. So people who had just come from Heidelberg, your professors, were speaking Hebrew at that point? Huh. Interesting. They had to struggle. Some of them still had the Yekkish accent. Yeah. Yeah. Okay.

So as I said, we spoke about The Book of Why last time, where you talk about the importance of causal reasoning. What's your current view of AI? What has surprised you in recent years? How close to causal reasoning are we getting in the current crop of LLMs? I'm just wondering how you view progress at this point. In causal reasoning, or in general? I guess both. I mean, AGI in general. If that is the goal, I don't think we are much closer. We have been deflected by the success of LLMs. We have low-hanging fruit and everybody is excited, which is fine. They're doing a tremendously impressive job. But I don't think they take us toward AGI. So you think the framework, the LLM deep learning framework, is a dead end with respect to AGI? No, it's a step. But it does require a fundamental breakthrough of the sort that we haven't had. Absolutely, yes. So it's not just more data and more compute. No, no, no. More data and a scale up. It's all... I don't think it's going to lead over the hump that we need to cross.

Can you articulate the reason why, you know, in terms that a layperson can understand, if someone asks you: why is this insurmountable by virtue of just throwing more data and compute at it? There are certain limitations, mathematical limitations, that are not crossable by scaling up. I show it clearly, mathematically, in my book. And what LLMs do right now is summarize world models, authored by people like you and me, available on the web. And they do some sort of mysterious summary of it rather than discovering those world models directly from the data. To give you an example, if you have data coming from hospitals about the effect of treatments, you don't feed it directly into the LLMs today. The input is an interpretation of the data, authored by doctors, physicians, people who already have a world model of the disease and what it does. Or couldn't we just put the data itself in as well? Here you have a limitation, a limitation defined by the ladder of causation. There is something that you cannot do if you don't have a certain input. For instance, you cannot get causation from correlation. That is well established, okay? No one would deny it; even statisticians buy that. And you cannot get retrospection from intervention. Retrospection means looking back, in retrospect, at what would have happened. You say you can't get retrospection from interventions. But intervention, just as a reminder, is: what will happen if I do? So it's a kind of an experiment or thought experiment. And it doesn't imply a kind of counterfactual condition where you're saying, what would have happened if we hadn't intervened?
No. No. You have a barrier. You need additional information to cross from the intervention level to the retrospection level. And you put counterfactuals on the side of retrospection. Yes. Because you say: look what I've seen, David killed Goliath. And what would have happened had the wind blown differently, okay? So who among the other patriarchs in the field fundamentally disagrees with you? Do people like Geoffrey Hinton or others who have had... I don't think they disagree. They don't address it. Well, Geoffrey Hinton came up with a statement that we are facing a dead end. Oh, yeah, I hadn't heard that. Yeah, it's okay. He mentioned that this is not the way to get AGI, but he didn't elaborate on the causal component.

So I can't recall if we spoke about this last time, but where are you on concerns around alignment and intelligence explosion? I mean, I know it sounds like you're not worried that LLMs will produce such a thing, but in principle, are you worried? Do you take I. J. Good's and others' early fears seriously, that once we build AGI, on the basis of whatever platform, we're in the presence of something that can become recursively self-improving and get away from us? Absolutely. Yes. I don't see any computational impediments to that horrifying dream. And of course, we're already seeing dangers of LLMs when they fall into the hands of bad actors, but that's not what you're worried about. You're worried about a true AGI system that will take over and may be a danger to humanity. Yes, definitely, for sure. That's possible. I can see how it can acquire free will and consciousness and a desire to play around with people. That is quite feasible. That doesn't mean that I'm going to stop working on understanding AI and its capabilities, simply because I want to understand myself. Yeah.

Are you worried that the field is operating under a system of incentives, essentially an arms race, that is going to select for reckless behavior? If there is this potential failure mode of building something that destroys us, it seems, at least from the statements of the people who are doing this work, the people who are running the major companies, that the probability of encountering such existential risk is in their minds pretty high. I mean, we're not hearing people like Sam Altman say, oh, yeah, I think the chances are one in a million that we're going to destroy the future with this technology. They're putting the chances at like 20 percent, and they're still going as fast as possible. Doesn't an arms race seem like the worst condition in which to do this carefully? There are many other people that are worried about it, like Stuart Russell and others. And the problem is that we don't know how to control it. And whoever says 20 percent or 5 percent, he's just talking. Yes, yeah. He cannot put a number there, because we don't have a theoretical instrument to predict whether or not we can control it. We do not know what's going to happen, but it's going to develop. But what I find alarming about those utterances is that, if you just imagine the physicists who gave us the bomb, the Manhattan Project: if, when asked about their initial concern that it might ignite the atmosphere and destroy all of life on planet Earth, they had been the ones saying, yeah, maybe it's 20 percent, maybe it's 15 percent, and yet they were still moving forward with the work, that would have been alarming. But of course, that's not what they were saying.
They did some calculation and they put the chances at infinitesimal, though not zero. It just seems bizarre culturally that we have the people doing the work who are not expressing... Fallaciously or not, I'll grant you that all of this is made up and it's hard to come up with a rational estimate. But for the people doing the work, plowing trillions of dollars into the build-out of AI, to be giving numbers like 20 percent seems culturally strange. I don't know what they mean by 20 percent. Look, I am fairly sure of one thing: all I'm saying is there's no theoretical impediment to creating such a species, a dominating species. Right. That is true. And at the same time, I'm working towards that indirectly, not towards that in order to create it, but to understand the capabilities of intelligence in general, because I want to understand ourselves, because I'm curious.

Do you have any thoughts about how a system would have to be built so as to be perpetually aligned with our interests? I mean, if you're taking intelligence seriously, so we're talking about building an autonomous intelligent system that exceeds our own intelligence and, in the limit, improves itself, one would imagine. Do you have any notions about what a guarantee of alignment could look like before we hit play on that? No, I don't think we can imagine an effective alignment, or an effective architecture that will assure alignment with our survival. I think of Stuart Russell, it's been a couple of years since I've spoken with him, but I recall his notion, and again, I'm sure this is kind of a hand-waving notion from the computer science point of view: to have as its utility function just to better and better approximate what we want, to be perpetually uncertain that it has achieved our goals, insofar as we can continue to articulate them in this open-ended conversation that is the evolution of human culture. Does that seem like a frame that... It's a nice frame, but I don't see any impediment for the new species to overcome or bypass those guidelines and play.

So some people have an intuition that if we built it, there's no possibility of it forming its own goals that we didn't anticipate, instrumental goals. I mean, there are people fairly close to the field who will say this. Maybe even someone like Yann LeCun would say this. But what would you say to that? You just very breezily articulated certainty, or something like certainty, that an independently intelligent system can play, that it can change its mind, that it can discover new goals and cognitive horizons just as we seem to be able to do. Why is there a difference of intuition on this front? I mean, your account seems obvious to me. I don't know why I have a different intuition than LeCun. Just look: you want a system that will explore its environment. That's required for any intelligent system. You want it to play like a baby in a crib and find out why this toy makes noise and this one doesn't. Okay, it has to play around in order to get control over the environment, to understand the environment. So once you have the idea of playing, what will prevent it from playing with us, as an instrument for its understanding of the environment? And we are part of its environment.

All right, so this is kind of a reckless pivot from the topic of AI, but I think there's a bridge here.
I mean, I guess we could put this in the frame of the cultural conditions that allow us to reason effectively, or fail to reason effectively, on morally loaded topics like, you know, war, asymmetric violence, antisemitism, Islamism, and again, Israel's status among nations. You know, unfortunately, you are unusually well placed to have an opinion on these topics, given your history and what happened to your son back in 2002. I don't want to awaken painful memories, and I'm happy to talk about this topic in any way you want, but I just need to acknowledge that your son, Danny, was one of the most prominent people killed by al-Qaeda when the so-called war on terror became, you know, salient to most people in America, certainly for the first time after 9/11. So you've spent now a quarter of a century witnessing, as I have, but from a far deeper place, the kind of consistent misunderstanding around jihadism and Islamism that has happened, especially on the left in our society. To my eye, we have a kind of anti-colonial, oppressor-oppressed narrative that has captured the moral intuitions of the left, such that it's very difficult to talk about some of the ideas within Islam that reliably beget the kind of violence we've seen. And groups like the Muslim Brotherhood have managed to play havoc with this moral confusion. They found legions of useful idiots, even on college campuses like your own. I mean, I don't know if you noticed this, but the other day, the UAE announced that it would no longer pay for its students to study in the UK, at UK universities, for fear that they will be radicalized by the Muslim Brotherhood on UK campuses. So I mean, that's how far the rot has spread. We can take this from any side. We can talk about 20-plus years ago and how you came to this, or your experience after, I mean, I want to talk about your experience after October 7th. Just, you know, please start wherever you want to start.

My son's tragedy pushed me into a public life, and into my interest in social problems and cultural problems, the way you're describing it, yeah. I started the foundation after his death, with the belief that it's a matter of communication, of dialogue between East and West, Jews and Muslims. And I got pushed into that very heavily. Together with a Pakistani scholar, I started the Daniel Pearl Dialogue between Muslims and Jews, and we went from town to town, and we had meetings and discussions, audience discussions. I even took a trip, which I describe in the book, a trip to Doha in 2005, as part of a conference to bridge the East-West relationship and to understand what prevents the Muslim world, or the Arab world, the Muslim world, yeah, from modernizing and becoming, you know, enlightened as we are. And that was the first time that I found the barriers, which I didn't believe existed, and this was the barrier of Israel.
We came there with the idea that they would like American help in getting modernized, okay, and progressive. And my conclusion is that they had a different idea in mind. And we're talking here about moderate Muslim scholars from all over the Muslim world gathering in Doha for this conference, the purpose of which was: what can America do to speed up the process of progress and democratization in the Muslim world? And their idea was: if you want us to modernize, we are gonna do you the favor of modernizing ourselves, on one condition. We want Israel's head on a tray, on a silver platter. This is the condition: we cannot make any progress unless you chop off the head of Israel. Yeah.

Well, and at this time you were living in Los Angeles, right? You were not living in Israel in 2005. No, no, I was in Los Angeles, okay, yes, I was, yeah. When did you come over? I came in 1966.

If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense Podcast. The Making Sense Podcast is ad-free and relies entirely on listener support, and you can subscribe now at samharris.org.