In 1985, at the tender age of 22, I played against 32 chess computers at the same time in Hamburg, West Germany. Believe it or not, I beat all 32 of them. Those were the golden days for me. Computers were weak and my hair was strong. But just 12 years later, in 1997, I was in New York City fighting for my chess life against one machine, a 10-million-dollar IBM supercomputer named Deep Blue. It was actually a rematch. I like to remind people that I beat the machine the year before in Philadelphia. And this battle became the most famous human-machine competition in history. Newsweek's cover called it "The Brain's Last Stand." No pressure. It was my own John Henry moment. But I lived to tell the tale. A plethora of books compared the computer's victory to the Wright brothers' first flight and the moon landing. Hyperbole, of course, but not out of place at all in the history of our love-hate relationship with so-called intelligent machines. And we are repeating that cycle of hype and hysteria. Of course, today's artificial intelligence is far more capable than those chess machines. Large language models like ChatGPT can perform complex tasks in areas as diverse as law and art, and of course helping our kids cheat on their homework. But are these machines intelligent? Are they approaching so-called AGI, artificial general intelligence, that matches or surpasses humans? And what will happen when they do, if they do? The most important thing is to remember that AI is still just a tool.
As powerful and fascinating as it is, it is not a promise of dystopia or utopia. It is not good or evil, no more than any technology. It is how we use it, for good or bad. From The Atlantic, this is Autocracy in America. I'm Garry Kasparov. My guest is Gary Marcus. He is a cognitive scientist whose work in artificial intelligence goes back many decades. He is not a cheerleader for AI, anything but. In fact, his most recent book is called Taming Silicon Valley: How We Can Ensure That AI Works for Us. He and I agree that humans, not machines, hold the monopoly on evil. And we talk about what humans must do to make sure that the power of artificial intelligence doesn't do harm to already fragile democratic systems.

Gary Marcus, welcome to our show.

This is the Gary show. You are an expert on artificial intelligence, and you have worked on it for many decades, starting at a very young age. So before we talk about AI, I have to ask you: back then, in 1997, who were you rooting for?

Who was I rooting for? Me or Deep Blue?

Be honest, please. No bad blood.

You know, in 1997, I had become disenchanted with AI, and I don't think I really cared that much. I knew that eventually the chess machine was going to win. I had actually played Deep Blue's predecessor, Deep Thought, and it had kicked my ass, even, I think, with its opening book turned off or some humiliating thing like that. Not that I'm a great chess player. But you know, I saw the writing on the wall. So I wasn't really rooting. I was just watching as a scientist to see, like, okay, when do we sort this out? And at the same time, I was like, yeah, but that's chess. You can brute-force it. And that's not really what human intelligence is about. So I honestly didn't care that much.

You said brute force. With all the progress being made, would you say that machines are still reliant almost exclusively on brute force, or do we see some transformation from simple quantity into some quality factors?
I mean, I hate to say it's a complicated answer, but it's a complicated answer.

It is a complicated answer. I would not ask you a simple question.

I figured. In some ways, we've made real progress since then, and in some, not. The kind of brute force that Deep Blue used is different from the kind of brute force that we're using now. You know, the brute force that beat you was able to look at an insane number of positions essentially simultaneously and go several moves deep and so forth. Large language models don't actually look ahead at all. Large language models can't really play chess at all. They make illegal moves. They're not very good. But what they do do is they have a vast amount of data. If you have more data, you have a more representative sample. So, like, if you take a poll of voters, the more voters you have, the more accurate the poll is. So they have a very large sample of human writing, in fact, the entire internet, and they have a whole bunch of data that they've transcribed from video and so forth. So they have more than all of the written text on the internet. That's an insane amount of data. And what they're doing every time they answer a question is they're trying to approximate what was said in this context before. They don't have a deep understanding of the context. They just have the words and don't really understand what's going on. But that deep pile of data allows them to present an illusion of intelligence. I wouldn't actually call it intelligence. It does depend on what your definition of the term is. But what I would say is it's still brute force.

So let me come back to chess for a second. If you ask a large language model, even a recent one, to play chess, it will often make illegal moves. That's something that a six-year-old child won't do. And I don't know when you learned chess.

I can't remember, but...

You were probably quite young. So I'm guessing you were four or something like that.

Five and a half.

Five and a half.
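The "approximate what was said in this context before" mechanism Marcus describes can be illustrated with a toy next-word predictor. This is only an illustrative sketch with a made-up miniature corpus: real language models use neural networks trained on vast text, not raw bigram counts, but the underlying idea, predicting the next word from what followed similar words before, is the same.

```python
from collections import Counter, defaultdict

# Toy bigram model: for each word, count which words followed it in
# the training text, then predict the most frequent follower.
# It has no model of meaning, only co-occurrence statistics.
corpus = "the queen moves the rook moves the queen takes".split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1  # tally: word b was seen right after word a

def predict(word):
    # Return the most frequently observed next word.
    return nxt[word].most_common(1)[0][0]

print(predict("the"))  # "queen": it followed "the" twice, "rook" only once
```

The model "knows" that "queen" tends to follow "the" in this corpus, but it has no idea what a queen is, which is exactly the distinction drawn in the conversation.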
When you were five and a half, you know, pretty much immediately you understood the rules. So basically you probably never made illegal moves in your chess career, starting when you were a little child. And o3 was making them this weekend. I asked a friend to go try it out. And when you were five and a half, you'd only seen, whatever, one game, two games, maybe ten. There are millions of games, maybe tens of millions or hundreds of millions, that are available in the training data. And Lord knows they use any training data they can get. So there's a massive amount of data. The rules are there. Wikipedia has a rules-of-chess entry. That's in there. All of that stuff's in there. And yet still it will make illegal moves, like have a queen jump over a knight to take the other queen.

Making mistakes... not mistakes, actually violating the rules. So again, just tell us: how come? Why? Because the rules are written, and technically they can extract all the information that is available, and they're still making illegal moves.

Yeah. And in fact, if you ask them verbally, they will report the rules. They will repeat the rules, because in the way that they create text based on other text, that will be there. So I actually tried this. I asked it. I said, can a queen jump over a knight? And it says: no, in chess, a queen cannot jump over any piece, including a knight. So it can verbalize that. But when it actually comes to playing the game, it doesn't have an internal model of what's going on. So even though it has enough training data that it can actually repeat what the rules are, it can't use those in the service of the game, because it doesn't have the right abstract representation of what happens dynamically over time in the game.

Yes, it's very interesting, because it seems to me that what you are telling us is that machines know the rules because the rules are written.
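The rule the two are discussing, that a sliding piece like the queen may not jump over an intervening piece, is trivial to encode explicitly, which underlines Marcus's point that the failure is not about the rule being hard. The sketch below is illustrative only (the coordinate scheme and helper are invented for this example, not taken from any chess library), and it assumes the move is along a straight or diagonal line, as queen moves are.

```python
# Minimal sketch: squares are (file, rank) pairs with 0-7 coordinates,
# and `board` is the set of occupied squares. A sliding move is legal
# only if every square strictly between start and end is empty.
def path_clear(board, start, end):
    df = (end[0] > start[0]) - (end[0] < start[0])  # file step: -1, 0, or +1
    dr = (end[1] > start[1]) - (end[1] < start[1])  # rank step: -1, 0, or +1
    sq = (start[0] + df, start[1] + dr)
    while sq != end:
        if sq in board:
            return False  # a piece blocks the path: the move is illegal
        sq = (sq[0] + df, sq[1] + dr)
    return True

# Queen on d1 = (3, 0), knight on d4 = (3, 3), target square d8 = (3, 7):
occupied = {(3, 0), (3, 3)}
print(path_clear(occupied, (3, 0), (3, 7)))  # False: the knight is in the way
```

A few lines of explicit state and the rule is enforced perfectly every time, which is exactly the kind of internal model a pure text predictor lacks.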
But it still doesn't know what can be done or cannot be done unless it is explicitly written.

Correct. Well, I mean, it's worse than that. The rules are explicitly written. But there's another sense of knowing the rules, which is that we actually understand what a queen is, what a knight is, what a rook is, what a piece is. And it never understands anything. It's one of the most profound illusions of our time that most people witness these things and attribute an understanding to them that they don't really have.

Okay. So now I think our audience understands why you're often called an AI skeptic. But I believe "AI realist" describes this better, because I also share your overall view of the future of AI and human-machine collaboration.

Let me just drop in: I love that you called me an AI realist rather than a skeptic.

I share that, and I always say AI is not a magic wand, but it's not a Terminator. It's not a harbinger of utopia or dystopia. It's a technology. It doesn't buy your ticket to heaven, but it doesn't open the gates of hell. So let's be realistic.

Yeah. So let me talk about the realism first and then the gates of hell. On the realism side, I think you and I have a lot in common. We're both realists, both politically and scientifically. We both just want to understand what the truth is and, you know, how that's going to affect society and so forth. I mean, the fact is, I would like AI to work really well. I actually love AI. People call me an AI hater. I don't hate AI. But at the same time, to make something good, you have to look at the limitations realistically. So that's the first part. Is it going to open the gates to heaven or hell? That's actually an open question, right? AI is a dual-use technology, like nuclear weapons, right? It can be used for good, it can be used for evil. And when you have a dual-use technology on the table, you have to do your best to try to channel it to good.
But look, I also keep repeating that humans still have a monopoly on evil. I think we cannot disregard the fact that every technology can be used for good or bad, depending on who is going to use it. And I think that the greatest threat coming from the AI world is potentially this technology being controlled and used by those who want to do us harm.

I mostly agree with you there. First of all, neither of us is that worried about the machines becoming deliberately malicious. I don't think the chance of that is zero, but I don't think it's very high. Agreed, we should be worrying about malicious humans and what they might do with AI, which I think is a huge, huge concern. We also have to worry, because of the kind of AI that we have now, that it will just do really bad things by accident, because it's so poorly connected to the world. It doesn't understand what truth is, it can't follow the rules of chess, et cetera. It can just accidentally do really bad things. And so we have to worry, I think, about the accidents and the misuse, maybe less about the malice.

Now, let me ask a very sensitive, or rather, a question that has no scientific background. When analyzing our chess decisions, we always say, okay, this part is made through calculation, this one through recognition of patterns. Now, in your view, what percentage of these decisions or suggestions made by AI is based on calculation, and what percentage is attributable to understanding? I don't want to use the word intuition, so let's say recognition of patterns. Strategy versus simple tactical calculation.

So first I should clarify something, which is that there are different kinds of AI out in the world. For example, a GPS navigation system is all what I would call calculation and no intuition.
It simply has a vast table of different locations and the routes that you can take between those places, for different segments of it, and the times that are typical and so forth. All calculation, nothing I would describe as pattern recognition. I would still call it AI. It's not a sexy piece of AI, and it's not what most people talk about when they talk about AI right now. Most people are talking about chatbots like ChatGPT. When Deep Blue beat you, that was all calculation. Maybe you could argue there was a tiny bit of pattern recognition. Stockfish is now kind of a merger of the two. It's kind of a hybrid system, which I think is the right way to go. The things that are popular mostly aren't hybrids, although they're increasingly kind of sneaking some hybrid stuff in the back door. I would say they're not doing any calculation at all. I would say that they're all pattern recognition. A pure large language model is all pattern recognition, with no deep conceptual understanding and no deep representations at all. There's no deep understanding, even of what it means to jump a piece or make an illegal move. None of that is really there. So everything it does is really pattern recognition. When it does play chess, it's recognizing other games. There's an asterisk around this, which is that they can do a little bit of analogy in certain contexts. So it's not pure memorization. It's not pure regurgitation. But it comes close to that, and it's never kind of deep and conceptual.

Before we move into politics, I will just give you some statements, and you tell me if I'm right or maybe they have to be corrected. So this infrastructure and this whole industry has not solved the alignment problem.

Not even close. The alignment problem means making machines do what you want them to do, or the things that are compatible with humans. And already we saw a great example, which is chess. You tell it: I want to play chess, here are the rules of chess. And it can't even stick to that.
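The "all calculation" GPS example Marcus gives is, classically, a shortest-path computation. A minimal sketch using Dijkstra's algorithm over an invented toy road graph (the place names and travel times are made up for illustration):

```python
import heapq

# Dijkstra's algorithm: exhaustively and exactly computes the fastest
# route. Pure calculation over a table of segments and travel times,
# with no pattern recognition involved.
def shortest_time(graph, start, goal):
    pq, seen = [(0, start)], set()
    while pq:
        t, node = heapq.heappop(pq)  # cheapest known route so far
        if node == goal:
            return t
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (t + w, nbr))
    return None  # goal unreachable

# Toy road network: edges are (neighbor, minutes of driving).
roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1)], "B": [("D", 3)]}
print(shortest_time(roads, "A", "D"))  # 6: A->C->B->D beats A->B->D (8)
```

Every answer is derived, deterministically, from the stored table, which is why Marcus files this kind of AI under calculation rather than intuition.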
Now you get to something harder, like don't cause harm to humans, which is much more complicated, to even define what harm means and so forth. They can't do that at all. There is no real progress, I would say, on the alignment problem. Adding more data doesn't help that much with the alignment problem. There's another thing called reinforcement learning. It helps a little, but we have nothing like a real solution to alignment.

Okay. So the bottom line is that simply adding information, or just cleaning this human data and building these skyscrapers of data, doesn't help very much. So we've reached a plateau. The idea that by simply heaping up more and more data we'll transform this quantity into a quality and move to the next level, it doesn't work, because again, there's no evidence this kind of superintelligence is going to happen tomorrow or in the foreseeable future.

It's not going to work. We will get to superintelligence eventually, but not by just feeding the beast with more data. I thought what you were going to ask me was: is this field intellectually honest? And my answer is: not anymore. AI used to be an intellectually honest field, at least for the most part. And now we just have people hyping stuff. There's actually a great phrase I heard: prompt and pray. Like, you prompt and pray and hope you get the right answer. And it's just not reasonable to suppose that these things are actually going to get us to AGI, but the whole field is built on that these days.

We'll be right back.
Okay, so to support your reputation as an AI realist, and not to be just on the negative side, because you already said enough and I couldn't agree more with everything you just said, we have to support the reputation of those who believe that AI still brings something good into this world. So how do we benefit from AI's interference, or infusion, into virtually every aspect of our life?

So I think it's a multi-part answer, because AI affects so many parts of our life. Right now, the best AI for helping people, in my opinion, is not the chatbot. The best piece of AI right now, I think, is AlphaFold, which is a very specialized system that does one thing and only one thing, which is to take the amino acids in a protein and figure out what their 3D structure is likely to be. It may help with drug discovery. Lots of people are trying that out. That seems like a genuinely useful piece of AI, and it should be a model. But I would say of the big AI companies, DeepMind is the only one seriously pursuing AI for science at scale. Most people are just like, well, can I throw a chatbot at something? And mostly that's not going to lead to that much advance, as opposed to creating special-purpose solutions. I think we have to be intellectually honest about the limitations of this generation of AI and build better versions of AI and introduce new ideas and foster them. And right now we're in this place where the oxygen is being sucked out of the room, as Emily Bender once said, and nobody can really pursue anything else; all the venture funding is going to large language models and so forth. So there's the research side of it. There's the finding-the-right-tools-for-the-job side of it. There's also a legal side of it, which is: if we want AI to be a net benefit to society, we have to figure out how to use it safely and how to use it fairly and justly.
If we don't, which is what's happening right now in the United States, where we're doing nothing, then of course there are going to be lots of negative consequences.

Negative consequences. I think the one place where we all feel these negative consequences is politics, or things related to politics, like propaganda and simply, you know, sharing information. That's where AI plays a massive role, because again, we saw the influence of these various forms of AI being used to influence elections, and it seems unstoppable now. So, just briefly, what do you think? Can anything be done, or have we entered the era of information wars that will be run by these chatbots, where the sheer power behind them could at one point decide the results of any election?

This is a place where I may be an AI optimist, although not short term. I genuinely believe that in principle we can build AI that could do fact-checking automatically, faster than people, and I think we need that. Right now it's sort of politically hot, so nobody even wants to touch it, but I think in the long run that's what we need to do. Think about the 1890s, with all the yellow journalism of people like Hearst and so forth. All bullshit. Some people think it led to a war based on false facts. And that led to fact-checking becoming a thing, and we may return to that, because I think people are going to get disgusted by how much bullshit they are immersed in. And I think in principle, not current AI, but future AI could actually do that at scale, faster than people, and I think that could be part of the solution eventually. Part of it is political will, and right now we lack it. The far right has so politicized the notion of truth that it is hard to get people to even talk about it, but I think that there will be a swing back of the pendulum someday.
Whether that happens in the United States, it's a very complicated situation right now. But I think the world at large is not going to be satisfied with a state of affairs where you can't trust anything. Dictators love it. It's great for them. That's the Russian propaganda model. You know, Putin loves the idea that nobody knows what to believe, and so you just kind of have to go along with what he makes you do.

But it seems to me that the political moment, definitely in this country, also in Europe now, is not very friendly to this notion of fact-checking.

Very unfriendly.

People believe what they want to believe, and unfortunately fake news has this element of sensationalism that always attracts attention. And I think lies have become weapons on both sides. There are some blatant lies, there are some more, you know, covert lies, but at the end of the day, I think no meaningful political force in this country now is interested in defending the truth, defending pure, correct, fact-checked data, because it may, and most likely will, interfere with their political agenda. And also the facts always lose in the battle of public opinion these days against fake news.

I mean, my one moment of optimism is that we saw this before, in the 1890s, and eventually people got fed up. It's not going to happen soon, though. Right now people are complacent and apathetic, and they have given up on truth. I could also be wrong in my rare moment of optimism. I think that things are going to get so bad that people will resist, but that's an open question. At least once in history, people did get fed up with that state of affairs.

It is also true, what you're saying: lies tend to travel faster than truth, and that's part of what happened in the social media era; that whole thing got accelerated, right?
The social media companies don't care about truth, and they realized they would make more money by trafficking in fake narratives, and that's part of why we are where we are now.

Yeah, you mentioned a couple of times the 1890s and the early 20th century as one of the moments of transition. So what about, let's say, the mid-20th century, with the booming sci-fi book industry? It had many, many stories about the future influence of technology: technology dominating our society, technology interfering with democracy. The great writers of that day predicted that at one point we would have to deal with this direct challenge of technology in the hands of the few to influence the opinion of the many. Are we now at this point?

I keep thinking about a word-for-word remake of 1984, which I think was written in the late '40s. We are exactly where Orwell warned us about, but with technology that makes it worse. Large language models can be, I don't know what we call them, super-persuaders. They can persuade people of stuff without people even realizing they're being influenced, and the more data you collect on someone, the easier that job becomes. And so we are exactly living in the world that Orwell warned us about.

Okay, so let's talk about the tech bros. They believe that all-powerful technology could actually help to improve society, because society has too many problems that cannot be resolved any other way but to lead the public, to educate the public, to control the public mind, to cure these problems. Is this real, and is it doable? Because some people even say that it may lead us to something called technofascism, where, while preserving all the elements of representative democracy, we will end up in some kind of dystopian society where the few in charge of massive data will make election results predictable and bend them in their favor.

I mean, that's exactly what's happening in the United States right now. It is technofascism.
The intent appears to be to replace most federal workers with AI, which is going to have all the problems that we talked about. The intent is to surveil people, to get massive amounts of data, put it all together in one place, accessible to a small oligarchy. I mean, that's just what they're doing. This is not science fiction that could happen in 10 years. This is essentially the active thing that is happening right now, that has been happening for the last few months.

Question: is it inevitable? How can society at large resist this pressure from this new technofascism that has all the money, that has control of technology? And again, let's be honest, most of the public cares more about convenience than about, you know, the security of their devices. For instance, it's well known that people want these devices, this new technology, to bring some short-term benefits.

iPhones are the opiate of the people.

Exactly. Because of our reliance on these new devices, because we are willing to use the simplest passwords, because creating a complicated password is too time-consuming, we end up ignoring even threats to our personal data. So can we rally enough people to meet this threat?

I think the default path is what you described. I would add privacy to it. People have given up on privacy, if they won't do the basic things on security, and they have given up an enormous amount of power. And that power hasn't even just gone to the government. Power has really gone to the tech companies, who have enormous influence over the government. And unless people get out of their apathy, that's, you know, certainly where the United States is likely to stay. It's only if there is mass action and if people realize what has happened to them. There were huge protests specifically directed towards Elon Musk, and he was, as far as I can tell, pushed to the side.
You know, those protests were somewhat effective in mitigating some of the more egregious things that he tried to do. So he's at least kind of not at center stage anymore. But short of that, I think the default is the sort of dark world that we're talking about, that reminds me a lot of contemporary Russia, where few people have most of the power, most people have essentially no power, and to a surprisingly large degree people just consent to that: giving up their freedom, giving up their privacy, maybe giving up their independence of thought as these systems start to shape their thoughts. To me that's extremely dark, but not everybody seems to understand what's going on, and unless more people understand what's going on, this is where we're going to stay.

Yes, so to wrap it up, can you give us some glimpse of hope, any idea how we can fight back by using the enormous power that AI and all these devices give to us? Because we are many, we're millions, and they are few, however powerful. So what's the best bet for us to take our future back into our hands, and also to make sure that the political institutions of the United States, the great republic, will survive its 250th anniversary that will be celebrated next year?

I think our powers are the same as they always were, but we're not using them. So we have powers like striking; we could have a general strike. Strikes, boycotts: we could all say, look, we're not going to use generative AI unless you solve some of these problems. Right now the people making generative AI are sticking the public with all of the costs: the costs to the information ecosphere, all the enormous climate costs of these systems. They're just sticking everything to the public, and we could say: that's not cool. We would love to have AI, but make it better. Make it so it's reliable, so it's not destroying the environment, massacring the environment, and then we'll use your stuff. Right now, we'll boycott it.
So we could say, hey, we're not going to do this anymore. We'll come back to your tools later. They're nice, but I think we could live without them. They save some time, and that's cool.

But are you sure, Gary? Let's be realistic. I hate pouring cold water on your concept of resistance, but do you seriously think that people today, starting with students, will stop using ChatGPT?

I think it's very unlikely. But the reality is that the students, by adding to the revenue streams and user numbers massively, students are a huge part of it, they're adding to the valuations of the companies, they're giving the companies power. And what the companies are trying to do is to keep those students from ever getting jobs, and the companies probably are going to succeed in that. The people who are losing their jobs first are students. The students graduating are entering this world where junior workers aren't getting hired as much, probably in part because of AI. In some ways they're the most screwed by all of this. And they have given birth to this monster, because they drive the subscriptions up. So OpenAI can raise all of this money because a lot of people are using it. A large fraction, I don't know the exact numbers, are students using it to write their term papers. If students just stopped doing that, it would actually undermine OpenAI. It might lead to the whole thing collapsing, and that would actually change their employment prospects.

I'm very skeptical about that.

I'm skeptical about it too.

So is it fair to say that regarding AI, short term, you are pessimistic, you have very uneasy feelings; mid term, you're optimistic; and long term, you're bullish?

No, it's more agnostic. It's like, I think this could work out, but we have to get off of our asses if we want it to work out. We may reach some point where people in the US do push back.
We have more of an expectation, historically, of having certain kinds of freedoms than I think the Russian people do. And so it could turn around, and to the extent that it makes me an optimist to think it could turn around, yeah. But generally I like the metaphor that we're kind of on a knife's edge, and we have a choice. It's important to realize that we still have a choice. It's not all over yet. We still have some power to get ourselves on a positive AI track, but it is not the default. It is not where we're likely to go unless we really do stand up for our rights.

So it's not the most optimistic forecast, but at least it's a call for action.

But we could take action.

Exactly. We are America, and we still could, and we should. Our fate rests 100 percent on political will. Gary Marcus, thank you very much for this most enlightening conversation.

Thank you so much for the conversation.

This episode of Autocracy in America was produced by Arlene O'Revallo. Our editor is Dave Shaw. Original music and mix by Rob Smierciak. Fact-checking by Ena Alvarado. Special thanks to Polynica Sparrow and Migringo. Claudine Ebeid is the executive producer of Atlantic Audio. Andrea Valdez is our managing editor. I'm Garry Kasparov. See you back here next week.