The World of Intelligence

Knowledge to understanding and how to get there - part two

32 min
Oct 21, 2025
Summary

Part two of this intelligence podcast explores how data tribalism in large organizations impedes integrated decision-making, and examines the critical balance between speed of decision-making and the development of human wisdom and judgment in an AI-augmented environment. The discussion emphasizes that successful AI implementation requires human-machine teams, functional expertise, and a fundamental shift in tradecraft to leverage technology while maintaining analytical rigor.

Insights
  • Data tribalism in bureaucracies (DoD, intelligence agencies) prevents integrated data access needed for optimal decision-making, even when all parties are theoretically on the same team
  • AI and LLMs are most effective when they free human analysts from data processing tasks, allowing minds to focus on higher-order analysis, judgment, and understanding rather than information gathering
  • Successful AI implementation requires functional experts and operators to be central to system design, not peripheral to tech-driven solutions parachuted in by external vendors
  • Trust in AI systems grows through practical exposure and understanding of both strengths and weaknesses, not through theoretical arguments or warnings
  • The speed advantage of AI-enabled decision-making creates strategic risk if organizational culture and legal frameworks don't empower tactical/operational leaders to act decisively on validated intelligence
Trends
  • Shift from human-centric to human-machine teaming as the baseline operational model in defense and intelligence
  • Tradecraft evolution: moving from data collection/synthesis to data curation and higher-order analysis as AI handles lower-layer tasks
  • Organizational culture lag: technology capability outpacing institutional willingness to decentralize decision authority
  • AI literacy becoming a baseline professional competency across defense, intelligence, and commercial sectors
  • Integration of LLMs into routine decision-making across military targeting, analysis, and strategic planning workflows
  • Growing recognition that adversaries (Russia, China) operate without Western legal/proportionality constraints, creating an OODA loop disadvantage
  • Democratization of AI tools enabling broader organizational adoption and reducing dependency on specialized technical expertise
  • Focus on algorithm bias detection and source reliability assessment as machine-native capabilities rather than human-dependent processes
Topics
  • Data Tribalism in Large Organizations
  • Human-Machine Teaming in Defense
  • AI-Enabled Decision-Making Speed vs. Wisdom
  • Tradecraft Evolution in the AI Era
  • Trust Building in AI Systems
  • Organizational Culture and AI Adoption
  • Functional Expertise in AI System Design
  • OODA Loop Advantage and Adversary Asymmetry
  • Rules of Engagement and Targeting Authority Decentralization
  • Large Language Model Practical Applications
  • Data Curation and Algorithm Bias
  • Intelligence Analysis Workflow Transformation
  • AI Literacy as a Professional Requirement
  • Synthetic Aperture Radar Image Analysis
  • Strategic Decision-Making Under Uncertainty
People
Harry Kemsley
Host of The World of Intelligence podcast, moderates discussion on AI, tradecraft, and defense intelligence
Mike
Guest expert discussing data tribalism, AI implementation, and human-machine teaming in defense/intelligence contexts
Sean
Co-host/analyst contributing perspectives on tradecraft evolution, targeting authority, and intelligence analysis tra...
Quotes
"Humans are not logical. Humans are not integrated. Humans are not machines. And so there's a natural tussle there that you have to work your way through."
Mike (mid-episode)
"You have to be a relationship builder before you need the relationship. That is so critical."
Mike (mid-episode)
"If you don't do that, you will fall behind in your job, in your analysis, and increasingly in your life. You will not be employable."
Mike (closing takeaway)
"We've got to think about things differently. Where does the analyst come into the loop and how are they used to best effect to come up with the so what and the what if?"
Sean (closing takeaway)
"By putting the mind on the pedestal, you're freeing the mind from all the clutter and the noise in the data. You're letting the mind rise to a point where you can actually understand and see things more clearly."
Harry Kemsley (closing remarks)
Full Transcript
Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley. Hello, and welcome back. For those of you who listened to part one of this podcast, you'll know that we're about to pick up the second part now. Thank you for listening. I'm going to pivot us back into the question I was coming to, Mike, which is this data tribalism concept. So back to you on data tribalism. Yeah, so one of the unfortunate artifacts of humans is they're incredibly tribal, right? And anybody who's worked in a bureaucracy knows that. And so one of the issues with data tribalism is, if you have a large enterprise and you have lots of different types of data in there, unit A has one type of data, they use it all the time, and they make decisions based on video data or image data. Another element in the enterprise uses financial data. And if those two could integrate that data, if we had manufacturing data and market data integrated in the same environment, then we could really cook with gas. We could sell stuff. We could sell it fast. We could sell it cheap. But unfortunately, just in the human world that we live in, our tribalism gets in the way. And when you start talking about large bureaucracies like the Department of Defense or a large intelligence agency, this tribalism becomes a real drawback, because now you certainly have classification levels. Certainly, things are appropriately protected from wandering eyes. But in many cases, the data that you need to really understand an environment is maybe not the data that you're familiar with; it may be data that you don't even know about.
And so in a bureaucracy, or a bureaucratically built data environment where there are shields and doors and locks and keys, you won't gain the benefits of that integrated force data. And I think the magic of data applied in a military context is that you have the entire force with a unity of command and a unity of understanding. You always have sensors looking for things to change, and you have that idea that you want to prosecute. And today, the real challenge is that humans won't do that. The air guy will walk into the room, and the ship guy will walk into the room, and a ground guy will walk into the room, and it's, well, yeah, we could do that, but we're not going to do that, because it would be better for us to do this. And the air guy says, yeah, but I need to have three jets on the ramp, so I'm not going to be able to do this, or I don't know if I'm going to be able to get there. And that target, yeah, we looked at that yesterday. We didn't like it. When you're dealing in the human environment, you can kind of negotiate your way to an operation. When you're dealing in a digital environment where somebody just says, no, actually, you are going to have to make that decision uninformed of what I know, then you're setting yourself up for failure, obviously. And at best, with data tribalism, if the Air Force has a sensor and it's really sensitive and they don't want other people to have that data, well, they won't let you use that data. Therefore, your integrated AI solution will not have that data. And if the Army has data about the materiel readiness of their ground forces, they don't want people to know that.
And so even in an environment like the inside of the Pentagon, where everybody's on the same team, presumably, you still have these tribal boundaries, where little bureaucrats all over the place will say, nah, I really don't want you to use that data. I'm not comfortable with that. And then you've got to spend another month going up the ladder to try to find somebody who actually gets what we're trying to accomplish, and then put that into practice. I used to get frustrated by that. But again, human tribalism is rife in our society. It's what we are as humans, right? You can't squeeze that out of somebody, nor can you logic somebody out of something that they feel so strongly about. So why is AI so hard? It's because of those humans, right? That's what we're wrestling with. We're trying to build a logical, integrated environment that's enabled by machines. Humans are not logical. Humans are not integrated. Humans are not machines. And so there's a natural tussle there that you have to work your way through. And the best way to do that is to build trust, right? You have to be a relationship builder before you need the relationship. That is so critical. And it's true in business as well as in artificial intelligence or combat. You have to lean out and say, look, I want to extend a hand here on this problem that we all have an element of. That's what leadership is all about. And so for the leaders that are listening thinking, yeah, I'm not sure about AI: okay, well, get a good set of humans to help you understand it.
When you can balance the way a machine thinks and the way a human thinks, then you are going to be successful as this environment evolves, because you need to be good at both. Yeah. One of the things you said there, Mike, that really struck me was the fact that humans are not logical. They don't like to be integrated. They're very tribal. And I think that is a real impediment to the idea that the AI would collect all of the available data from all of the necessary sources to come up with its view about what it needs to help the commander understand. I totally, totally agree with that. But what that probably does in the meantime, while we're trying to get everyone to the same level of trust, is push the decision making higher and higher up the system, to the point where the only place where everything's available, because they have sight of everything, in theory, would be the highest level of command, the furthest away from the tactical environment. And to use my war story from earlier, that can be very, very tricky. You should be in command and in control; a very famous British general said, in a very good piece he wrote some time ago, that you can be in command and out of control. I would love to see the time when we trusted ourselves, let alone the AI, well enough to share the information we should share. We've done it in our history. We have had times when somebody had a piece of intelligence that they weren't going to share with you, but it meant you had to do something. And that meant you had to just trust that person, either because of the rank on their shoulder, or because you actually did know them well enough. And I guess that brings us right back, does it not, to that need for constant trials, constant exercises, constant implementation practice, to get to a place where we do understand it. Without it, we're never going to integrate the tribes, or indeed the machines.
Sean, just before we step off this, we haven't used the T word at all yet in this conversation. Tradecraft, Mike, is what I'm referring to there. I don't think we've ever had a podcast where we didn't. Tradecraft, fundamentally, in Harry's words, is a combination of three things: best practice in terms of process; great judgment, driven by experience and the understanding you're getting from the environment you're in; and then, increasingly these days, a good grasp of how to enable the first two with technology. So: great process, great judgment, great technology. The combination of those three things is what we aim for in terms of great tradecraft. What I think I'm hearing, Sean, after that intro, is that tradecraft is still tradecraft. You still have to have it. But increasingly now, we need to understand more about the third part, because the technology can help us to some degree with the first and the second parts. If I've got great AI working for me and I trust it, then I can rely on it more than I needed to before, when I might have needed to have great process and great judgment. I can democratize my process to some degree, because the AI is enabling me to do that. How comfortable are you feeling with that statement? Pretty comfortable, actually. And this is the exciting bit for me. If we can start to trust the AI, and how many times have we used that word, trust, in this podcast, and many others, actually, and it's accepted that we have a data lake from whatever source that is available to all, then we can focus on what I think the important parts of the tradecraft are. You've heard me say the 'so what' and the 'what if' before. But really doing that analysis: check your assumptions, make sure your workings are right, cross-refer to the exam question. Am I answering the right question, as opposed to just answering what I know about something?
That releases the analyst, who right now, as far as I can tell, spends most of their time on Excel spreadsheets, and doing what I've just done: going all over the internet on various different sites, et cetera, trying to find stuff. Because if it's all there, and you can trust it enough to go, right, okay, I've got my database now, what does it mean, then the nuances of tradecraft become even more, I was going to say easy, it's never easy, but more accessible, and you have more time to do them. We've talked about saving time before, which is a really key element. Isn't that the key point, though, that the technology is really just giving you time? When you're sitting there running the G2 section, Mike, and you've got all this data swimming around you, you don't know what it's trying to tell you. It hasn't been organized, collated, summarized; it's just a flood of data in front of you. Don't you spend the next N hours just trying to sift through it, trying to work out what the hell it's telling you, when that's what the machine does for you in the first instance? It says, Mike, this is what I think this data means. This is what I'm starting to see as a pattern. And then you can step from that straight into the next part of your process. Isn't that what data is doing? Isn't that what the machines are doing for you? It's accelerating, enabling. It is. I think it's more than that, though, honestly, because the machines are suggesting, right? So I do think Sean's comment about tradecraft is really important. What does tradecraft become? It's going to be different than what it was. If the core data environment is a given and shared, then the human analyst doesn't have to worry about that layer, because they're enabled by that core data environment. And that core data environment can be very dynamic, right?
Like if you're hoovering up news services and things from other languages and all these other sources, machines get really good at helping you understand: ah, we've seen this before, remember? That kind of stuff. And machines can do that now. Your tradecraft becomes much more about the data curation, because the machines can do a lot of the analytical synthesis at a scale that humans never will, right? I mean, you only have two eyes. You can only look at so many screens at once. So you're going to have to have machines to do some of that, and we can't be afraid of that. I think that's really important, because that bottom layer of understanding and filtering data is kind of table stakes. You have to have that environment, and you have to know if there's a bias in this Taiwanese report or whatever; your algorithms need to know that, too. Like, yeah, you know what, this source, whatever it is, has had a past of this kind of reliability or unreliability. All of that's possible in a machine environment, too. And I think one of the really important points here is that the data environment is an augmentation of what you have today already, right? You think, well, I'm not sure if I can trust this. Well, can you trust Skippy, who's reading that article downstairs in the basement of the intelligence agency? Can you? Does that guy know the environment he's dealing with? Certainly from a global perspective, if you're reading things across the globe, you have to understand context across the globe.
If you read something that happens in Bolivia, it's different than something that happens in the Czech Republic, right? And so the cultural nuances that bias data are things that can readily be handled by a machine, because a machine can tell you that. So I think that's really important. One of the probably most important points, once again, goes back to the human relationships, but you also have to go back to how you build the machine environment. Because I think at very senior leadership levels, they think, oh, well, we'll just bring some tech bros in, and they'll code up this decision process we have, and we'll be working by tomorrow afternoon, right? And you're like, oh, you poor child. My sweet summer child, you have no idea the complexity of this ecosystem. And so what really has to happen, and I saw this all the time when I was the JAIC director: oh, well, these tech bros, here they are, see, their parachutes are coming in. They're going to have this all done by this afternoon. And, dude, they know nothing about warfare. They know nothing about operations. They know nothing about the restraints and the constraints. So it is so important, when you're building artificial intelligence environments, that real operators or real analysts are part of that conversation, right? They have to be the core of it. The tech bros can parachute in, but the tech bros have to ask, hey, Maureen, why are you doing that? And why are you doing it that way? When you do that, and you start to expand that, you build it from the bottom up. The tech bros can help build it. They're great. They make a lot of money, and so they want to do this. But it's so important that functional expertise is the primary capability that we're exploiting. Functional expertise. Hey, you're a pilot. You know something about flying, right?
Let's talk about that, right? And let's get the machine to understand that environment. And just to extend that a little bit, the machines are so capable now. Let me give you a great example. I'm a Marine, right? I don't know what an accent wall is. I don't know how to decorate a house. I don't know any of that stuff. But my wife said, hey, I want an accent wall. So I said, okay, sure. I whipped out an LLM, took a picture, and said, hey, what do I do here? And the machine, now think about this, the machine's looking at this and saying, well, given that pillow there and that picture on the wall, you probably want to lean toward this, and you want to have this kind of texture, and if you add this chair rail, it'll match that picture or that stained glass window or whatever. Holy cow, machines can do all of that stuff now, way better than this human, anyway. So let's take advantage of that, right? Let's build the algorithms that say, yeah, in this environment, in this country, that's happened three times, and each time it was a response to this. It's the same flow, right? It's the same muscle movement, in the image environment, in the language environment. Can you actually code that in a way that makes sense? Can you curate that over the long term, because things change? And how do you do that? That is so critical. Well, I'm going to come to you in just a second, Sean. I want to start bringing this human element to a close to conclude this podcast. But I have to ask, Mike, when you presented your plans for the accent wall to your wife, did she declare, who are you and what have you done with my husband? Yeah, exactly. Like I said, an accent wall? I don't know. What does that mean?
By the way, shortly after recording this podcast, I'm going to go and find out what an accent wall is, probably by checking with an LLM somewhere. Absolutely. Use an LLM and you'll stay out of trouble. Yeah, it'll give me a good answer, no doubt. Let's start bringing this together at the back end. I said at the beginning of the podcast that we would talk about how this all comes to an important crossroads. If we're going to get inside the decision-making cycle of our adversary, it almost always means we're going to make decisions better and faster than they do, right? And there's an implication there for accuracy and speed. Well, doesn't that run against the equally important, in my opinion, need to build up wisdom, judgment, and understanding? Those, in the human environment anyway, take time. So what I'm looking for here, Sean, is your initial thoughts on how we balance this. How do we balance this need for, in quotes, instant situational learning or understanding against the need to build up wisdom and judgment? How do we do that in the modern era? I know, before you even start to answer, that's an impossible question to answer. Yeah, no, I appreciate that. But that's where the two things come together for me in this conversation. It is, and it's a really difficult one, particularly if you take it in the strategic context. The more information somebody has doesn't necessarily mean the more wisdom they have, but it does mean the more they want to make decisions. We've already seen a world in the security environment, and as you know, I'm a targeteer by background, where we would choose targets against the designated intent, the aim that we were trying to achieve: okay, this is what we're trying to achieve, therefore derive targets from that. We'd have rules of engagement. We'd have a targeting directive. It was all there.
So we knew about legality, proportionality, et cetera, et cetera. I'd have a lawyer literally sat next to me in the targeting board, as well as all my analysts. And yet on 99% of occasions, I would still have to put a target all the way up to the very senior levels in Whitehall to say, can we hit this target? Absolutely ridiculous. Now, I'd like to think that things have developed significantly since then, although I'm not entirely convinced. But at what stage does that trust say, right, now go for it? You've got the guidance. You've got the authorities. Now just go for it. And this is in all sorts of areas. So there's a cultural issue, but there's also a legal issue. We in the West are so constrained by our own legalities. No, that's a very bad way of saying it. Of course we should be constrained by legalities, but we should understand them enough to know the difference between risk, proportionality, all these other things, so we can just do things. We don't, and we're struggling to get there. You look at the adversary, though, which is really the important thing: they don't care about that. You look at the way that large parts of Ukraine are being rubble-ized by Russia, and the way that China is acting in certain areas; they don't have the same understanding or take the same care that we do. Now, the problem with that, regardless of whether you use AI effectively or not, is that you are getting outside of your own OODA loop, if there is such a thing, and the enemy can react first. So you might have the best information earlier, but if you're not prepared to act on it and trust it, then you're in a bad place. That's the concern that I have right now: I don't think we're there yet. Right. So this decision-making moment, let me just give you a counterpoint. I remember a situation looking at synthetic aperture radar images, which to my eye looked like Rorschach inkblots. I could have seen a pig, a cat, a dog with a hat on. I didn't know what I was looking at.
The decision about whether this was a legitimate target ultimately, in my opinion, on that day came down to the young tech sergeant who was staring at it and saying, yep, that's definitely the target we need. I couldn't interpret the image at all. And by the way, even if that image had been sent up to Whitehall or even beyond, it would not have been understood unless they were SAR imagery analysts. And so the counterpoint would be: where does the decision actually get made? Sometimes it can be an ultra-tactical position, or it can be an ultra-strategic one. But isn't that where the balance needs to be struck? With this mind on a pedestal, they now understand what they need to understand. They can make a decision that's going to be decisive and effective, getting inside the OODA loop of the adversary, but they just can't take it. They just can't take that decision. Yeah, exactly. Exactly. In a human-machine team, humans and machines both have their role, and you cannot get by with only one of those, right? I mean, humans can do a lot by themselves, and machines can do a lot too. But what we're talking about here is optimizing that relationship between humans and machines. And so you have to think of it that way, right? If you look at artificial intelligence at arm's length and say, well, I heard that it was bad, or I heard that it sometimes misspelled a word or what have you: if that's where you're stuck, then you need to unstick yourself, right? You need to keep learning until you really do understand the advantages and risks of artificial intelligence solutions, the right way to interact with those systems, and how you interact with the outcomes. Understand the way to do it. Understand the risks that you're taking. Understand the nuances of your application environment, so that you know if something's wrong or it's not wrong, right? That is really important. And I think more and more people are on that journey, and they're doing well.
The fact that you can operate a large language model on your phone, wherever you are, seven days a week, has really helped, because people are starting to see the accent wall use cases, or the which-car-to-buy use cases, or all of the other things that are very mundane and very practical. But you need help with those decisions if you want to optimize them. Optimizing decisions is not only for military environments, right? It's just that it's really important in those environments, because the consequences are so grave. But it's coming from those non-critical environments and the use of AI and large language models there that people are starting to become more comfortable. I think, to your point earlier, that trust and understanding of the strengths and weaknesses is growing. And by the way, Sean, you said earlier that we've used the word trust a great deal. We've also used the word understanding in this conversation a great deal. The two are very closely linked, are they not? In order to trust something, you have to get to a degree of understanding. And that understanding, I think, is growing by the fact that, as you say, I can walk around with my telephone in my pocket, my cell phone, and I can actually punch a few keys and get an incredible answer on a very complicated topic, very fast, by using a large language model, which is helping me understand these strengths and weaknesses. Gents, I'm conscious that we're an hour in, and we could probably spend the next three hours going through the next couple of topics, but I'm going to pull stumps, a reference to the game of cricket in the UK, when the umpire says stop. We'll pause there, just because I think this is a conversation that we should take further, and we don't have time today to do that. So let me pause the conversation by first of all saying, Mike, thank you for giving up your time and your experience and expertise in this conversation. Very, very grateful for doing all of that for us.
But before I let you go, one further thing, which we always do at the end of the podcast, and that is the one takeaway. If you wanted the audience to know one thing out of this whole conversation thus far, what would it be? I know that Sean's probably been scribbling down a couple of ideas on the way through. And Sean, I'll let you go before me today, which is unusual, because I normally let him go last. Mike, if you had one thing you wanted the audience to know from this conversation, what would it be? If there's one thing that I would really wish for everybody, it's that you take the time to understand the new tools that are available to you. Because honestly, if you don't do that, you will fall behind in your job, in your analysis, and increasingly in your life, as these tools become integrated in all of our human and machine environments. Operating in a commercial environment, or even operating outside of your job environment, it's going to be necessary for you to be literate in these things. Do not say, well, I'm not sure about this AI, I heard it's dangerous. No. Go find out yourself. Use it. See where it works. See where it doesn't work. Imagine uses or opportunities in your house, in the way that you do business. This is a transformational technology. And when I say transformational, it means transform: the form changes, right? We are in a transformational age. If you aren't changing your form, then you are not going to be ready for tomorrow. You will not be employable. You will not be able to run your household. You will not be able to do these things. So it is critical that you understand this transformational technology, because it, in turn, transforms our society. It transforms the way we deal with data. It transforms the way we fight.
If you don't understand these things, you are out of the conversation, and that's not a place you want to be. Transformation: evolution by another word, and all the more important these days, as this transformation is happening so fast. Evolutions used to take millennia; these days, the evolution of human culture might be measured in hours or days, in some regards. Sean, your takeaway, please. Yeah, mine nests into this perfectly, actually. We have to come to a time where we are comfortable with LLMs and algorithms, and they're just a normal part of how we do business. But I don't know if enough serious thought has been put into, okay, how does that change how we think about how we do intelligence? Our tradecraft: what does tradecraft 2030 look like? Because it's going to have to change, and it should do. And if we get it right, it will actually improve everything so much that we'd be able to come up with really good foresight, really good analysis, all the rest of it. But I don't know how much work is going into that. It can't be just about, okay, well, the AI is over there, right, we've got all this now, so we'll just do the normal business like we've always done: we'll cross-refer, we'll use spreadsheets, et cetera, et cetera. We've got to think about things differently. Where does the analyst come into the loop, and how are they used to best effect to come up with the so what and the what if that is needed by the commander? So I think that's a piece of work that really needs doing. Thank you, Sean. For me, it would be something that was said quite early on by yourself, Mike: by putting the mind on the pedestal, you're freeing the mind from all the clutter and the noise in the data. You're letting the mind rise to a point where you can actually begin to understand and see things more clearly.
That, for me, is a vision that caught my attention as you said it, because if that's what it means, given that we do not lack data, it's everywhere, it's surrounding us, it's swarming around us, the ability to rise above it, find that degree of calm, clear vision of what's going on, to understand and then make decisions, that, to me, makes it all the more important that we pursue this mind on a pedestal. And on that, I'll finish by saying thank you one more time, Mike, for your time today. Thank you. If there is nothing else from either of you, I'm just pausing. Let me say thank you to the audience for coming through this journey with us over the last couple of episodes. It has been a great conversation, Mike. I'm going to say thank you for the third time. Really grateful for your time. If the audience has any questions or anything they'd like us to explore still further, there is now an email address in the show notes where you can send your requests. Not a lot of people are doing so yet, and we will happily attend to those in future podcasts as and where we can. With that, nothing else to say other than thank you for your time. Bye-bye. Thanks for joining us this week on The World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts, so you'll never miss an episode.