So no matter how you cut it, it's really hard to notice something that hasn't happened. Hey everybody, Todd Conklin, Pre-Accident Investigation Podcast. It is time once again for us to hang out and chit and chat, chitty chat as it were, about all the things that matter and all the things that are good. How's your year going? Are you practicing gratitude? Are you thinking about beginning with success in mind? All the things we've talked about so far this year, which is, you know, it's been kind of a chatty year.

Hey, sorry about the technical difficulties with last week's podcast. Little bit of computer glitchiness there, but they fixed it, so it wasn't a problem. But I wouldn't have noticed it. Luckily, several of you reminded me that it had gone sour, and because it had gone sour, I was able to contact them and fix it. But anyway, that's one of the things: as technology advances, we start to introduce more ways for a system to fail that did not exist in the past. You know this, right? I mean, that's pretty common. And so the more we automate, and I'll just be really honest with you, the podcast industry has become quite automated. There's lots of things that I used to have to do 10 years ago that I don't really have to do very much now, which is kind of, well, it's pleasant. It saves time and money and energy and effort. I don't know if it saves money. I just made that up. The money part is completely made up. But it does introduce new ways for systems to fail. And what's interesting is, my practice is you do the podcast, then you listen to it. But I don't listen to it at normal speed, because why would I? I mean, I've just had to suffer through it. So I speed it up and listen to it, and you just make sure all the transitions and stuff are good. That's kind of what you do. And then you submit it to the podcasting host. What a name. And they take it from there, and they put it on Spotify and Apple and all the stuff that the podcast services go to. And so you do kind of a quality check. Maybe I wouldn't call it that; I'd call it sort of a verification and validation that it recorded. So you've got that going, and then you submit it to the universe and it takes over from there. That's the part that's super automated. That's the part that screwed up. Okay, I don't really need to make more excuses on that.

We are in an interesting January here in New Mexico. We finally got some snow, but not very much. Don't get too excited. Although I bet it did really help the ski industry people, because we had kind of none. In fact, Mark Yeston said the other day, when he was asked why he wasn't up ski patrolling, that the reason he's not skiing is the same reason he doesn't ski in June, which is actually a super funny thing for Mark to say, because there was just no snow. So, but we finally got some snow, but then almost, you know, two days later, it was mid-fifties Centigrade. No, Fahrenheit, sorry. Mid-fifties Centigrade would have been super hot. Anyway, mid-fifties Fahrenheit, and the snow kind of went away. But other than that, life's grand, I think. I mean, not a lot of complaints.

Good stuff coming up. We've got kind of a special project. I don't think I'm supposed to talk about it yet, but we were going to do a meeting later this month in Vancouver with RaDonda Vaught, the nurse at Vanderbilt who had the medical error. She was on the podcast, but there were some technical difficulties.
I think I might be speaking out of school, but I think there was some trouble getting her in and out of Canada, which then delayed the meeting, or kind of moved the meeting. So we're talking about actually doing another one, and I think it's going to be kind of late March. But that's top secret, so don't blab it out, because I haven't actually fully officially heard if that's happening again or not. But if we get the chance, I was really looking forward to just sitting down with her, RaDonda, and chatting with her about the story she told on the podcast. Remember, that podcast was hard to hear because she was coming in by telephone. But that's kind of, sort of, in the works. If you're interested in that, it would sure be great to know, because I think there's much discussion to be had. And quite honestly, I don't know about you guys, but I think there's much for us to learn. Like, I feel like that is a target-rich environment for learning. And RaDonda's story is really interesting. And what I was kind of thinking of doing, before this all got kind of technically complicated, is charting the story out in real time, using some kind of, I would use like an expanded causal factors chart, and in real time sort of have her tell the story and chart it, kind of like we would do if we were doing an investigation. I don't know. I thought that'd be kind of an interesting way to spend the day, and then we could sort of dig in from there. But that's on the quiet. I don't know if that's even happening, but it's certainly on my mind, and I'd really like to have that opportunity. And I kind of understand why it might have been hard to do that meeting in Vancouver. But nonetheless, that's kind of the lowdown blowdown of what's happening.

So let's talk a little, because I teased it earlier, but I want to talk to you about a phenomenon that I've been thinking about a lot lately. And that's the idea that it's really hard to measure something that doesn't happen. Or, as I said in the introduction to the podcast, it's hard to notice something that hasn't happened yet. And I think we should talk about the fact that we're in a really interesting pickle. Now, I don't say that we have the worst job in the world, because I think the worst job in the world is the port-a-potty guy at a chili cook-off. That's the worst job in the world. But one of the complexities in our job is that if we do our job really well, nothing happens, right? I mean, that's just a part of it. And we've struggled with this as an industry, as a discipline, for many, many, many years, which is probably part of the reason why our metrics tend to measure the things we don't want as opposed to measuring the things we do want. So one of the golden rules of creating metrics, especially operational metrics, is you really want to measure what you want to see. Because of the way incentives work, if you measure what you want to see, you're incentivizing more of what you want and therefore getting less of what you don't want. Makes sense, right? I mean, this is all kind of basic human nature. But the challenge has been that if we do our job really well, the answer is nothing bad happens. Even though we know, and we've had thousands of conversations around the fact, that when nothing bad is happening, it doesn't mean nothing bad happened; it means that the consequence of a potential event didn't pay out, didn't succeed. And so we're in this really interesting place where we have what they would call in algebra a null set.
So for every set that's, you know, a group of numbers, there's a null set, which is not a group of numbers; it's empty. Did that make sense? Like, because nothing happens, there's nothing to measure. Because there's nothing to measure, there's nothing in that set. And so we have this really interesting challenge where, because of the work we do, we've been sort of forced to talk about and show progress. We want to get better and our companies want us to get better. Organizations want to improve. But one of the ways we can talk about that improvement is by understanding that every time something doesn't happen, that's actually a positive indicator of improvement. So I used to tease my boss back in the olden days, back in the laboratory, that perhaps maybe I should just produce a report every day of all the people who didn't die. Right? I mean, that would be an interesting report. It'd be a big report at Los Alamos. It would have been a lot of pages. And he would always kind of wink at me, and he got it. But the bottom line is that's not a terribly practical report to produce, and I'm not sure it has a ton of value, but it does help illustrate that what we want to have happening is happening. And it allows us a way to sort of measure what we want to have happening, as opposed to a report that talks about the injuries for the month, or the lost time, or a significant event, or a high-potential injury. That's something that's actually relatively easy to measure. What it's done for us as an industry is really made the conversation very hard to have as we improve.

So I'll be the first to admit, and I think probably you would be with me on this, that traditionally how we've measured safety, the absence of accidents, is we've measured the number of people we've hurt to determine how good we are at managing high-risk, highly complex work in a rapidly changing environment. And say what you want to, that's a terrible metric. It's just not a good way to measure it. However, and there's a big fat however there, it's actually probably been pretty effective in emphasizing the importance of the work, and it's a pretty noteworthy stopping point on a management dashboard. The problem is, and it's always been, that we want those numbers to be predictive. So if the numbers go up, we can say, we're in trouble, we need to do more, something bad is going to happen. When the numbers go down, we can say, we're doing great. We can sort of knock it back a notch or two. We can send the resources elsewhere and do less, because we finally got a handle on injuries. Well, pretty much everything I just said is not true. There is no real connection between frequency, the number of events, and severity, the consequence of events. That connection simply doesn't exist. And in fact, the connections exist in a much different way. We know, and Matthew Hallowell and the gang at CU are kind of helping us understand this, that when there's lots of energy, the chance of a catastrophic loss is way more connected to the control of the energy than it is to the frequency of the event. So you could have one thing happen one time, and if there were no controls on it, it could be catastrophic, and never happen again and never have happened before. And that's the unique challenge with events: for the most part, every event that happens in your organization has never happened before and will never happen again. Not in a detailed way, not with the same context-rich conditions that caused the event to happen. And so we have this desperate need to predict.
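If it helps to make that null set concrete, here's a minimal sketch in Python. The injuries list and the severity field are completely made up for illustration: the point is just that any statistic you try to compute over a period in which nothing was recorded comes back empty or undefined, which is exactly the measurement problem we're talking about.

```python
# A made-up month of recorded injuries. When the work goes well,
# this list is empty -- the null set from the discussion above.
injuries = []

def mean_severity(events):
    """Average severity of recorded events; undefined when nothing happened."""
    if not events:
        return None  # nothing to measure -- silence, not proof of safety
    return sum(e["severity"] for e in events) / len(events)

print(mean_severity(injuries))  # prints None: the metric has nothing to say
```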
We've always had this need to predict. Everybody has it. That's kind of normal. We have a desperate need to measure, because our companies are engineering-centric by nature and they want hard, clear metrics by which they can manage the future. Because if you can't measure it, you can't manage it. That's what they say all the time. And yet we have a null set. We don't really have anything to measure, because when nothing bad happens, there's nothing bad to measure. So what we've traditionally done, under the old guise of measure what you want, not what you don't want, is look for ways that we could measure what it is we want. So, the presence of safeguards, or the efficacy, the in-place-and-effectiveness, of safeguards. We audit systems. We do validations and verifications of controls. And those are much richer, much more active metrics that actually incentivize the organizational outcomes that we desire. But man, is that hard to do. It's really easy to measure how many people cut their hands, because there's a couple. So you got two every, let's make it up, two every six months. So that's really measurable, and you can make a little chart that says, you know, we had two last year and one so far this year, so we're 50% improved. It's really hard to measure things that are always present, that are always in the system, that we hope will always work. But we should be validating and verifying these things at all times. We should be taking the effort to measure the things we want. Which kind of leads us to the whole question around predictive data, or, actually, what they call it in the safety world is leading data. We want leading data. And yeah, you're damn right we want leading data. Who doesn't want leading data? I want to know what the future holds. I'm super interested in what the future holds. The problem is that, near as I can tell, leading data also suffers from the same absence, the absence of anything bad happening. Because when nothing happens, there's nothing to measure.

This gets even crazier when you look at events and you say, well, the workers should have seen, the workers should have known, the workers should have known that that system was energized. The workers should have known that that lockout/tagout procedure, that isolation procedure, is only doable with the left hand. Well, it's really hard to notice things that haven't happened yet. In fact, I would suggest that as human beings, we're sort of finely trained to not pay attention to crap that doesn't matter. There's so much stuff that does matter. Holy crap, you know this, because you're living in the world right now: we are in a position where we can't really spend a lot of time on things that don't matter. And so therefore we don't. And if something hasn't happened yet, well, it hasn't happened. It's hard to notice something that hasn't happened.

So where does this discussion take us today? Because this is, I mean, I even wrote notes down, you guys. That's how serious I am about this discussion. I think it takes us to three places. One is the realization that it really is very difficult to find a set of metrics that measure things that have not happened. Math is just not set up to math that way; it just doesn't work. Secondly, we really should, as aggressively as we can, focus on metrics that actually create desired outcomes, not metrics that measure non-desirable events. And thirdly, it's very important for us to realize that just because nothing is happening doesn't mean nothing has happened. So where do we take all this?
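Before we wrap up, here's a rough sketch of that "measure what you want" idea, again in Python. The numbers echo the made-up hand-cut example above, and the safeguard audit records are entirely hypothetical, just a stand-in for whatever validation and verification records an organization might actually keep.

```python
# Lagging metric: counting what we don't want (numbers made up, as above).
injuries_last_year, injuries_this_year = 2, 1
improvement = (injuries_last_year - injuries_this_year) / injuries_last_year
print(f"Injuries 'improved' by {improvement:.0%}")  # 50% -- looks great, predicts nothing

# Leading metric: counting what we do want -- safeguards validated and
# verified to be in place and effective. Hypothetical audit records.
safeguard_audits = [
    {"safeguard": "isolation procedure", "verified": True},
    {"safeguard": "machine guarding", "verified": True},
    {"safeguard": "pressure relief valve", "verified": False},
]
verified_rate = sum(a["verified"] for a in safeguard_audits) / len(safeguard_audits)
print(f"Safeguards verified in place and effective: {verified_rate:.0%}")
```

The first number is the one that looks good on a management dashboard, but only the second one says anything about whether the safeguards we're counting on are actually in place and effective.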
So what do we do with this, he says, you know, having listened to a podcast of problems that don't have solutions, which is kind of what this feels like. Well, I'm not sure where we need to go with this. I mean, great people need to think about this problem more than I have, because I'm not a great thinker. I'm just kind of one banana peel ahead of the rest of the group right now. But I do know that the challenge is that if we think the system is stable, then the system no longer requires our attention. And we know we get into trouble when, over time, we don't validate and verify the presence of that margin, or of that recoverability, or that rescuability, or controls, or safeguards. We know that's a problem, and we know that's built into our system. How we handle it, well, that's probably a topic that we should focus on this year in the podcast, because I do think this is a really valuable topic. I think it has lots of legs. And I think for the most part, people don't talk about it very much, because the old metrics are really ingrained. They're hard to change. And the new ideas are hard to have.

That's the podcast. It's a shorty, but that's all right. It's January. We can take it a little easy in January. This is hibernation time for us, so I'm trying to really focus on the hibernation part. Learn something new every single day. Bet you did today. Have as much fun as you possibly can. Be good to each other, be kind to each other, and for goodness sakes, you guys, be safe.