Welcome to the AI Brief from the YPO Technology Network. I'm Stephen Forte. Look, some days the AI news cycle hands us three things worth your attention. Some days it doesn't. When it doesn't, we're not going to waste your time reading press releases out loud. We pivot. And today we pivot to a problem every single CEO I talk to is dealing with. The senior guy who knows everything. The one who's been there forever. The one who's about to walk out the door and take 15 years of judgment with him. The old salty guy. And how AI in 2026, for the first time, lets you build a mini-me of him before he leaves. This is not a vendor pitch. This is a framework.

Let me set the scene. You run a 220-person specialty manufacturer outside Cleveland. Your shop foreman is 64. He's been with you since 1998. He can tell by ear which bearing is about to go. He knows which customer will never pay net 30 and why. He remembers the change order from 2011 that reversed a prior engineering decision. And he knows why, because he was the one who pushed for it. None of that is written down. It was never written down. He's retiring in nine months.

You have three options. You can try to document everything, which never works; more on that in a minute. You can hope his successor picks it up by osmosis, which takes 15 years you don't have. Or you can do what a handful of Fortune 500 companies started doing two decades ago, what a startup called Eudia just did for Duracell last month, and what you, a mid-market CEO, can now afford for the first time. You can build the mini-me.

Here is the thesis I want to apply to every beat of this episode. The goal is not to replace the expert. The goal is to never lose the conversation. Today, we are going to work it that way.

Let me give you the image that makes this stick. Five guys around a campfire, passing stories. That is how tribal knowledge has worked forever.
Cave painters. Master carpenters training apprentices. The plant foreman walking a new engineer through why we do it this way and not that way. The campfire is beautiful. The campfire is also the problem. It does not scale. Only five people can gather around one fire. It does not persist. The stories live as long as the storytellers. And when the last storyteller walks away, the fire goes out. The campfire does not scale. The campfire goes out. That is the problem every CEO listening has, whether you've named it or not.

Before I give you the framework, let me put some numbers on why this matters. 10,000 Americans turn 65 every single day right now. This is the demographic spine of manufacturing, construction, energy, and the trades. The people who know how the plant actually runs, not how the org chart says it runs. IDC estimated Fortune 500 companies lose $31.5 billion a year from knowledge that simply was not shared. For a mid-market company, assume low seven figures annually per critical expert who walks out without a handoff.

And here is the case study that should focus every CEO's attention. In the late 1990s, Boeing offered an aggressive early retirement incentive. A lot of senior engineers took it. Boeing later booked a $1.6 billion charge tied directly to that exodus. Analysts have since traced parts of the 737 MAX MCAS software failure to the loss of senior engineers who understood the original control system reasoning. That is the real cost of letting the campfire go out. It is not just productivity. It is a plane.

Now, here is the counterargument, the one I want to get out of the way. You might be thinking: we already have SharePoint. We already have Confluence. We already have a wiki. We tried. Nothing stuck. You are right. 60 to 70 percent of traditional knowledge management systems fail to deliver meaningful return on investment. NASA did an internal survey on offboarding.
Only 14 percent of departing employees felt their knowledge had been successfully captured. 86 percent said no.

Why do wikis fail? Because of something Professor Ruth Clark, one of the leading researchers on expert cognition, calls the 70% rule. When an expert tries to explain what they do, they naturally omit roughly 70% of the critical information they actually use. They have forgotten what they know. It has become automatic. You can give them a blank Word document on a Friday afternoon, and they will hand you back the 30% they can still articulate. The other 70%, the judgment, the pattern recognition, the reasoning, that part does not come out on command. Michael Polanyi, writing in 1966, gave us the line that defines the entire problem: we can know more than we can tell. Sixty years later, that is the sentence AI finally does something about.

What has changed in 2026 is not the desire to capture this knowledge. The desire has always been there. What has changed is that large language models can now ingest 15 years of an executive's email, cross-reference it against their change orders, their incident reports, their customer correspondence, and generate the interview questions the expert never thought they needed to answer. The AI can find the 70% the expert cannot articulate by finding the gaps between what happened, which is in the documents, and why, which only lives in their head. That is the breakthrough. And it is why I am going to give you a framework today called the Campfire Protocol. Seven phases. Each one earns its place. Each one has a case study behind it and a thing that goes wrong if you skip it.

Phase one, consent. Before a single recording happens, the retiring expert signs on as a co-architect, not a subject. They approve the scope. They retain the right to flag outputs they consider wrong or damaging. The contract language matters. It must say the employee "hereby assigns," in the present tense, not "agrees to assign."
There is case law on this going back to 2008, DDB Technologies v. MLB Advanced Media, and it still holds. You need a separate persona rights license, because California's AB 2602, effective January 2025, makes a digital replica contract unenforceable unless the proposed uses are specifically described or the individual had legal or union representation. You need recording consent in the 12 all-party consent states. And critically, you need the expert to want this. If they feel cloned and fired, they withhold the 70%. You get nothing. Make them the co-architect, not the subject.

Phase two, corpus. Before you interview anyone, you ingest everything. 15 years of email is typically 300,000 to 560,000 messages for a senior executive. About 15% to 25% of that carries real decision signal. Add meeting recordings if they exist, change orders, incident reports, RFP responses, technical drawings, photos from field visits, the dead archive. Microsoft Presidio, open source, runs on your own hardware and redacts PII across 50-plus entity types in 48 languages. AssemblyAI transcribes audio at 90 cents an hour with speaker labels. The tools are production ready. The work is mostly pipeline engineering.

Phase three, discovery. This is the one nobody anticipates, and it is the one that fixes the 70% problem. You let the AI read the entire corpus and generate 500 to 2,000 candidate interview questions. Humans triage those down to the top 100 to 300 for live sessions. The AI finds the unresolved problems, the unexplained decisions, the reversed engineering choices, the incidents that show up in the record but have no narrative attached. These are the questions the expert did not know they needed to answer. In March of 2018, there was a change order on Project Titan that reversed a prior engineering decision. Why? That is a discovery question. The expert can answer it. They just could not have generated it.

Phase four, interview. Forty-five to 60-minute sessions, two to three times a week, for eight to 12 weeks. Not free-form chat.
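The gap-finding move in phase three is simpler than it sounds. Here is a minimal sketch, not any vendor's product: the record shape, the regex redactor (a crude stand-in for Microsoft Presidio), and the reversal keywords are all illustrative assumptions. In practice an LLM would phrase and rank the questions, but the core logic is the same: find decisions in the record whose "why" is missing.

```python
import re
from dataclasses import dataclass

# Hypothetical record shape for illustration. A real corpus would be built
# from email, change orders, and incident reports after PII redaction.
@dataclass
class Record:
    doc_id: str
    date: str
    summary: str
    rationale: str  # empty when the "why" never made it into the record

# Stand-in for a real redactor like Microsoft Presidio: one regex that
# masks email addresses. Presidio covers 50-plus entity types.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    return EMAIL.sub("<EMAIL>", text)

# Crude lexical markers for "a prior decision was reversed."
REVERSAL_MARKERS = ("revers", "roll back", "rolled back", "supersede")

def discovery_questions(corpus):
    """Generate candidate interview questions from gaps in the record:
    decisions with no stated rationale, and decisions that reversed
    earlier ones."""
    questions = []
    for rec in corpus:
        summary = redact(rec.summary)
        if not rec.rationale.strip():
            questions.append(
                f"[{rec.doc_id}] On {rec.date}: '{summary}'. "
                "No rationale appears in the record. Why was this done?")
        if any(m in summary.lower() for m in REVERSAL_MARKERS):
            questions.append(
                f"[{rec.doc_id}] On {rec.date} a prior decision was "
                f"reversed ('{summary}'). What changed in between?")
    return questions

corpus = [
    Record("CO-2018-041", "2018-03-12",
           "Reversed bearing spec on Project Titan, per j.smith@example.com",
           ""),
    Record("CO-2019-007", "2019-06-02",
           "Switched supplier for housings", "Lead time risk after Q1 audit."),
]

for q in discovery_questions(corpus):
    print(q)
```

The first record generates two questions, one for the missing rationale and one for the reversal; the second, which already carries its "why," generates none. That asymmetry is the whole phase.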
You use a method called the critical decision method, developed by the psychologist Gary Klein in 1989. CDM probes for the reasoning, not the answer. It captures the alternatives the expert rejected, not just the one they chose. CDM has documented 81 to 100 percent reliability in capturing expert decision patterns. That framework was invented 35 years ago for Air Force pilots and firefighters. In 2026, it scales, because an LLM can ask the questions, follow the contradictions, and transcribe, label, and embed everything in real time.

Phase five, shadow. This is the phase I want every CEO listening to remember. The bot goes live three to six months before the expert's actual retirement date, to a small internal audience. Every query is logged. Every low-confidence answer, every wrong answer, every time someone says the bot got this completely wrong, it gets flagged automatically. And it goes back to the expert in the next week's session. This is the feedback loop. It fixes the hallucination problem. It also fixes the cultural problem. The team learns to trust the bot while the expert is still there to correct it in real time. When the expert retires, the bot already has organizational credibility. Skip this phase and you get IBM Watson at MD Anderson: $62 million spent, zero clinical deployment. A cautionary tale we do not have to repeat.

Phase six, handoff. On the expert's last day, the bot is mature enough to handle 70 to 85 percent of routine queries. The rest route to a named human successor. The retired expert optionally stays on retainer, one or two days a month for the first year, to handle edge cases and approve model updates. This is not a cliff. It is a handoff. NASA has been doing this since 2007 with its Spacesuit Knowledge Capture Program, Apollo-era engineers interviewed while they were still reachable. That program is still running in 2026. 165 events, 90 released publicly. A handoff that lasted a generation.

Phase seven, stewardship.
Named ownership, quarterly drift reviews, and the uncomfortable question every CEO has to answer eventually: who's next? The first mini-me is the pilot. The real program starts when the second, third, and fourth critical experts go through the protocol in parallel. And here is the stewardship rule that keeps you out of court. The bot is a knowledge tool, not a management tool. Never used for hiring, firing, evaluation, or promotion. Mobley v. Workday was certified as a collective action in 2025. iTutorGroup paid $365,000 to the EEOC for AI-driven age discrimination. The first time a manager cites the bot in a performance review, the program is legally exposed.

That is the Campfire Protocol. Seven phases: consent, corpus, discovery, interview, shadow, handoff, stewardship.

Let me show you what good looks like. Royal Dutch Shell has been running a program called ROCK, Retention of Critical Knowledge, since the mid-1990s. Structured retirement interviews feeding a knowledge hub backed by communities of practice, with audited savings of $300 to $400 million a year. No AI. Pure human discipline. That is the floor.

Now, the ceiling. In April 2026, a startup called Eudia built something called the Duracell Brain for Duracell, the battery maker Berkshire Hathaway acquired from Procter & Gamble. They deployed what they call expert digital twins, LLMs trained on specific senior lawyers' contracts, playbooks, and reasoning patterns. The result: Duracell's head of legal, Gary Hood, reports a 50% reduction in the time and cost of contracting. Graybar, 98% faster legal due diligence. Coherent, 78% faster contract review. Across 50 enterprise customers, $3.4 billion of commercial opportunity processed. That is not theoretical. That is April 2026.

And here is the part most CEOs will find surprising. You can afford this. For a mid-market deployment, call it 50 seats on a knowledge platform, meeting capture, LLM licenses, vector storage, you are looking at $18,000 to $24,000 a year recurring.
The one-time build, pipeline engineering, PII redaction, a boutique knowledge elicitation consultant to run the interviews: $70,000 to $175,000. Some of you listening spend more than that on your holiday party.

Give Eudia credit for what they built. They took a framework that worked at Shell for 30 years on human discipline alone and compressed it into software that runs in months, not decades. That is a rare thing. It deserves to be said.

So let me bring this home. Walk into your office tomorrow. Look at the org chart. Find the name that makes you wince a little when you think about them leaving. The person everyone calls when the thing breaks. The person whose inbox has the answer that is in nobody else's inbox. That person is your campfire. You are not going to replace them. That was never the goal. The goal is to never lose the conversation. You are not cloning a person. You are keeping the fire.

The Boeing lesson is not that knowledge transfer is expensive. It is that the absence of it is more expensive. In money, in judgment, in some cases in human lives. You have the tools. You have the framework. You have the demographics telling you the clock is running. What you do not have, if you wait, is the expert. Start the Campfire Protocol before you need it, not after.

That is the YPO Tech Network AI Brief for Thursday, April 23rd. I am Stephen Forte. If this was useful, send it to a fellow member who has an old salty guy of their own. I will be back Friday with more. Until then, stay sharp.