The AI Daily Brief: Artificial Intelligence News and Analysis

How to Build a Personal Context Portfolio and MCP Server

25 min
Apr 3, 2026
Summary

This episode introduces the concept of a personal context portfolio—a structured, portable collection of markdown files that serves as machine-readable documentation of who you are, your projects, roles, and preferences. The host walks through building this portfolio to eliminate the 'context repetition tax' of re-explaining yourself to new AI agents, and demonstrates how to deploy it as an MCP server for universal access across AI tools.

Insights
  • Context is the critical differentiator between leading and lagging organizations in AI adoption; most enterprises lack structured context for their AI systems, leading to poor agent performance.
  • Personal context portability solves vendor lock-in by creating a single source of truth about yourself that any AI system can consume, reducing friction when switching between tools.
  • The 'context repetition tax' degrades quality over time—incomplete context transfers mean agents operate with degraded information, not just wasted time.
  • Markdown-first, modular design enables living documentation that evolves with your work while remaining universally compatible across all AI platforms.
  • AI can be your tutor for technical implementation; iterative collaboration with Claude/ChatGPT for troubleshooting and step-by-step guidance is more effective than trying to learn independently.
Trends
  • Enterprise AI adoption is shifting from tool deployment to operating model transformation, requiring organizational context infrastructure.
  • Personal AI context management is becoming critical infrastructure as users interact with 5+ agents weekly; portability will become table stakes.
  • MCP (Model Context Protocol) servers are emerging as the standard for making personal/organizational context universally accessible to AI systems.
  • Markdown is solidifying as the universal interchange format for AI context across all platforms and vendors.
  • AI-assisted self-documentation and portfolio building is replacing manual knowledge capture as the primary method for context creation.
  • Context hub initiatives (like Andrew Ng's for coding agents) signal that context sharing between agents will become a competitive advantage.
  • Notion and similar platforms are positioning enterprise context as a core moat for AI-native organizations.
  • Decision logging and historical reasoning are emerging as undervalued but critical context for AI agents making recommendations.
  • Communication style documentation is becoming essential metadata for ensuring AI outputs feel authentic and aligned with user preferences.
  • The 'agentech era' requires individuals and organizations to treat context management as a core operational discipline, not an afterthought.
Companies
Applied Compute
Michael Chen published research on enterprise AI deployment challenges and the gap between having data and having AI-ready data.
Notion
Announced database agents as a solution for enterprise context management, positioning their platform as a context hub.
Anthropic
Claude was discussed as a primary AI platform for building context portfolios and MCP servers; mentioned in context of the Pentagon's supply chain risk designation and the memory import feature.
OpenAI
ChatGPT discussed as alternative AI platform for context portfolio building; mentioned in context of DoD deal announcement.
Google
Gemini mentioned as one of several AI platforms that could consume a portable personal context portfolio.
xAI
Grok mentioned as an alternative AI platform that could benefit from access to a personal context portfolio.
GitHub
Used as the hosting platform for the personal context portfolio template repository and for deploying MCP servers.
Railway
Deployment platform used to host the remote MCP server version of the personal context portfolio.
People
Michael Chen
Published article on enterprise AI deployment challenges, highlighting the gap between having data and AI-ready data.
Andrew Ng
Discussed Context Hub initiative for coding agents to share learnings and documentation feedback with each other.
Quotes
"Data ready is just a state of mind. The gap between we have data and we have data in a format that an AI system can learn from is enormous."
Michael Chen, Applied Compute
"In a world of agents, everything is about context."
Host
"The context repetition tax doesn't just waste time, it also degrades quality."
Host
"A personal context portfolio is API documentation, but for you—a single source of machine readable truth about who you are."
Host
"AI is zero judgment. There is no risk of you looking or seeming dumb, because there's no one on the other end of the line to think that."
Host
Full Transcript
In a world of agents, everything is about context, and today we are going to help you build your own personal context portfolio and MCP server. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors KPMG, Blitzy, Robots and Pencils, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, send us a note at sponsors@ai-dailybrief.ai. Ai-dailybrief.ai is also where you will find everything that is going on in the AIDB ecosystem and where, and this one is going to be relevant for today's episode, you can access the companion experiences, which are not for every single show but are for many of them. You can find those at play.ai-dailybrief.ai, and everything that I'm sharing in this episode will be linked there. Today we have another episode in our build week series, and boy does this one cut to the heart of building right now. We officially live in the agentech era, and agents, as we know, need context to do their jobs well. And yet context is one of those things that is very simple to articulate and much harder to actually organize in a way that is useful. Now, this is obviously a big problem in the context of organizations. Michael Chen from Applied Compute recently dropped an article on X called What to Expect When You're Deploying AI in the Enterprise. He writes: at Applied Compute, we spent the past six months embedding inside companies to deploy AI into production workflows, i.e. actually sitting in their offices, filing tickets, reading Confluence pages, fighting for access to data, and shipping agents into production that improve over time. There is surprisingly little written about working with large organizations in the age of AI, so this is our attempt to fill that gap.
And big and blaring right at the front is one: data ready is just a state of mind. The gap between we have data and we have data in a format that an AI system can learn from is enormous. It surprises everyone, even teams that have already wrangled internal data at incredible companies. Most enterprise data was never structured with AI consumption in mind. It's difficult to imagine a more challenging starting point for a data project, and at its core, every agent deployment is a hard data problem. Now, he's using the word data, but obviously in this case this is at least partially synonymous with context. One of the big differentiators between organizations that are leading and organizations that are lagging is that the lagging organizations tend to operate without their AI systems having access to context. In other words, they're dropping Copilot on people's heads and hoping it all works out, which is very different than becoming an AI-native organization. Now, there are lots of organizations who are working on the context problem for the enterprise. Just to take an example from the last 24 hours at the time that I was recording this episode, Notion, whose entire play for enterprise AI is basically a pitch that they already have your enterprise's context, announced database agents, which they describe as a team of little librarians in your database, keeping it up to date automatically using context from your page, workspace, and the web. So okay, we have an acknowledgement that context in the enterprise is tough, and we're even seeing a lot of work on the context that agents can provide each other around their tool use. Andrew Ng recently wrote: Should there be a Stack Overflow for AI coding agents to share learnings with each other? Last week, I announced Context Hub, an open CLI that gives coding agents up-to-date API documentation. In our new release, agents can share feedback on documentation: what worked, what didn't, what's missing.
This feedback helps refine the docs for everyone, with safeguards for privacy and security. So Context Hub for agents is all about the context they need to use tools better. And yet you might have spotted that what all of those efforts don't have is an emphasis on the individual. Now, recently we had a moment where the challenge of the portability, or lack thereof, of personal context reared its ugly head, in the wake of the Pentagon threatening and then following through on their designation of Anthropic as a supply chain risk, and OpenAI's quickly regretted decision to announce their deal with the Department of Defense on the same night. There was a big push over the course of the next couple of days to drop ChatGPT and switch to Claude. That was, of course, when Claude hit number one in the App Store for the very first time. Now, into that maelstrom, the team at Anthropic released what they called a feature to make it easier to import saved memories into Claude. Switch to Claude without starting over, they promised. And of course, this is a big deal. If you've been investing in Claude or ChatGPT or Grok or Gemini or whatever system you use, over time it's learned so much about you that the idea of having to explain all of those things to a new LLM once again becomes a reason just not to switch. Now, Claude's approach to importing memory was pretty simplistic. In fact, all it was was a copyable prompt that Claude wrote that says, basically: I'm moving to another service and need to export my data; list every memory you have stored about me, as well as any context you've learned about me from past conversations, etc., etc. Basically, it was a prompt that asked ChatGPT to write up everything it knew about you so you could hand that document off to a new chatbot. Not bad, but there's got to be something more, right? Well, that's what we're talking about today. We're going to go through and talk about and build a personal context portfolio.
In other words, a portable, machine-readable representation of who you are, so that in the future every AI agent, tool, or system you use knows about you coming in, and you are no longer dealing with memory- and context-based product lock-in. So the problem, as we've discussed, is that every time you set up some new agent or some new Claude project or onboard some new tool, and presumably if you're listening to this show that happens more than infrequently, you have to re-explain yourself from the ground up: your role, your projects, your preferences, your constraints, even how you like to talk to the machine. And when that was a very occasional switch, maybe that was in the realm of annoyance. By the time you're dealing with three agents or five or ten agents, though, it's completely untenable. And as you get into the world that we're going into, where every week there are going to be new types of agents and agentic surfaces that you're interacting with, it is going to become absolutely critical to have a way to get out of paying this context repetition tax. Now, importantly, the context repetition tax doesn't just waste time, it also degrades quality. And I guarantee you that even if you have been willing to provide your context to a new agent you were working with, the sheer time and effort it takes to explain everything fully means that there was probably a lot that was left out. The solution that I'm proposing is a personal context portfolio, a structured set of markdown files that together represent you as a context package. Effectively, it's an operating manual for any AI that works with you, one that knows about your roles, your projects, your team, your tools, your communication style, your goals, your constraints, your expertise. Effectively, it's API documentation, but for you: a single source of machine-readable truth about who you are that any agentic system can read. Now, a couple of design principles for this.
One is, obviously, this is going to be markdown-first. You might have just listened to yesterday's agent skills masterclass. And even if you haven't, you're probably familiar with this new primitive that is skills. Skills are effectively a folder of information that updates the knowledge base and context for any given agent, all rendered in markdown files. Every AI system on earth can read markdown. It is the universal interchange format for context. And so the personal context portfolio is going to be markdown-first. Second, we're going for modular, not monolithic. This is not going to be one giant about-me file. We have separate files and separate templates for separate parts of the whole that is you. This means that you can give different agents different pieces of what they need. It allows agents to grab what's relevant and ignore what's not. It also means, and this gets to principle three, that this is living and not static. This is not a thing you write once, but a thing you maintain, or better, that your agents help you maintain. As projects change and priorities shift, the personal context portfolio should evolve with you. And again, because it is modular, it's not just that you'll change what's in this initial file set; you'll probably find reasons to expand the files that are actually in the portfolio. Now, obviously, the last piece, which is sort of implicit in the markdown-first principle as well, is that this is meant to be portable across everything: working with Claude, ChatGPT, OpenClaw, Gemini, and whatever else comes next. By being markdown-first, it is just files, and you can bring them anywhere. So what are the files? I want to stress that this is not necessarily, for everyone, going to be comprehensive or even the right breakdown. But I wanted to have a clean starting point that would be significantly better, like 10x better, than nothing. And so the portfolio template that we've put together is divided into 10 different dimensions.
The identity.md file is first. It's your name, your role, your organization, what you do, in a single paragraph. This is you distilled down into a page. If the agent can only read one file, you want it to be this one. Next up is roles and responsibilities.md. This isn't your job description; this is your actual lived experience. It explains what your job or your activities actually involve day to day. It can be anything from what decisions you make, to what you produce, to who you serve, to what your week looks like. Current projects.md goes a level down. These are the active work streams; this file contains status, priority, key collaborators, goals, KPIs, and what done looks like for each. My guess is that this will be the file that changes most often, because presumably from week to week, what is a current project versus a past project versus an icebox project is going to change. Team and relationships.md is the key people you work with, their roles, how you interact with them, what they need from you, and what you need from them. When you've got agents prepping meeting notes or agendas or one-on-ones, this is going to be one of the key files that they need. Tools and systems.md is what you use, how it's configured, and what's connected to what. Rather than agents running off and using whatever tools they think would be useful, this gives them a picture of your stack so they can make sure that what they're doing actually comports with the systems you already have. Communication style.md. Maybe this one seems less important to you, but goodness gracious, for me at least, every time I interact with agents I'm surprised at how much this one matters. That could be because I am completely allergic to any hint of sycophancy or fluff or coddling or wavering. Effectively, there are a lot of things about the way that models on average communicate that I very much dislike.
And so communication style.md can include everything from how you write, how you want things written for you, your tone preferences, and your formatting preferences, to what you dislike. This is a file that is both internal-facing and external-facing. It impacts how the agent communicates with you, but it also helps make every output of the agent feel like yours. Goals and priorities.md is a level up from current projects. This is about what you're optimizing for right now, whether the right frame of reference is this week, this month, this quarter, this year, or your career overall. It gives your agents the ability to weigh decisions and recommendations appropriately, viewing the work as a continuous whole rather than siloed in the context of any individual project. Preferences and constraints.md is the always-do-this, never-do-that file. And this could be a very diverse set of different things for different people. If you're using agents to help plan your travel, maybe this is dietary restrictions or time zone constraints. Maybe it's about tools you refuse to use, or strong opinions you have about formatting. Basically, this is all the stuff that, out of the box, an agent is going to get wrong most of the time unless you tell it how to get it right. Domain knowledge.md is your expertise areas, your industry context, key terminology. These are the things that you know that a general-purpose AI doesn't. If you work in biotech, this is where the agent learns that you know what a phase two trial is and doesn't need to explain it. Now, this is another one that I think could be very expansionary over time. At the beginning, it might be just a log of what you know, but over time it might actually impart some of that, so your agents in the future know it too. Finally is decisionlog.md, the history of past decisions and the reasoning behind them.
I actually think that this could end up being the most underrated file, because when an agent is helping you think through a new decision, knowing how you've decided things before is enormously valuable. All right, folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is we bought some tools, you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise, how work gets done, how teams collaborate, how decisions move, not as a tech initiative, but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, check out KPMG. Blitzy is driving over 5x engineering velocity for large-scale enterprises. A publicly traded insurance provider leveraged Blitzy to build a bespoke payments processing application, an estimated 13-month project; with Blitzy, the application was completed and live in production in six weeks. A publicly traded vertical SaaS provider used Blitzy to extract services from a 500,000-line monolith without disrupting production, 21 times faster than their pre-Blitzy estimates. These aren't experiments. This is how the world's most innovative enterprises are shipping software in 2026. You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts. To learn more about how Blitzy can impact your SDLC, book a meeting with an AI solutions consultant at Blitzy.com. That's blitzy.com. Most companies don't struggle with ideas. They struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap.
They design and deliver intelligent cloud-native systems powered by generative and agentic AI with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods: engineers, strategists, designers, and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment. Start the conversation at robotsandpencils.com slash ai daily brief. That's robotsandpencils.com slash ai daily brief. Robots and Pencils: impact at velocity. It is a truth universally acknowledged that if your enterprise AI strategy is trying to buy the right tools, you don't have an enterprise AI strategy. Turns out that AI adoption is complex. It involves not only use cases, but systems integration, data foundations, outcome tracking, people and skills, and governance. My company, Superintelligent, provides voice-agent-driven assessments that map your organizational maturity against industry benchmarks across all of these dimensions. If you want to find out more about how that works, go to bsuper.ai, and when you fill out the get started form, mention maturity maps. Again, that's bsuper.ai. So that's the 10 files that make up the template of the personal context portfolio. But how are you going to fill this out? You, my friend, live in AI world, so you are certainly not going to write this by hand. My goodness. Instead, you are going to have the AI interview you to get it done. For each file, you're likely going to follow a pretty similar loop.
First of all, if you're using something like Claude or ChatGPT, you'll probably want to create a project to house this all, so that the context of the process itself gets shared across the different instances of these types of interviews. And effectively, you're going to go through a process of interview, to draft, to reaction, to revision, and so on and so forth in a recurring loop, until you feel like you've gotten enough information to be going with. Now, because we live in build world, I didn't just want to describe this all to you guys, I wanted to actually provide some resources. So here are a couple. First of all, I've put up the personal context portfolio as a public repo on GitHub. This is going to have templates for all of those files that I just mentioned. And the templates include not only the ultimate output structure that you're going to want, but an interview protocol that you can hand your AI build partner. Each of the 10 files has that interview protocol as well as the output structure. There's also an overall interview protocol that you can use as you're setting up your project. If you want to get a sense of how this might look in practice, there are three synthetic demonstration examples: one for an entrepreneur, one for an executive, and one for a knowledge worker. There's also a folder called wiring, which gives some resources for turning this into a Claude project or an MCP or an API layer. And we'll come back to that in just a minute. So hopefully this makes it fairly easy to get up and going. And like I said, all this is available on play.ai-dailybrief.ai. And I might even put this one actually on the main section of the website. But come on, man, we live in agent build world, we can go a step farther than this, right? Of course we can. So for those of you who don't want to bother with all these messy templates and interview protocols and all of that, you are just going to use the personal context portfolio app that we built.
This is exactly what it sounds like. It's got two sections: an interview, which is powered by Opus 4.6, and the portfolio that it's building persistently in the background. The interview is designed to never be fully done. It works through questions based on the overall goals, trying to fill out all 10 of those portfolio files, but it will engage with you for as long as you want. If you want to come back, it will continue to talk with you, adding more information in. Now, the cool thing about this is that rather than having to break this up into 10 different interviews, like you might have to if you were using a Claude project, when you answer one question, if it's relevant for different portfolio files, that's all going to be added at once. You can see, for example, here, when I explained what Superintelligent did helping enterprises with AI strategy, it added notes to the identity file, the current projects file, and the domain knowledge file. This speeds things up. Anytime you want, you can download your portfolio. And obviously, this is, of course, totally private to you, completely free, and hopefully a faster leg up to get started. Now, honestly, given that this is just one episode, I should not have spent as much time as I did trying to get the actual interaction right. But I've got to say, I think this one is pretty useful. So you should go check it out. The only reason, by the way, that I'm not giving you a dedicated URL right now, outside of the podcast website, is that I'm not sure what dedicated URL I'm actually going to use for this. Now, once you've got your portfolio downloaded, the last piece of the puzzle is how you make it highly transportable. Now, to be clear, you don't necessarily need to do this step. If you host, for example, your own personal context portfolio on GitHub, many agents are going to be able to interact with that and use it. Plus, if you have the folder of markdown files, you're going to be able to drop that in any chatbot.
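If you'd rather start the folder by hand before touching the template repo or the app, a few lines of Python can scaffold it. A caveat: the file names below are my own rendering of the ten dimensions described in this episode, not necessarily the exact names used in the actual template repository.

```python
from pathlib import Path

# Hypothetical file names for the ten portfolio dimensions discussed
# in the episode (illustrative; check the real template repo for the
# canonical names).
PORTFOLIO_FILES = [
    "identity.md",
    "roles-and-responsibilities.md",
    "current-projects.md",
    "team-and-relationships.md",
    "tools-and-systems.md",
    "communication-style.md",
    "goals-and-priorities.md",
    "preferences-and-constraints.md",
    "domain-knowledge.md",
    "decision-log.md",
]

def scaffold(root: str) -> Path:
    """Create the portfolio folder with one markdown stub per dimension."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for name in PORTFOLIO_FILES:
        f = base / name
        if not f.exists():
            # Turn "current-projects" into a "# Current Projects" heading
            title = name.removesuffix(".md").replace("-", " ").title()
            f.write_text(f"# {title}\n\n<!-- fill in via AI interview -->\n")
    return base

if __name__ == "__main__":
    folder = scaffold("personal-context-portfolio")
    print(sorted(p.name for p in folder.glob("*.md")))
```

Because each file is a plain markdown stub, you can hand the empty folder to your AI build partner and have it fill the stubs in through the interview loop described above.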
But for the sake of exploring more advanced modes, let's now put your personal context portfolio into an MCP server. Now, for this, we are going to lean heavily on what I think is the single most important advice that I give anyone about how to learn how to use AI, which is to lean on the AI as your tutor and build partner. I've been managing this whole endeavor as part of my AIDB training project on Claude, and I had gotten through the entire process, and I could tell, as I was transitioning from the part of the project where I was getting these templates up on GitHub to the part of the project where I wanted to put my personal context into an MCP server, that I was exhausting Claude's context window. For me, that usually manifests as it getting short and kind of lazy. And so I had to write a handoff that was specifically about this MCP goal, and we dove in. And pretty much all the time that I spent on this was going back and forth with Claude to help me figure things out. Now, the first job of this was Claude wrapping its head around exactly what I wanted out of the experience: whether it was read-only or read-write, what the auth model was, whether it was a combined resource or the individual files. From there, it produced this massively long document with all the steps, which, looking back now, were the steps that I would ultimately go through, as well as this particular bunch of code that I would use, alongside a README and a couple of other documents. Now, for the purposes of both the podcast and my own purposes, I said: this is 1000% too complex; walk me step by step through creating an MCP server, and I'll figure out how to explain it. And to put a fine point on this, AI is zero judgment. There is no risk of you looking or seeming dumb, because there's no one on the other end of the line to think that. When you're trying to get something explained step by step, even if it tries to race ahead, demand that it go back and do things more simply.
So in our case, from there, Claude got way basic. It reminded me first what an MCP server is mechanically: a program that responds to a specific protocol. An AI tool sends it a request saying, what do you have, and it responds with a list of resources. The tool says, give me this resource, and it responds with the content. So of course, in our case, the AI tool wants to know more about you or your projects or your team, and the MCP server has all of those resources at the ready. Now, step two: it divided the way that an MCP server can run into the two categories of local or remote. Is this all just for things going on on your machine, or do you want yourself or others to be able to access it from anywhere? Ultimately, I wanted to do both, so we dove in. Now, once again, it immediately tried to not go step by step but to give me a whole bunch of information at once, and I had to remind it to slow down. This is the process that I would recommend you follow: pull up Claude or ChatGPT or Gemini or whatever your LLM of choice is once you have this personal context folder, and have it walk you step by step through how to set it up, first locally, and then on the web. And one thing to keep in mind as you're doing that is that the vast majority of the time I spent on this was sharing screenshots of things that went wrong and asking it to help me figure them out. For example, this little message: one MCP server failed. A lot of the work is troubleshooting. Now, in that case, we figured out that port 3000 on my computer was already taken, so it was a relatively easy switch. But that's the type of thing you're going to experience as you go back and forth on this. Another small tip: one thing that I've noticed is that when Claude or ChatGPT are giving you some code that you need to run somewhere or copy-paste into Cursor or VS Code or something like that, once they've given you the initial block of code, they'll often say, now just change this one thing.
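To make that list-and-read exchange concrete, here is a deliberately simplified, standard-library-only sketch of the two interactions described above. To be clear, this is not the real MCP SDK or wire protocol; an actual server speaks JSON-RPC via an SDK. It just shows the shape of "what do you have" and "give me this resource" over a folder of portfolio markdown files, with an illustrative `portfolio://` URI scheme of my own invention.

```python
from pathlib import Path

class PortfolioResources:
    """Toy stand-in for an MCP server's resource handlers (not the real SDK)."""

    def __init__(self, folder: str):
        self.folder = Path(folder)

    def list_resources(self) -> list[dict]:
        # "What do you have?" -> one entry per markdown file in the portfolio
        return [
            {"uri": f"portfolio://{p.stem}",
             "name": p.stem,
             "mimeType": "text/markdown"}
            for p in sorted(self.folder.glob("*.md"))
        ]

    def read_resource(self, uri: str) -> str:
        # "Give me this resource" -> the file's full markdown content
        stem = uri.removeprefix("portfolio://")
        return (self.folder / f"{stem}.md").read_text()
```

A real implementation would register handlers like these with an MCP SDK and expose them over stdio for local use or HTTP for remote use, which is exactly the local-versus-remote split the host describes.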
I have found, personally, that a lot of the errors that I run into are accidents in the copy-pasting of that one changed thing. And so I will frequently say, even if it's repetitive: when you're asking me to change one line from this whole 77-line document, just give me the whole new 77-line document so I can copy-paste the entire thing at once. A couple of other errors we ran into; one of them was a file naming mismatch. And after that, pretty much everything was running. Finally, after about 10 or 15 minutes, we got to the point where I could say, what do you know about my identity, and it was able to pull up the identity file. Ultimately, this was actually a very small amount of work. Almost all of the time was in the troubleshooting. Now, to deploy it remotely, there were just a couple more steps. First, we had to create a GitHub repo. Next, we had to make sure all the portfolio files were copied into the project. We had to change a line or two in the server code. And then, step by step, it told me exactly what to do to get everything pushed up into GitHub. We were able to deploy it using Railway, which took basically no time at all. The jump from local MCP server to something that was available on the web actually took less time than the local version, just because we ran into fewer issues. My recommendation is that it's worth taking the time to work with an AI build partner like Claude or ChatGPT to try to go through this process, even if you think that, in this case, you're not sure how useful this particular MCP server will be. I do think that a lot of the value you're going to get out of the follow-along for this is going to be just in the creation of the files, which is of course why I spent most of my time building out the context portfolio interview agent. But it is a really great way, and a pretty simple and clear context, to learn how to use MCP. And so if you haven't yet, give it a try.
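As one concrete example of that kind of troubleshooting: the "port 3000 already taken" failure mentioned earlier has a simple standard-library fix. The snippet below is illustrative and not from the episode's actual server code; it tries to bind the preferred port and, if that fails, asks the OS for any free port instead.

```python
import socket

def pick_port(preferred: int = 3000) -> int:
    """Return `preferred` if it's free to bind locally, else any free port."""
    for candidate in (preferred, 0):  # 0 asks the OS for an arbitrary free port
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("127.0.0.1", candidate))
                return s.getsockname()[1]
        except OSError:
            continue  # preferred port is busy; fall through to the wildcard
    raise RuntimeError("no free port found")
```

Checking the port before starting the server, or just printing the chosen port on startup, turns a cryptic "one MCP server failed" message into an easy fix.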
Overall, though, that is how we go from endlessly repeating ourselves, telling AI about ourselves and our projects and our teams to doing it once, allowing it to stay updated and giving every agent and AI that you interact with access to the same pool of information. Hopefully this was a useful one. Have fun this weekend trying it out for yourself. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.