Brad & Will Made a Tech Pod.

325: renderDEEZ128

104 min
Feb 8, 2026
Summary

Brad and Will discuss their home lab setups in detail, comparing Brad's custom Linux NAS running Debian with ZFS to Will's Synology NAS and separate low-power Beelink server. They explore the trade-offs between appliance-style operating systems like TrueNAS and bare Linux, covering containerization with Podman/Docker, backup strategies, and the services they run.

Insights
  • Moving from an appliance OS (TrueNAS) to bare Linux provides flexibility but requires significant operational overhead: manual management of scheduling, backups, and system updates
  • Containerization with Podman/Docker offers better security and resource isolation than traditional FreeBSD jails or LXC containers, especially when running unprivileged containers with systemd integration
  • Low-power x86 mini-PCs (e.g., the Jasper Lake N5105) now offer better value and capability than a Raspberry Pi 5 at a similar price point, with broader software compatibility
  • Network file system performance is critical for real-time applications; working directly off network shares causes unpredictable issues with Audacity, OBS, and other tools
  • ZFS snapshots and replication provide robust backup capabilities but require manual setup and ongoing maintenance compared to appliance solutions
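The snapshot-and-replicate pattern behind that last insight can be sketched in a few shell commands. The dataset name (`tank/media`) and hostnames are placeholders, not details from the episode, and the privileged `zfs`/`syncoid` invocations are shown as comments for illustration:

```shell
# Build a dated snapshot name of the form pool/dataset@manual-YYYY-MM-DD.
# 'tank/media' is a hypothetical dataset, not one mentioned in the episode.
snapname="tank/media@manual-$(date +%Y-%m-%d)"
echo "$snapname"

# On a real ZFS system (requires the zfs tools and appropriate privileges):
#   zfs snapshot "$snapname"
#   zfs list -t snapshot tank/media       # confirm the snapshot exists
#
# Off-site replication with Jim Salter's syncoid, pulling the dataset from
# the NAS onto a backup box (hostnames and pool names are placeholders):
#   syncoid root@nas:tank/media backuppool/media
```

Sanoid, Syncoid's companion tool, automates the snapshot-taking and pruning side of this on a schedule, which is the part an appliance OS would otherwise handle for you.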
Trends
  • Shift from BSD-based NAS appliances to Linux-based bare metal for power users seeking greater control and flexibility
  • Containerization becoming standard for service deployment, with Podman gaining adoption as a more secure Docker alternative
  • Low-power x86 processors (Jasper Lake, Alder Lake) replacing ARM-based solutions for always-on home servers due to better software support and cost efficiency
  • Growing complexity of reverse proxy configuration (Nginx, Traefik, Caddy) as web services require WebSocket and advanced routing support
  • Increasing emphasis on security isolation through unprivileged container execution and ACL-based file permissions in home lab setups
  • ZFS adoption in Linux environments for home labs, despite complexity, due to superior data integrity and snapshot capabilities
  • Distributed backup strategies using tools like Syncoid for off-site replication becoming more accessible to home users
  • systemd integration with container runtimes (Podman) simplifying service management across heterogeneous workloads
Topics
  • TrueNAS vs bare Linux NAS comparison
  • ZFS file system configuration and management
  • Podman containerization and rootless container execution
  • systemd service management and integration
  • Docker image deployment and configuration
  • Samba/SMB file sharing across Windows/Mac/Linux
  • NFS vs SMB for network file access
  • Backup strategies with ZFS snapshots and Syncoid
  • Plex and Jellyfin media server setup
  • Home Assistant automation
  • Game server hosting (Minecraft, Satisfactory, Valheim)
  • Reverse proxy configuration (Nginx, Traefik, Caddy)
  • Low-power CPU selection for always-on servers
  • Hardware transcoding with QuickSync
  • Secure Boot and UEFI configuration with ZFS
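Several of these topics intersect in Podman's systemd integration: since Podman 4.x, a rootless container can be declared as a "Quadlet" unit file that systemd generates into and manages as an ordinary service. A minimal sketch follows; the image, port, and paths are illustrative choices, not details from the episode:

```ini
# ~/.config/containers/systemd/jellyfin.container  (rootless user unit)
[Unit]
Description=Jellyfin media server (rootless Podman)

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=%h/jellyfin/config:/config:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the generated service starts with `systemctl --user start jellyfin.service`, and the container stops, starts, and logs like any other systemd unit.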
Companies
TrueNAS
Brad's previous NAS OS; discussed limitations that prompted migration to bare Linux
Synology
Will's current NAS solution; DS1520+ with Celeron J4125 CPU, using the Btrfs file system
Canonical
Ubuntu publisher; LXD container platform discussed as an alternative to Incus
Red Hat
Podman developer and major systemd contributor; Podman mentioned as a robust alternative to Docker
Oracle
Acquired Sun Microsystems; historical context for ZFS file system development
Plex
Media server software; discussed hardware transcoding requirements and database migration challenges
Jellyfin
Open-source media server alternative; recent major update with database redesign mentioned
Ubiquiti
Network equipment manufacturer; Will runs the UniFi console on his Synology
Amazon
Kindle e-reader manufacturer; discussed aggressive auto-update policies and DRM changes
Apple
AirPlay ecosystem provider; the OwnTone server discussed for AirPlay speaker integration
San Francisco Public Library
Library system with robust e-cookbook collection accessible via Libby app
San Mateo County Libraries
Library system with extensive e-cookbook collection accessible via Libby
Debian
Linux distribution; Brad chose Debian stable for server OS due to stability and testing
Intel
CPU manufacturer; Alder Lake i5-12600K and N5105 Jasper Lake processors discussed
Asus
Motherboard manufacturer; W680 workstation board with ECC support mentioned
People
Jeremy Allison
Samba co-creator and maintainer; previously interviewed about ACLs and file-sharing protocols
Jim Salter
Sanoid/Syncoid creator; ZFS snapshot automation tool developer and Ars Technica contributor
Phil Plate
Cloud photography enthusiast; mentioned as regular poster of cumulonimbus cloud photos
Quotes
"I just kind of got tired of chafing up against the limitations that an appliance type operating system puts on you"
Brad~30:00
"The nice thing about TrueNAS is if you had a bad update you literally on the boot menu could just pull down the last like five states of the machine"
Will~45:00
"I never think about any of this stuff once it's set up. I just use the server as is."
Brad~1:15:00
"You put the work in to set it up, and then when you're done, you're like, alright, that's done. I'm going to go do something else."
Will~2:45:00
"Running services as root is dodgy from a security perspective"
Brad~1:50:00
Full Transcript
Okay. I just did some edge-of-the-seat beep work there, Brad. Yeah? What does that mean? Well, I started the recording at like six seconds to go. I usually try to start it at about two seconds to go so that you don't have to, like, you know, so that I'm the short one instead of the long one. Same. And I realized I had to stop it, delete the file that was there, the line that was there, and then restart it with like two seconds to go. And I got it with maybe half a second on the first beep. The hidden beep etiquette that emerges when you podcast remotely all the time. We're talking about the little, I think we refer to it internally as Beep30. Yeah. Which is the tool that was built for us by Thristheart. Is that how you say his username? I think that's right, Thristheart, yeah. On the Discord, which is the thing we use to sync our recordings together. Anyway, yes, I also, I always hit the record button like right before the beep starts so that it's right at the beginning. And especially if I'm not editing, even when I am editing the podcast, it's useful.
But especially when I'm not, right there at the front, you don't have to look for it. No, no, you don't want to have to scroll a minute in and then find it. That's no good. But that said, I know exactly what the waveform of those beeps looks like. It's like burned into my memory at this point. Oh yeah, it's the most important waveform I ever see. I was going to ask you, I don't actually remember what I was going to ask you now, because we started talking about the beeps and I was like, there's a mosquito in my office that I've got to deal with. The prospective cold open topics, discussed right before we started this: there's a mosquito in your office, and you're down to two micro USB devices. Oh right, the micro USB devices. So I have a perfectly good USB, like, battery pack. Oh no, I see you holding it up right now. I see where this is going. Yeah, I already am experiencing the pain of whether this is worth it or not. Yeah, it's great. It will charge, like, the entirety of anybody's phone in the house, almost. I think Gina's big iPhone Max has a bigger battery than this, but like my daughter's mini and my older iPhone Max both will fully charge off of this thing. It's a micro USB charger, and there's literally, on my charge station over here to the right, there's one yellow cord remaining that's for, like, the three micro USB charging devices I have. One thing that's, let's say, legal drug paraphernalia that's micro USB. It's like a flower vape. It's nothing weird. But yeah, and then I have that, which I don't really use all that often, so it's not really an issue. And then I have this battery backup, and I have one other battery backup, but the other battery backup has both USB-C and a micro charger, so it's fine. It gets to live. It can charge off either port. It has two ports for charging and two ports for draining. Oh, I thought typically the batteries only had one port that would charge. That's cool. Yeah. So
it's from a weird time. It's from right as the changeover was starting, but before iOS devices had switched over. So what's the quandary? Is it whether to keep the micro-only battery? I just saw the mosquito fly right across your face. It's making me crazy. You're fucked. Yeah, you're boned. I'm sorry to tell you. Everything's collapsing here, man. It's bad. Is the question whether to get rid of the micro-only battery? Yeah, the question is, I generally, on stuff like batteries, I try to keep them until either they get puffy and stop working or they just stop working, right? Because there's no downside to having a battery that's at, like, 80% capacity or whatever. Well, I would say the same thing is true of a battery that is limited to micro USB. Let's say batteries are too useful as a backup. I mean, I don't know about you. We don't have that many power outages around here, although certainly more than we used to. But it's just too useful to have extra batteries around.
And even if you never use that one hands-on, even if that one is just a charge-it-once-every-six-months-and-leave-it-in-the-drawer-for-emergencies battery, I still feel like that's not a good reason to get rid of something that's otherwise quite useful. Well, so the problem with this one is, because it's the smallest, kind of crappiest one that's in service still, it's the one that, when the kiddo, when I'm like, hey, did you charge your phone last night? She's like, oh yeah, I got it, I'm good, I'm at 18 percent. We're getting ready to go out of the house for three hours. I'm like, no, you need to take a battery backup and plug it into your phone in your bag. And she's like, oh, I don't want to do that. Has she ever actually taken a battery with her? Oh yeah, yeah. Like when she goes to class or something and she has like 10 percent battery, it's always, can I borrow a battery? And I don't want to give her the hundred-dollar big chunky one that'll do laptops and is good for like four or five phone charges, A, because it's heavy and big, and B, because it was kind of expensive, and that's, like, our disaster-preparedness one, right? That's the one that I can take out if we have a bad problem, the power's out for a week again like it was in 2020 when the fires happened. I can take that one out, charge it off the car battery, and then bring it back in to top up everybody's phones and stuff. So I feel bad getting rid of it. Yeah, I feel like it's wasteful. Sure. Well, you could give it away to somebody. I'm not going to give somebody a micro USB battery. Somebody might take it. I mean, all I mean is, if you don't want to waste it, if you don't want to literally junk it, you can find somebody who would use it, if you want to feel like it's still being put to use. Maybe I should use it for a project. I saw one of those e-ink kind of ambient display screens the other day, and one of the
options for that was to hang a USB battery on it, and I bet that this battery would run that thing for, like, four months. Oh yeah, this little power, as little as it draws. My Kindle, I read my Kindle every night. I usually only make it 10 or 15 minutes before I fall asleep, but yeah, I haven't charged my Kindle in like six months or something, and I just noticed it was down to like 40% the other day. But still, that's like six months of Kindle usage. So yes, and that's with a backlight. So I think a non-backlight e-ink screen would probably go forever. Do you put your Kindle on airplane mode? Sorry, yes, I should point out, it's always in airplane mode unless I need to sync to it, because the battery lasts so much longer. Gotcha. And also, this is a topic for another time, but Amazon has gotten incredibly aggressive about pushing auto-updates to that thing. I almost always just read on my phone or my iPad now. Really? But I also, I tend to, I find myself in places where I have like 10 minutes where I'm waiting. The classic example is, oh, I have to have the oil changed, I'm sitting there waiting for the oil change to be done. But that's not really a problem with the electric car anymore. But yeah, like if I'm in a doctor's office or something, I always sync, because I always want to have my latest location so I can just pick it up on the phone. Yeah, I get it. The magic for me is also, I'm doing almost exclusively Libby these days. I don't think I've bought a book from Amazon in a year. Actually, I know when the last time I bought a book from Amazon was: it was when they did the changeover on DRM stuff, the DRM changeover. Because, same, I have not bought one since. I successfully ran whatever that little JavaScript applet was, pulled all my stuff off, which I have archived now. And I probably will never buy another book from them because of that.
Also, shout out to Libby for cookbooks. I didn't realize this, but San Francisco Public Library and San Mateo County Libraries both have really, really robust e-cookbook collections. So when I was looking at the Tartine Bakery book, the fourth edition of it, because I have the second edition and the recipe I wanted isn't in that one, it's only in the fourth edition, it's like a $50 cookbook, and I was like, man, I don't want to have another $50 cookbook on my shelves. And then I was like, what about Libby? I searched Libby. It's there. Nobody had it checked out. They had like eight copies of it, and I got it. I copied and pasted the recipe out and put it in my Paprika, and then I just returned the book immediately. It was great. Wait, hang on. Is that a Libby interface to a local, like, physical library system? Or is that just some cloud Libby library that does not actually physically exist? So you can do both, right? There's virtual libraries that you can connect to with Libby. You can also connect to your local system. When I check out an e-book from the San Francisco Public Library, I add it to my Kindle with Libby. Have you never done this? I assumed, as a library aficionado, you would have been on this train a long time ago. I've mostly been reading the same three. Like I said, I use books largely to fall asleep at night. I wish I had time to read more, but I kind of have been reading the same three books off and on. They're like short story collections and one very long history book.
I mean, like, a 700-page history book, for a very long time. So I just have not obtained any new books. You're still looking around at that mosquito. Fucking mosquito. Anyway, I have not really used the SF library since they started transitioning off of, they used to be on, like, OverDrive. Oh, and it's dope. Yeah, I should try it. I should get back into that. It took like five minutes to connect my card to the system. Just to close the loop on this whole cold open, because we've got to start the show, and this has been like five cold opens in one. The real reason I fell off of messing with the Kindle is the last time I tried to jailbreak it. Oh. And to go back to the auto-updates thing, maybe we should do an update on a future episode about this. I washed out of the jailbreaking-my-Kindle process because it auto-updated in the middle of it. Oh my God. To a version that could no longer be jailbroken. Oh, that stinks. And that kind of soured me on the whole experience so much that I'm not using the Kindle as much for the last little bit. So, you're saying you tried to jailbreak your Kindle, but you ended up putting your Kindle in jail. Yeah, it's more like Kindle jail broke me. I don't know, what's podcast. Welcome to Brad and Will Made a Tech Pod. I'm Will. I'm Brad. Hi, Brad. Hello. Hi. This week, we're doing something a little bit different. We haven't done this, we haven't talked about this in kind of a long time. We also kind of talk about it a little bit all the time. Yes, you're not wrong. But honestly, the genesis of this episode is by user request. That's true. We're going to do kind of a, I guess this is our beginning-of-2026 home lab update. I have to confess, I don't love the term home lab. I don't hate it. It communicates the concept well. Actually, what I really don't like is having spent a lot of time on the TrueNAS forums in the past, where people post gigantic sigs of their three massive servers with 96 hard drives each. Look, sigs are always bad.
And the term just evokes, what's the concept I'm trying to say here? It's not a form-over-function thing, but it's kind of analogous to that. Like, what is the term for amassing way more capability than you actually need? Like, way overkill. Sure. I mean, sure. There's an arms race in the, like, if you go to r/homelab or one of those places, where somebody posts their 36-inch-tall rack, a 24U rack, and then somebody else is like, hey, check this out, my work was throwing this out, and I got a 48U, 12-foot-tall, it's-going-to-be-1,500-pounds-when-I-put-all-the-computers-in-it rack. My one-petabyte storage server. You know what it is? It's hot-rodding. That's all it is. That's all it is. It is just tricking something out way beyond any practicality or necessity. But I mean, I think that there's another. So there's multiple aspects to the whole home lab thing, right? Because the other thing is, people use them as learning places. And for example, back in the old days at Maximum PC, our business people, our ad sales people were always like, hey, could you do the thing that PC World does, and have like a big InfoWorld-type, can you do some server testing and stuff like that? And we're like, no, we don't have that. We're not set up to do that. It's too expensive. We don't have the capacity to build the 300 machines we need to test a web server effectively, right? Or whatever it was. And the home lab community has kind of used, I don't know, they're not special-purpose, but, like, used computers and stuff like that, and come up with some best practices that let you do that kind of stuff and build that kind of expertise.
And build, like, hey, I'm building a big giant web infrastructure or server infrastructure, so that you have a place to practice that and test that and fool around with that as a private person, which is cool. Totally. I mean, okay, A, to be clear, people can do whatever they want with their money and time. B, if they're actually, you know, making practical use of the thing they have built, even better. Yeah. But I just personally don't want to feel like I am building all this stuff out just to sit there and admire it, and I try to make sure I'm actually making use of what I'm doing. Like, for example, I kind of overbought hard drives. Yeah. Year before last, at Black Friday, I expanded my storage pool. Yeah. To the point that it's still sitting there half empty, because at some point I finally realized, you know, I may have bought too many hard drives at this point. But so to me, there's the people who are the hot-rodders, the people who just want to have the biggest, baddest machine setup. Right. And then there's the piracy aficionados. They're running a whole stack of arrs. So those people with petabyte storage volumes are actually using it for something. Yeah. Like, the ethics of that something aside. Yeah. It's like, Steve from North Carolina wants to roll his own Netflix. Got it. I understand that. It's not for me, but I get it. And then there's the people that want to learn. The people that are building stuff so they can get certifications or advance in their job or whatever, which is great. I guess that's kind of where I fall these days. I mean, I'm not trying to get a certification. But I was going through some networking stuff in the network channel the other day, and people started talking about having done that stuff for their CCNA. It was a bunch of IPv6 stuff that I was poking around at.
And it was just like, okay, I guess that's kind of what I use this stuff for: I just like learning. Well, and then there's also people who want to get off of the public, like, the corporate clouds, right? Who want to divest of Google. I think I fall in that category a little bit, or at least want to have an alternative in case you decide you want to pull the plug entirely. Yep. And so, okay, so those are the motivations, that's what people are doing. Yeah. So, real quick, this all started from James on the Discord asking me a question, which was, hey, can you run through your Linux NAS, like, your Linux server software stack? Because, I'm speaking as James here, this is James saying, I run TrueNAS, but I'm kind of over it. I'm kind of thinking about moving to something else, and I'm enticed by this idea of just running a bare operating system and doing everything yourself, but I'm also kind of afraid or intimidated by it. Are you going to get it? No. He's leaning back. You've got to stop asking me about it. It's an audio podcast. I know, but this is, like, I know, it's high drama. This is ActionCast. Yes. High drama. The child loves the lights. I'm going to go and tell you, it loves my lights. It's a flying bug. Yeah, of course it does. I need a Bug-A-Salt. It's hard not to interject when you assume the pose. Your hands came up in the clapping formation as if you were about to get it, and I just couldn't help myself. All right. Anyway, James was basically asking, hey, can you step through the software stack and how you've configured everything on your Linux box to be a NAS slash server without the aid of an appliance-style operating system? And that was kind of the genesis of this episode. But we're going to expand it out a little bit and just sort of do a quick home lab overview of everything. Yeah.
Well, I thought it was interesting to do a reset, because longtime listeners will know that, in a lot of ways, this podcast started with us talking about home labs. We started talking about TrueNAS, or FreeNAS back then. Sure. Yes, because this podcast started, I built my TrueNAS machine in April of 2018, and we started this in September of '19. Yeah. The origin for a lot of this was, you and I were spending a lot of time on Discord talking to each other about server stuff. And finally, I was like, hey, do you think CBS would let you do a podcast? You were like, I don't know, let me ask. And then you asked, and then it took them a year to decide. And then we did a podcast. Yeah. And you and Vinny were the people I knew running FreeNAS. That's the whole reason I got into that in the first place. I'd been running FreeNAS at that point for, like, five or seven years, I don't know, since I went from Windows Home Server to FreeNAS, as I recall. Which became TrueNAS, to be clear. Yeah. The classic path. But anyway, the thing that's happened, though, is our stuff has evolved. You know, when we started this, I was running FreeNAS on a Broadwell-E, my old gaming machine, which was a Broadwell-E eight-core machine that was using 500 watts of power at idle with no hard drives spun up, or something ridiculous. And our setups have evolved. Our hardware has evolved. The services we're running have evolved. And we haven't really talked about it as a whole in a long time. So rather than make people listen to 300 episodes and piece together the gradual evolution of what we're doing, we thought it was useful to run down the stacks and talk about it. Yes. Should we start with my server? Yeah, we can start with your server, since that was the question. Do you want to go server by server, one thing at a time, or ping-pong back and forth?
I don't actually have a ton of other stuff, so I'm mostly just going to focus on my server. I will just briefly run down the other boxes doing stuff in my house at the end of this, but I've talked about those enough recently that I feel like just touching on that should be fine. But there's a lot of meat to administrating this big Linux box. Yeah, the interesting thing is we've both taken different approaches, because you've gone with one monolithic server; I have a bunch of stuff spread out over a bunch of low-power devices. Right, and you're a bit more appliance. Well, I think your VM host is just a bare operating system. We'll get to that. Yeah. Okay. All right. My current server is an Alder Lake Core i5-12600K. I'll just briefly run down the specs here. I got a very good deal on that i5. Like, very good. Otherwise, I probably would have gone for an i3, something lower power. Well. Go ahead. I was just saying, having gone with the really low-power solution, it's nice to have extra compute. I occasionally compile software on there and stuff like that, and it's got QuickSync. Anyway. Okay. So it's an i5-12600K. There's 64 gigabytes of ECC DDR5 in there. ECC is error-correcting RAM. I had to get an Asus workstation board with a W680 chipset. I bet that was expensive. Actually, it was about 330, which is very cheap in that world. I wouldn't have gone for the ECC until Asus rolled out that lower-end workstation board; that's the reason I went for it. And the RAM was barely more expensive than regular RAM at the time. Also, this was three years ago, when RAM was a commodity, like, dirt cheap. Yeah, yeah. Those were good days. Yeah, a ton of hard drives in there. I've talked about it before. There's eight hard drives, mirrored, in the main pool, so there's only four drives' worth of space, which comes out to 65 terabytes. Good God, man. Yes, yes. I have been amassing Black Friday hard drive deals for a couple of years.
And like I said, it's too much now, because I don't actually need that much, it turns out. But anyway, there's also a couple NVMe drives in there that I use as scratch space. What do you mean, scratch space? Like, kind of anything and everything. Like, when I record videos in OBS of streams, or when I record the RambleCast, which I do a video of every week, it records to that rather than the spinning drives, because no access time, and I don't have to worry about anything seizing up in the IO chain. And you're directly connected from your desktop PC to the NAS with a high-speed connection. I'm over this 40-gigabit connection to the NAS, so it's pretty robust for sending large amounts of data. But frankly, even that amount of data could go over a one-gigabit link, pretty much. Well, you say that, but I find that OBS is really sensitive to network latency, so that, for example, when I tried to run all of my OBS setup stuff off of the NAS, booting up OBS sometimes would just be weird, because the drives would be spinning up, or it would take a second to find a file or something, and it didn't like that, in ways that caused unpredictable bugs. Yeah, I've never had an issue with it, but I also use that scratch space. So those are two-terabyte NVMe drives that currently are mirrored, because I basically decided that everything in this machine should be mirrored, since I use it for work and other kind of critical stuff a lot, and I just don't ever want a drive failure to take anything down. Because while there's not a lot on that mirrored NVMe volume that is super-duper important, all those recordings are pretty important. But I have considered striping that volume, which would make it four terabytes, because striping is no redundancy, versus, anyway, two terabytes, still plenty of scratch space.
What if you added a third NVMe and then did parity? You could. I could do that. Yeah, I could do that. I'll think about it. But I'll also use it for things like, if I download, like, I downloaded a ton of PlayStation ISOs for Mr. Stream a couple of years ago, like, a ton of them. And it's nice to have an extra high-speed drive in there, because, for example, if I downloaded, let's say, a couple terabytes of disc ISOs, reading those and writing them, because they're all compressed, they're all, like, 7z or whatever, reading and writing those to decompress them on the same drives is pretty slow. Being able to have an intermediary drive in between, going one direction, yeah, it's way, way faster to have that extra drive space in there. So anyway, that kind of stuff I all do on my desktop because of that; it's too slow going across the gigabit network. Lastly, two cheapo $20 SATA SSDs in there that are mirrored as the operating system volume. Okay. So that's your root, and then your hard drives. Is the big stack of hard drives in a ZFS pool? Yes. Yeah, it's all one big ZFS pool. Wow. Okay. Okay. So why did I stop using TrueNAS? Yeah, hey Brad, why did you stop using TrueNAS? Well, I'm glad you asked, because this is kind of what James was getting at. The simplest way I can explain it is that I just kind of got tired of chafing up against the limitations that an appliance-type operating system puts on you. And, to be clear, those limitations are there for a reason. In fact, they are the whole reason that an appliance-style thing works in the first place. The strength of the appliance-style OS. Yes. So when you say appliance, you mean stuff like Proxmox or Unraid or TrueNAS; something that you install and then manage, probably through a web interface or an application, rather than, like, a Linux console. Right. And crucially, it is a layer.
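For a rough sense of the trade-off being weighed here, with two 2 TB NVMe drives (and a hypothetical third for the parity option, raidz1), the usable space works out as follows. Figures are raw drive counts, not exact ZFS-reported sizes, and the device names in the commented `zpool` commands are placeholders:

```shell
# Usable capacity, in TB, for the layouts discussed.
drive_tb=2
mirror=$((drive_tb * 1))        # 2 drives mirrored -> 1 drive's worth (2 TB)
stripe=$((drive_tb * 2))        # 2 drives striped  -> 2 drives, no redundancy
raidz1=$((drive_tb * (3 - 1)))  # 3 drives raidz1   -> (n-1) drives' worth
echo "$mirror $stripe $raidz1"  # prints: 2 4 4

# Corresponding pool layouts (illustrative device names):
#   zpool create scratch mirror /dev/nvme0n1 /dev/nvme1n1
#   zpool create scratch /dev/nvme0n1 /dev/nvme1n1                # stripe
#   zpool create scratch raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```

So the third drive with raidz1 buys the same capacity as the stripe while surviving a single-drive failure, at the cost of another NVMe slot and parity-write overhead.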
Those things are just layers on top of the same technologies we're about to talk about, the same software stack. It's just that, like you said, they give you a nicer interface. They unify everything a little more. And crucially, like I said, they put up a bunch of guardrails that you're not supposed to go outside of, because they need a known state for the system in order for everything to continue working together, and so you can do things like update and stuff like that. That's exactly it. So for example, with TrueNAS, you're not supposed to go tinkering around with the underlying operating system, because even in the best case, a TrueNAS system update will just wipe out all the changes you made. In the worst case, you will break something irreparably and have to start over. Yeah. This was, I think, one of the things I actually really liked about TrueNAS. Yes, it is 100% just a philosophical approach. Do you want to be a mechanic, on your back under your car, tinkering with the carburetor and changing the oil yourself? Or do you want your car to just go? That is, I guess, the best metaphor. So I dropped TrueNAS, while we're talking about it, around the same time, but for completely different reasons. My hardware started getting janky, and my Broadwell machine started getting wobbly, and when I was looking at building a new machine for that, I was like, oh, let me look at the power requirements for a desktop PC and what I'm paying for electricity now. And I was like, oh, I don't want to do that anymore. What's the less power-consuming option here? And I went out and bought a Synology NAS, honestly, because I was like, oh, this has a decent four-core Celeron or something in it. It has QuickSync. It has all the things that I thought should let it run most of the applications that I was running on that Broadwell machine without a whole lot of...
Maybe things like compiling updates would be a little bit slower, but the day-to-day operations would be about the same. Now, in practice: not true. The Celeron in the NAS that I bought was a little too underpowered for that. But in terms of power consumption for serving files, it's way, way, way less money every month. Sure. To the tune of like 20 bucks. Yeah. I would guess the Synology is not using much more power than it takes to spin the hard drives. Obviously there is overhead to run the electronics and the CPU and stuff in there, but contrast that with my 12600K, which is probably using a hundred watts at idle, I guess. And I'm thinking even without the drives. Without the drives, yeah. It's not insubstantial. There's a reason I talk about wishing I had solar a lot. So the Celeron in this one is a J4125, which is a mobile Celeron, Gemini Lake Refresh, which, I think that's after Alder but I can't remember, has a 10-watt TDP. Wow. Yeah. Versus mine, I don't know what it is at idle, but I think that 12600K is like a 150-watt TDP. It's not running at that all the time, of course, but yeah. Okay, should we get into the software stack? This is the thing James was actually asking about. Okay, so I basically just got tired of not being able to do whatever I wanted on a machine that I owned. And again, if you just want a thing to work, the appliance stuff makes perfect sense. But I was constantly going, man, I wish I could install this and use it, but it's not part of what they have integrated into this thing. Or, oh, I wish this scheduling thing worked better, et cetera, et cetera. Or there's a new piece of software that I want to try. You just couldn't do that. Yes.
You know what it really was? Their scheduling for backups and scrubs, which, scrubbing in ZFS terms is checking the data integrity on the drives. Their scheduling was not as robust as I wanted. For me, the big challenge was the BSD thing, because at the time I switched over, at least, I was running TrueNAS, the BSD version. Yeah. Which they are, like, killing. This is, we don't need to get into this tangent, but they are basically killing off the FreeBSD product at this point. I mean, it makes sense. Yeah. Yeah. Because the problem with it was, stuff would mostly work, except for when it didn't, right? You could probably cross-compile a piece of Linux software for it, but it would be janky or weird, or you'd have to jump through a bunch of hoops. Or you could run a Linux VM, which I did for a little while, but that's extra overhead. And that's extra overhead, yeah. Which was not great either. Although, to be clear, the scheduling thing I was talking about is TrueNAS's UI middleware. It's just the one. Yeah. I was just like, you know, I wanted to only scrub on the first Sunday of every second month or something like that, and they wouldn't let you do stuff like that. And then I started thinking, you know, that wouldn't be that hard to just script yourself, if I have the liberty to do that. I've just used the word liberty. This is getting too... It's all about freedom, Brad. This is getting too ideological, I'm afraid. But anyway, okay. Okay. So I decided to go bare operating system. I actually was going to move to FreeBSD first, because that's what I was used to, but it had basically zero support for Alder Lake at the time, three years ago. Of course. And looking it up recently, they are only now starting to get full heterogeneous core scheduling support integrated.
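For what it's worth, the "first Sunday of every second month" schedule that TrueNAS couldn't express is easy once you're scripting it yourself. A hedged sketch using a systemd timer; the unit names and the pool name `tank` are assumptions for illustration, not the actual setup from the show:

```ini
# /etc/systemd/system/zpool-scrub.service -- sketch; pool name "tank" assumed
[Unit]
Description=Scrub the main ZFS pool

[Service]
Type=oneshot
ExecStart=/usr/sbin/zpool scrub tank

# /etc/systemd/system/zpool-scrub.timer
[Unit]
Description=Scrub on the first Sunday of every other month

[Timer]
# Sundays that fall on days 1-7 of every second month = the first Sunday
OnCalendar=Sun *-01/2-01..07 02:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now zpool-scrub.timer`; `systemd-analyze calendar 'Sun *-01/2-01..07 02:00'` will print the upcoming trigger times so you can sanity-check the expression.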
Like, FreeBSD is cool, but it just does not have the development support to even come close to catching up with Linux in terms of hardware. Yeah. So that's what drove me to Linux. It was like, all right, it's time. This is what the world runs. This is what all the software is made for. It's time. I mean, look, there's a real argument in this space for doing the thing that's like, if you want to learn a bunch of stuff, then you should do the thing you want to learn, right? Yeah. But if you want to have something that's a usable, functional computer, there's a real argument for doing the thing that everybody does. Yep. Yep. You're not wrong. You're not wrong. Okay. To defaults. Yeah. What are you doing for the file system? Well, so it's still ZFS. So I'm running Debian stable. Okay. Debian 13, trixie, is the one that just came out last fall, so I'm on that now. I picked Debian because it's stable. It says it's stable right there in the name. But it is known to be the most boring, slow, doesn't-update-often, everything-is-supposed-to-work-together-properly, well-tested distro out there, pretty much. I think in this case that's a strength, not a weakness. I am very much in the camp of wanting a server to be boring and stable and reliable. That said, this is my third Debian, I guess, since I made this move, and with every version of Debian there's at least one major package that they don't have packaged, or that is way out of date, that I really wish I had access to and don't. So there's trade-offs with everything. Of course, I could move to a more aggressive distro; I just have not. Yeah. So I am still using ZFS, because that's what I was used to from TrueNAS. ZFS is the big enterprise, you know, checksumming file system that prioritizes data integrity and stuff like that. We don't need to get too deep into that for the moment. But you can install ZFS straight from the Debian repos, that's not a problem. You do
have to agree that, hey, this license is not GPL, are you okay with that? But that's fine. Why did you, did you say that you installed ZFS on the root? So, yes. Somewhat controversially, in the Linux channel on our Discord, I am even running ZFS as the file system for the root volume. I don't know about that, Brad. That requires a lot of jumping through hoops. Probably. Yeah, that seems like a pain in the butt. That's probably outside the scope of this episode, frankly. Yeah, I mean, I get, so the benefit of ZFS is, it's similar, it's a journaling file system, you can roll stuff back. Yeah. Or, copy-on-write is actually the term for what it does. So actually, maybe to put it in terms that, say, Dual Boot Diaries listeners might understand: Btrfs was kind of made in the spirit of ZFS. ZFS came out of Sun like 20 years ago, before Sun got gobbled up by Oracle. Btrfs is very much doing the same type of stuff that ZFS started, in terms of snapshots and volumes and stuff like that. Logical volume management, the whole thing. bcachefs, which we've talked about, is also very much trying to implement a similar feature set to ZFS.
It kind of abstracts out a lot of file system stuff so that you can do things like roll back to previous versions of files, or roll back to a previous state of the entire file system, stuff like that. For people who don't know about this, it's also, and again, a little out of scope here, an entirely vertically integrated file system and volume management tool. I don't know what that means. Well, as opposed to traditional file systems, where you need a RAID management layer underneath the file system layer. Oh, right. These copy-on-write file systems do everything. They handle all the partitioning, all of the physical volume management, the file system stuff on top of that, and the distribution of data across the physical disks. Right. Yes. So you can do things like just have a bunch of different-size disks jammed into your ZFS pool and they work. Theoretically, yeah. Yes, there's management there. I mean, this is one of the things James was asking about, because this is one of the things that TrueNAS both abstracts for you and also makes relatively friendly with the web UI. In fairness, Unraid also does this. Unraid got ZFS fairly recently, in the last couple years, I think. Proxmox, I believe, does have ZFS support of some kind. I'm not an expert on that stuff. But yes, to address the point: I have had to learn a lot about the ZFS command-line toolchain. I mean a lot. Perhaps more than, well, I shouldn't say more than I ever wanted to know, because I love knowing this stuff. It's kind of the whole reason I'm here. Yeah. But yes, you will spend a lot of time understanding how this stuff works. I mean, it's cool if you enjoy this stuff, because I think the tooling around ZFS that's provided at the command line is incredibly elegant and does some really cool stuff, but it is a lot of learning.
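To make the "vertically integrated" point concrete, here is roughly what that looks like at the command line; the pool name, device names, and raidz1 layout are illustrative assumptions, not the actual setup from the show:

```
# One command handles RAID layout, volume management, and the file system
# (no separate mdadm / LVM / mkfs steps)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Datasets are created inside the pool and share its free space
zfs create tank/media
zfs set compression=lz4 tank/media

# Health and capacity for the whole stack come from the same toolchain
zpool status tank
zfs list -r tank
```

The design choice being described is that ZFS refuses to layer on top of md-raid or LVM precisely so it can checksum and repair data end to end, which is what makes mixed setups like different-size disks "work, theoretically."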
See, I looked at the work you were doing on this and I was like, I'm going to go buy a thing that I can configure with a web interface. Yeah, totally valid. I mean, you have to be down for this as a hobby if you're going to go this route. Running that root volume on ZFS required me to basically do a manual install of Debian. It's very similar to the Arch install, if you've ever done that. It's all command-line based. Like the old-school Debian install, right? Yeah. You basically have to build up the ZFS volumes manually from a live CD and then manually install everything. Having done a fair number of Linux installs in the last year, six months, whatever, I'll go ahead and tell you: I don't like the partition manager in any of the graphical Linux installers, because they don't give you granular enough control over anything, and you always have to resize shit so that hibernate works, or whatever. It's one of the reasons I kind of washed out of tinkering with desktop Linux. God, it's been almost a year now since I was fooling with that. But yes, the Fedora installer, for example, made way too many opinionated choices about how to arrange the Btrfs volumes, in a way that kind of defeated what I was trying to do, and I kind of got fed up. So you need to make Brad OS. I guess so. That's how you do this. I guess so. I really should just run through Linux From Scratch, but I'm pretty comfortable with all this kind of manual partitioning and stuff at this point.
Okay, so I had to do the manual Debian install, which meant that I had to choose how to configure my network, and, you know, make the UEFI boot entries myself. You're kind of picking everything. There are, as I've said many times, like six ways, for example, to configure your network interface in Linux, so you kind of just need to pick one and go with it. Yeah. And that's a no-wrong-choices situation, pretty much. Well, you know, this is a case where I would say, for example, the Debian handbook makes some recommendations about, like, how are you using this? If you're on desktop, you should probably use NetworkManager. If you're doing a server, systemd-networkd might be better. That's what I'm using. In that case, I think they describe it as the modern headless network configuration. Some people really don't like systemd stuff, Brad. I don't know if you know that. But that's a different topic for a different day. Yeah, it is. Okay. And then, like I said, I have to maintain the boot entries myself. We've talked about it before; I use a ZFS-specific bootloader that maintains compatibility to make sure that there's... Wait, is there a whole secure boot thing in there too, or are you not? Yes, I am doing secure boot there, which means that I generated my own secure boot keys that I enrolled into the UEFI, and I have to sign... It's called ZFSBootMenu, the bootloader I'm using. And yes, I have to sign every new version of that. Do we do that manually? Do you have hooks that do that when a new kernel gets pulled down? ZFSBootMenu I do manually. Oh, God. But I set up DKMS, which is the... Here we go again: did you know there are multiple subsystems in Linux for building kernel modules? Of course there are. There's DKMS, which Debian uses, I think Arch uses. But then there's akmods, which is used by the Red Hat-style distros. Anyway, those things can be...
DKMS in particular can definitely be configured automatically with your keys. Yeah. So anytime it builds a new ZFS module, for example, I have it set up to just sign it with my key automatically, and that's fine. I don't have to touch any of that stuff. But the thing I was going to say real quick: I'm making it more complicated on myself by running ZFS root, to be clear. Yeah, because theoretically, if you had installed on Btrfs or ext3 or something, you'd set up the machine, do all this stuff, and then you'd flip on the ZFS side. You wouldn't have to do any of this business. I could totally be doing an md-raid and ext4 setup for my boot volume, which is still mirrored and redundant, but using stuff that's in the kernel, and not have to worry about it. If you were doing this again, would you do that, or would you do the ZFS thing? I've thought about moving to it. The reason I clung to ZFS is because I'm already using it to manage all the other volumes in this system, and I kind of just wanted to keep it all to one style of drive management for everything. But I have thought about doing that before, and that's why I was so bummed that bcachefs got pulled out of the Linux kernel recently, because I was thinking maybe one day I'll move my boot volume to that, because that is in the kernel. But now it's not. Scene drama. Yes, that's definitely some scene drama.
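The automatic signing being described is typically wired through DKMS's framework config, which can call out to a helper script every time a module is rebuilt. A hedged sketch, assuming a Debian-style DKMS and self-generated MOK key paths (the file locations and the sha512 choice are assumptions; hook names can differ between DKMS versions):

```
# /etc/dkms/framework.conf -- tell DKMS to sign every freshly built module
sign_tool="/etc/dkms/sign_helper.sh"

# /etc/dkms/sign_helper.sh -- DKMS calls this with the kernel version ($1)
# and the path to the just-built module ($2)
#!/bin/sh
/lib/modules/"$1"/build/scripts/sign-file sha512 \
    /root/mok/mok.key /root/mok/mok.der "$2"
```

With that in place, a kernel update that triggers a ZFS module rebuild produces a module already signed with the key enrolled in the UEFI, so secure boot stays happy without manual steps.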
To give a very quick illustration of how this can get complicated because ZFS is outside of the kernel: they have to track kernel versions, meaning every version of ZFS they put out is compatible with only a range of kernels, typically up to the newest kernel, because they are pretty good about staying up to date. But occasionally, this is the point, occasionally a Linux kernel will ship before ZFS is ready to be compatible with it. So it is possible, and this has happened to me before, it is possible to update your kernel beyond the point where the kernel can work with the file system you've got your operating system on. So then how do you fix that? You boot off of a thumb drive and chroot into your system and roll back a snapshot, is typically how I've done it. Now, do you have snapshots set up, ZFS snapshots that are integrated with your bootloader, so you can just change back to the previous one? So, I think maybe a good way to focus this down, so I don't ramble forever here, is to go through the things TrueNAS does for you and sort of talk about my equivalent. Yeah, because the nice thing about TrueNAS is, if you had a bad update, you literally, on the boot menu, could just pull down the last five states of the machine, however many snapshots you had, and roll back to how it was yesterday. Yes, that's one of the things TrueNAS manages for you very well, doing auto snapshots. You do have to tell it to do that for some drives, but the system drive in TrueNAS, it just does snapshots every time you update. But yes, that's one of the things I had set up on TrueNAS, just having it snapshot. Every pool in the system gets a snapshot every hour, every day, every week, et cetera. And there's policy for how often to cull those. Those don't use any extra space, to be clear. That's kind of how copy-on-write file systems work. Anyway, I'm using a tool called Sanoid. S-A-N-O-I-D. Which has been around for quite a while.
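The thumb-drive rescue being described looks roughly like this from a live environment that has the ZFS tools installed; the pool and dataset names (`rpool/ROOT/debian`) and the snapshot placeholder are assumptions for illustration:

```
# From the live USB: import the root pool under an altroot
zpool import -f -R /mnt rpool

# Find the last known-good snapshot of the root dataset
zfs list -t snapshot rpool/ROOT/debian

# Roll the root dataset back to it (destroys later changes on that dataset)
zfs rollback rpool/ROOT/debian@<known-good-snapshot>

# Then chroot in to pin or rebuild the kernel before rebooting
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash
```

A bootloader like ZFSBootMenu can often skip the thumb drive entirely, since it can boot directly from an older snapshot, which is the "integrated with your bootloader" question raised here.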
It was written by Jim Salter, who is a sometime contributor to Ars Technica. I don't know if he still writes for them or not. I don't know. I've always tried to avoid the Noid. That's my understanding. But it seems like you've embraced the Noid. Anyway, Sanoid is the most robust tool I've seen out there for ZFS auto snapshots. It's command-line only. It is packaged for some distros, but I just went and installed the deb. In fact, I think you build your own deb from the GitHub repo and then install that. And it triggers a snapshot anytime you pull a package down from the Debian repository or something? Yeah, yeah. It's all policy-driven. It's probably worth noting here: in my setup, or a setup like this, unless you set up some kind of web UI, you're just configuring everything from a terminal, in text files and config files. But yes, it's policy-driven, where you can tell it, snapshot this pool every hour, keep 48 hourly snapshots, et cetera, per pool. You can configure that stuff. Sanoid has an optional subcomponent called Syncoid. Oh. Syncoid. For replicating the snapshots that it's auto-snapshotting to other volumes. Oh, across the network. Or locally. It'll basically use any path, over SSH or locally, so you could have USB drives plugged in to replicate to if you wanted, or, in my case, I swear to God I'm going to finish setting this up one day, the Raspberry Pi down the hall with the four-drive enclosure bolted to it. I was going to say, that's one of the things that was one of the appealing features for me on ZFS when I first started down this path, that I could set up another machine that's plugged in at, at the time, the Tested office, right? And I'd just have an off-site backup of my stuff that happens automatically. Yes. Though I don't bother with that anymore. Yeah, I mean, like all of this, it is a lot of work. But the workflow, typically, that I'm trying to achieve here is: I never think about any of this stuff once it's set up.
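The "text files and config files" in question look roughly like this; the template/retention mechanism is real Sanoid syntax, but the dataset name and the retention numbers here are made up for illustration:

```ini
# /etc/sanoid/sanoid.conf -- policy-driven snapshotting, per dataset
[tank/media]
        use_template = production

[template_production]
        hourly = 48
        daily = 30
        monthly = 6
        autosnap = yes
        autoprune = yes
```

Replication is then a single Syncoid invocation, typically run from cron or a systemd timer, along the lines of `syncoid tank/media backupuser@pi.local:backuppool/media` (hostname and pool names assumed), which handles the incremental `zfs send`/`receive` plumbing over SSH for you.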
I just use the server as-is. You know, I put things on there, podcast recordings go to it, whatever. At night, I've got it, or theoretically I will have it, set up so that every night it just syncs whatever that day's additions or changes are to the volume down the hall. And I basically never touch any of it unless I need a backup. Right. That's the goal. Anyway, I've also got the snapshots on the NAS set up as what used to be called shadow copies in Windows. I think they've changed the name. Shadow volume copies? Maybe that's it. Effectively, I can get properties on any network-mounted folder and it'll show me a Previous Versions list, and I can just roll back to that. It actually saved my ass a couple weeks ago, when I deleted some old Ramblecast recordings to clear space. That one is on a six-hour cull schedule, so like two hours after I deleted them, thankfully, I realized I needed something from one of those old recordings, and I was able to go into the previous versions through the Windows File Explorer and pull that file out before it truly got eradicated. That's good. And now, how am I sharing these things to Windows? Yeah, I was going to say, what services are you running on this? That's the big question. I'll kind of keep going down the line here. Samba, SMB, is what I'm using to share the volumes on this thing over the network. There are other options. There's NFS, which is a more traditionally Unix-y way to share files. Yeah, I use NFS for Linux-to-Linux talking, but not so much for Windows-to-Linux talking. That's exactly why, because Windows support for NFS is very ordinary. Yes, bad. It's garbage. Bad, isn't it? Sure. That's a fine way to describe it. Well, the nice thing about NFS is it preserves permissions, and it lets you map a user on one machine to a user on another machine, or use Active Directory or something like that to talk across both of them.
Not Active Directory, but a shared source of truth for which user is which user. Yeah. In a more natively Unix way, right? Like, you can do that stuff with Samba as well, but it's much more fiddly and arcane. I find with Samba you usually crush down the permissions to the least common denominator that everybody needs to be able to do the thing you want to do. Yeah, it's kind of a mess. Anyway, not a lot to say about Samba. Like you, I was using Samba, or SMB, in TrueNAS as well. And again, I do it because Windows is good with it. macOS is also good with it, so I can mount those volumes from my MacBook as well. It's funny, I often use AFP on my Linux machines to talk to the NAS when I'm on desktop, because, for whatever reason, Nautilus and all the file managers that I've used in desktop Linux almost always see the AFP shares on my Synology NAS before they see the SMB shares. Interesting. I don't know why that is. It's just a weird side effect of opening up the network browser and being like, oh, okay, there are two things here, it's the same machine both times, but this one always pops up faster, so I use that. What was I going to say about... Oh, I've got all the advertisement turned off for that stuff, so I always just mount stuff by IP. Yeah, I have other people using my computers. Yeah, I don't have that concern. So, Samba, not a lot to say there. You configure it by managing smb.conf. It's relatively straightforward, although there's a ton of complexity to advanced Samba configurations we don't need to get into. Yep. Okay, services and containerization. Yeah. That is, probably the thing that, I started to say, probably the thing that was the most problematic on TrueNAS. I don't think that's quite true, but it was the thing that, on BSD, you only had jails for. Well, and then they also had plugins, but plugins were a little fraught.
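The one genuinely fiddly part of an otherwise plain smb.conf is the Previous Versions trick mentioned earlier, which wires Samba's shadow_copy2 VFS module to ZFS's hidden `.zfs/snapshot` directory. A hedged sketch; the share path is an assumption, and the `shadow: format` string must match whatever naming scheme your snapshot tool actually uses (shown here in a Sanoid-style form):

```ini
# /etc/samba/smb.conf -- illustrative share with ZFS snapshots exposed
# as Windows "Previous Versions"
[media]
    path = /tank/media
    read only = no
    vfs objects = shadow_copy2
    shadow: snapdir = .zfs/snapshot
    shadow: sort = desc
    shadow: localtime = yes
    # Must exactly match the snapshot names on disk
    shadow: format = autosnap_%Y-%m-%d_%H:%M:%S_hourly
```

Once this matches, right-clicking the mapped drive in Windows File Explorer and choosing Properties, then Previous Versions, lists the snapshots, which is exactly the recovery path described for the deleted Ramblecast recordings.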
Yeah, I never even touched the plugin stuff, because people were like, oh, their Plex plugin is super outdated, or doesn't work right, or blah, blah, blah. Yeah, the bigger problem with the plugins was that they would always have some sort of weird limitation. So if you were doing something that had relatively low overhead and relatively low interconnection with other services, it was usually fine. But anytime you wanted to do anything that went beyond the default, then it was a problem. Like, for example, the Plex plugin was great. Figuring out how to get your Plex database out of that Plex plugin... The Plex database contains the list of movies, the number of times they've been watched, the users that are allowed to attach to your server, all that kind of stuff, and a giant cache of all the art that gets displayed in Plex around each movie or TV show. And figuring out how to get that data out of the Plex plugin on FreeBSD was a nightmare. I bet. Man, I mean, it's not easy to get it out of a manual install either. No, it sucks. If you've ever migrated Plex from one operating system to another, their support page for doing so is like 20 steps long. Yeah, and you have to be kind of careful about it. Have you done it? Yeah, I've done it three times. It sucks every time. They are very specific about, like, okay, before you log into the Plex on the new machine, you have to log out of the one on the old machine. Make sure you don't clear the trash on that one, though. Copy these files over. Do this, do this. It's weirdly sort of fragile, it feels like. So I have that set up now so that my Plex database is just a folder on a network share on the NAS, and when I connect to it, I have to reverse-engineer how I did that every single time, because it's a huge pain in the butt. It sure is. Okay, so, my understanding is, since TrueNAS has moved to Linux, I mean, obviously, kind of like me moving to bare Linux, that has given TrueNAS
users way more options for the types of containers and plugins they can use. Are they still using jails? Well, no, not on Linux. Boy, that's a big topic. Okay, so, for folks who don't know jails, basically the short, short version of what a BSD jail is, we've talked about these before. It's really cool. It's my favorite thing about BSD. It's the reason I used TrueNAS for such a long time. It basically lets you spin up what looks like a virtual machine on your network. It has its own IP address, so you can give it its own DNS entries, stuff like that. And it takes the system files from your main OS and lets you use them as if they were in a virtual machine, without the overhead of a virtual machine. Yep. That's kind of the short, short version. So the reason for that is because they're running on the same kernel as the rest of the machine. So they don't use extra memory, and you're not siloing resources off to them that only they can access. Yeah, that's the big one. It's not like running a virtual machine, where you say, oh, okay, I have four cores total on this machine, I'm going to devote two cores entirely to this VM, and then the main OS won't be able to use them anymore. It lets you resource-share much more effectively. It's a trade-off of lower resource overhead for less security, because a VM is much more sandboxed and much more impenetrable than... It also means that you can access the file system a little bit easier. Basically, you'd say, okay, I want these sub-volumes of your ZFS array to be accessible by the jails. For a corporate user who's really worried about security, maybe not the best solution. For a home person who wants to make it easy for their Plex server to see their movies? Piece of cake.
Yeah, so maybe I'll go in order of how I came to these things chronologically, because that might make it a little easier to explain and build on what we just talked about. So, in FreeBSD, I would say jails are extremely secure, because they are built in at the kernel level. They are what people would refer to as a first-class feature of the operating system, meaning they're fully integrated into the kernel, fully security-audited. I don't think a jail breach has ever been demonstrated. They've been around for a really long time, 20-something years or whatever, and they're battle-tested. Linux does not have an equivalent at that level, built into the kernel. Yes, the Linux kernel itself does not expose a comprehensive containerization pattern, or whatever you want to call it, in the way that FreeBSD does. Now, there have been Linux containers for like 15 years that are built out of first-class Linux kernel features, and I'm getting into stuff I can only somewhat understand, like namespacing and control groups. Well, at a practical level, the solution that solves the same problem in a different way on Linux is Docker, right? Well, yeah, I mean, that's what I'm getting to, and that is largely what I have moved to, but like I said, I'm going in order. When I got to the Linux NAS, I basically was trying to replicate everything that TrueNAS did, one-to-one. Yeah, it's a bad idea. Well, it was fine. It worked fine for a while. For a while. Dude, it always would have worked fine. It was not like it was broken. It was just that the management was, ultimately, much like on BSD: effectively, you're managing a whole little OS for every service you're running. I would argue that onerous management is not working fine. Yes, yes. I mean, that's kind of where I got to eventually, for sure.
So the classic Linux container technology is LXC, or I guess "Lexi," maybe, is how you're supposed to pronounce that, but it is three letters, L-X-C. That is the old-school Linux container technology that was kind of derived, as a BSD jail equivalent, out of kernel features in Linux. I used that for a long time, like years, maybe, after I moved to Linux. The tooling around that stuff is very bare-bones. It was a huge pain in the ass to administrate, running those containers in a secure way, a rootless way, because I wanted to not run those containers as root. Yeah, because if you're going to be running services that are exposed to the outside world, that's a vector. Then you're giving those services root access to your machine, which was bad. Anything that's pointing to the outside internet, in my book, you don't want running as any kind of privileged user, because if those things are compromised, they get a lot of access to your machine all of a sudden. Without getting into too much detail, the LXC management was extremely fiddly. The command-line tools were super bare-bones. You kind of had to figure everything out yourself. Running those containers rootless was a lot, although it did help me quite a bit for where I ended up, which is: after a couple years of that, I got tired of it. I was literally having to update a whole different Ubuntu install for every service I was running. I was going to say, the thing I like about this is using the LTS versions. I'm using an LTS version of Ubuntu, not Debian, but it means I don't have to update it very often. The updates are relatively infrequent. They're generally less scary than the rolling distro I'm running on my desktop. And for things that I use all the time, where it's important that they not break, it's really nice to log into that machine and be like, oh yeah, you don't have any packages to update.
If I had to do that for five Linux machines, I would kill myself. I think at some point I wrote a tiny little wrapper script, I mean, we're talking like three lines or something, to just update all of them at once. Not difficult. I should point out one cool thing, though, about that setup: on FreeBSD, the jails are just another copy of the same FreeBSD host that you're running on. With the Linux stuff, because there are so many different distros, I could be running Debian stable as the host, but you can pick any distro, and any version of a distro that they offer an image for, as your containers. So I was running much newer Ubuntu server images as my container distros on top. You can totally do that. It's fine. And you were able to do things like pass through access to QuickSync and stuff like that? Yeah, I mean, that took work, for sure. Like, I had to learn more about Linux, slash /dev/dri. Yes, yes, exactly. renderD128 was getting passed in. Dude, we could, this could be a two-hour... Deez? Uh-huh. We could sit here for two hours talking about this stuff. I had to get pretty well acquainted with the way Linux handles hardware devices as files to do that, because, again, LXC is well supported, but it's not well tooled, I guess is how I would put it. The tools, again, are not friendly at all. You kind of have to figure everything out yourself. So, is the documentation good? It's not nothing, but it's not enough. Okay. Okay, as a specific example: the technique I ended up using for passing the QuickSync device into my Plex container, which I did all the work for before I found out that Plex requires a Plex Pass to use hardware transcoding. Although I was also running Jellyfin, so it was at least useful for that.
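For flavor, a device passthrough of the kind being described usually comes down to a couple of lines in the container's config file. A hedged sketch for classic LXC; the container name and config path are assumptions, not necessarily the forum-post technique used on the show:

```
# /var/lib/lxc/plex/config -- expose the Intel render node to the container
# 226 is the character-device major number for DRM devices (/dev/dri/*)
lxc.cgroup2.devices.allow = c 226:* rwm
lxc.mount.entry = /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

The first line lets the container's cgroup open DRM devices at all; the second bind-mounts the render node into the container's /dev. For unprivileged containers there's typically also user/group ID mapping to sort out so the container's `render` group can actually read the device, which is where the "figure everything out yourself" part comes in.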
The solution that I ultimately used, which I was quite happy with, for passing that QuickSync device into the container, is something I just dredged up from a random LXC forums post from three years before, to be clear. When you ask, is the documentation good? The bare-bones stuff is good, but beyond that, if you want to do anything advanced, you're digging through forums. Oh, great. Okay. I don't like that, for what it's worth. Yeah. Yeah. It's a lot. So where'd you end up? Where am I at now? I have mostly moved off of LXC. I'm not running the LXC tools as such anymore. At first, I can't remember what order I went in here, but I'll just talk about Docker. That's the thing everybody cares about here. I mean, that's the right answer for this problem, it seems like, most of the time. Except I'm not running Docker. What? I'm using Podman to run Docker images. Well, that's still running Docker. Yes, it's still running Docker images. The images are compatible. They're two different tools that run the same things. Yeah. Is Podman a Portainer? No. Is that the lower-overhead version of Portainer? Is that a different thing? Not at all. It's totally separate. Podman is... You could look at it as a different implementation of what Docker does. It's a different container backend. But you use, like, the same kind of Docker Compose files or whatever? No. Oh, you could. There is a podman-compose tool that can use effectively the same YAML files if you want it. But I want to say even the developers of podman-compose basically say, hey, this is here if you're coming from Docker and want to use the same style of YAML configuration you're used to, but there are better ways to do that now. Podman came out of the Red Hat slash systemd, that whole constellation of products and services. Okay.
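The "better ways" alluded to are presumably Quadlet, Podman's systemd-native container definitions: you drop a small unit-like file in place, and systemd generates and manages the service for you, including rootless. A hedged sketch; the image, port, and paths are assumptions for illustration:

```ini
# ~/.config/containers/systemd/jellyfin.container -- rootless Quadlet sketch
[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=/tank/media:/media:ro
# Pass the QuickSync render node through for hardware transcoding
AddDevice=/dev/dri/renderD128

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, this shows up as an ordinary `jellyfin.service`, which is the appeal: the same systemd tooling manages containerized and non-containerized services alike.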
Let's say it's the standard Linux thing of somebody comes up with a way of doing something, and then somebody else says, you know, I see X, Y, and Z problems with that; I think I can do it better. Although, again, coming out of the Red Hat world, it is very robustly supported and developed. Podman is not a fly-by-night thing at all. Podman and Docker are two different ways of doing the same thing, which is downloading "images," quote unquote, of applications you want to run: fully self-contained, sandboxed bundles of the core binary of the service, all the libraries that are required to run it, you know, kind of a... Well, yeah, the idea is that it gives you the machine state that you need to run the service that you want. Now, ironically, it's doing effectively the same thing, and in fact using the same kernel technologies by and large, as the LXC containers I was doing. It's just a friendlier interface on top of them? Well, no. What it is, it's just bundling up only exactly what that application needs to run, unless it's a poorly made image and somebody puts way too much stuff in there. But by and large, well-authored images... and people on the Linux channel on our server are definitely going to scream at me at some point in this conversation. That's fine. My understanding, in the little bit I've looked at how you build Docker images, is that you really only want to put exactly as much as you need for the image to work and nothing more in there. Yeah, that's right. So Docker images are also typically based on specific Linux distros, but they're so stripped down that they're not full Linux distros. My understanding is you could run a full Linux distro inside a Docker container if you wanted. Yeah, people do; you can run a full desktop inside a Docker container if you want. But in fact, I wouldn't do that.
There is Distrobox, I think, is the tool I see recommended; a tool that can use either Docker or Podman as its backend. Yes, it is Distrobox. It lets you try out other distros in a Docker slash Podman container type format. So that's yet another way to do the thing I was doing before. So what are you running in all these Podmans? I actually don't run that many. It's typical media service stuff: Plex, Jellyfin, OwnTone. But OwnTone... OwnTone is your MP3 server? Well, it's the AirPlay server. It's the thing that exposes itself to the network as an AirPlay library. Like, what are the products they make that can do this? Back in the day, your iTunes running on your Mac would show up as a library to play from, but these days it's more like... well, Music does that. Or, I mean, from Apple, does the HomePod do that? Is that what the HomePod is for? You just ask it to play something and it plays; you don't have to think about it. Okay, well, at any rate, OwnTone is a way to look like you have an iTunes library on your network, because I have a lot of AirPlay speakers in the house. It's funny, the AirPlay speakers you're talking about are not current AirPlay. The thing you're talking about is that iTunes library sharing thing from, like, 2008, where one day you turned on iTunes and all of a sudden you saw everybody else's iTunes in the office, and then you could steal MP3s from them. Well, could you actually get the files? You know how to do it. Of course you could. Of course you could. Yeah, but okay, so real quick, do we want to get into why I'm using Podman instead of Docker? No, it's fine. Is it better? I mean, that's a big thorny question. Just say one way or the other. It's a yes or no, Brad. I like it better, but I've barely touched Docker.
I have literally used Docker for about 30 minutes ever, so I just don't have a good frame of reference. But my understanding is the reason Podman came to be is because the people who made Podman, again, people in that Reddit... Red Hat, Jesus, my brain is so poisoned. The Red Hat, like, systemd kind of world looked at Docker and felt Docker was not taking seriously enough letting you run containers not as root. Because, again, like we said, running services as root is dodgy from a security perspective. I'm questioning a lot of my choices right now, just for the record. Well, my understanding is Docker has since finally, belatedly, gotten around to more or less fixing that problem. But I'm really speaking out of turn here; I think out of the box it's still probably running everything as root unless you go out of your way to change that, but don't quote me on that. I don't actually pay attention to that, usually. Anyway, very briefly, Podman is basically built from the ground up to let you run containers as unprivileged users, where with Docker, I think, that had to be retrofitted on. Okay, that makes sense. That is basically the selling point. Podman also doesn't run a daemon, meaning a monitoring process, a supervising process, in the background. Did you say daemon? Yes, I did. Don't you mean daemon? No, I don't. It says D-A-E-M-O-N right here. That's right. Look it up in the dictionary, man. Daemon. Not to appeal to authority here. Opposite of the night man.
Not to just appeal straight to the dictionary authority, but if you look it up, it says archaic spelling of demon. I don't know how I feel about this, but I'm gonna let it go. I'm gonna let the chat handle this. Dude, the FreeBSD mascot is a devil. Yeah, there's a reason for that, I'm pretty sure. Anyway, whatever. Yeah, Podman doesn't run a supervisory process that oversees everything else the way Docker does. Security stuff. So I run all of my containers, all my Podman containers, under a kind of nobody unprivileged user. Okay. And then, we don't really need to get down this road too much, but I'm actually using a setting where the actual processes inside the containers don't even run as that user. They run as a subordinate six-digit user ID, like an ephemeral user ID. LXC does the same thing; this is actually pretty standard Linux stuff. So, like, my Plex process and my OwnTone process are actually running as user 100999. How do they have access to the... do you have to give that user access to your files in the file system? Okay. ACLs. Do you remember when we interviewed Jeremy Allison, I believe, the Samba maintainer and co-founder? Remember that? Remember him talking about ACLs? I had always called them A-C-Ls; he called them "ackles," which explains a lot.
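The subordinate-ID mapping and the ACL grant described here can be sketched roughly like this. The account name and ID range are hypothetical examples, and 100999 is just the UID from the conversation; the paths are placeholders:

```shell
# /etc/subuid and /etc/subgid give the unprivileged service account a range of
# subordinate IDs, e.g. a line like (hypothetical account "podsvc"):
#   podsvc:100000:65536
# Container processes then land on host UIDs like 100999, which get access
# via ACLs instead of changing ownership:
setfacl -R -m u:100999:rX /tank/media      # read/traverse on the existing tree
# Default ACLs (directories only) make newly created files inherit the entry:
find /tank/media -type d -exec setfacl -m d:u:100999:rX {} +
getfacl /tank/media                        # inspect the result
```

The `rX` permission grants read plus traverse-into-directories without making anything executable, which is usually what a media server needs.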
You know, he's been around a while; he's got more authority in this domain than I do. I always said A-C-Ls. But ACLs, and this might be something that comes up on the Dual Boot Diaries at some point, are access control lists. They're a way to give files additional permissions subordinate to the main ownership permissions. Yeah, it's like the advanced permission stuff in Windows NT. Yeah, Windows and NTFS have something similar; in fact, NTFS has ACLs. It's totally a cross-platform concept. Anyway, I use ACLs to basically give that six-digit UID access to the directories it needs and nothing else. Oh, see, I just crunch everything down to user 1000 and share across the network, because I'm connecting across the network, and it gets really weird and complicated if you don't do that. Yeah. I need to very briefly defend my kind of security-maximalist position. A, this machine's got, like, my whole digital life on it from college on. B, theoretically I am once again going to point it at the internet at some point. Like, I ran a Minecraft server and a little web server on it for a while; those things are not currently running, but I want to start using it for game servers and, like, a blog and stuff again at some point. And, to be real, I'm just enough of a minor public figure, and have had just enough contact with anonymous trolls in the past, that... oh, I'm a little bit paranoid. I feel like it was a long way here. That's why you have to VPN into my house to use my servers now, right? So the point I'm making is the average person maybe does not need to be quite as stringent about containerization and siloing things off as I am. I think my solution for that problem is to just put everything on a different machine. Yeah, right, sure. Yes, that is... it's not quite air-gapping, but that's certainly more physical separation than having all the storage in the same machine that all the services are
running on. Yeah, because with the outward-facing services, I have the two machines: I have the NAS, and then I have the little Beelink with the Alder Lake Celeron in it. And basically I do least-privilege permissions for each of the services, so anything that lives outside my network doesn't get write permission to the NAS, and I just enforce that via the SMB shares or NFS shares, whatever I use. So even if one of the services gets compromised, it still only has access to whatever network shares are mounted from there and not the entire system. Yes, exactly. Yes indeed. Okay, so, Podman instead of Docker for application service containers. I still do use LXC containers off and on, not as much. They should have called those Podman containers "podcons," just for the record. Or "pod cans," maybe, I don't know. Yeah, "pod man" sounds like a kind of crappy B-tier Marvel superhero. Yeah. So, like I said, I still use the occasional LXC, like, full-distro OS container. What I'm now using for that is a tool called Incus, I-N-C-U-S. Is that based on the demon, incubus? No, it's a type of cloud, I believe: the cumulonimbus incus. Okay. It's the anvil cloud, actually; I just looked this up the other day. Do you know, have you seen those clouds that form and look like an anvil? No. You should Google it. Yeah, cumulonimbus incus is the type of cloud. It's wild-looking. I've certainly never seen one of these in person. Yeah, I've never seen one in person either; I think I have read about them. It's a type of cloud that levels off on top and looks like a literal anvil. It's kind of crazy. Phil Plait occasionally posts pictures of clouds, because he lives in Colorado, where they have good clouds. Yes. Yeah. So what Incus is, actually, is a fork of LXD. Have you ever heard of that? Yeah, I think we talked about it a little while ago, and I said, LXD is what? Yes, you made a face at me.
We sure did. LXD came out of Canonical, the Ubuntu people. Yeah. It's basically a more advanced way of managing both LXC containers and VMs, virtual machines. Okay. Actual virtual machines; it's just using QEMU to run virtual machines under the hood. I'm sure you're familiar with that. I'm going to ask a question just so the audience knows, for those who don't: hey Brad, what's QEMU? It's lower-level virtual machine management technology on Linux that hooks into KVM, the kernel virtual machine hypervisor. Okay, that's cool. I haven't actually used that; I use Bottles and stuff. I don't use virtual machines, I swear. I've heard Adam talk about different ways to manage virtual machines a couple of times. Yeah, and some of those front ends, I don't know if Bottles is like this, but some of those front ends you're talking about on Dual Boot Diaries will also be something that sits on top of something like QEMU. In fact, that's what Incus is like. All Incus is doing is being a slightly friendlier front end to both LXC and to QEMU for VMs. Bottles is just an easier way to package up Wine for running Windows apps; that's a totally separate thing. Anyway, Canonical still makes LXD, but there was other scene drama, and the original LXD developers quit the company and forked it, and it became Incus. Okay. So, really, the only point of that is it's kind of nice to manage your system containers and VMs from the same common command-line interface. That makes sense, yeah. The syntax is all the same; it's quite easy to manage. That's not even remotely the only way to do that exact thing, though. Stop me if you've heard this one before. I mean, look, there's an infinite number of ways to do pretty much everything in Linux. Yeah, there's stuff like libvirt, which I think is probably actually more popular than Incus, that does the same thing and a lot of other stuff at the same time.
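As a sketch of the "one CLI for both containers and VMs" point, this is roughly what the Incus interface looks like; the instance names are examples:

```shell
# System container and QEMU virtual machine through the same interface
incus launch images:debian/12 files01        # LXC-style system container
incus launch images:debian/12 winbox --vm    # actual VM via QEMU/KVM
incus list                                   # both show up side by side
incus exec files01 -- apt-get update         # run commands inside either one
```

The only difference between the container and the VM at the command line is the `--vm` flag; everything else, launching, listing, exec, snapshots, uses the same verbs.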
Anyway, now you're reaching the part of the list where it's just a series of letters, like, hey, here's the first four letters on the home row of the keyboard, and then four letters that look like a word but aren't. Do you know how hard it was to get Google Docs to let me just type ASDF into this bullet point and not have it autocorrect to something else? Look, the new Google Docs autocorrect blows. They added AI to it, and it's the worst thing ever. Notepad was autocorrecting my typing this morning; I had to dig into the settings and turn that off. What are they doing? I'm increasingly on the, maybe I should just go back to one of the Simplenote clones, or an Obsidian branch, or one of those. I installed the Obsidian app. I have not spent a lot of time on it, but I want to start. I need to get off of Simplenote and start syncing text files to something I control, so I need to put some time into self-hosting it or putting it on my VPS or something. The problem is, I've been using Notion now for like three years, and the wiki structure of that turns out to be really good for the complex, branching, many-faceted notes I like to make. Yeah. And Obsidian just can't. Obsidian is always, like, one layer. It sucks that that's a proprietary Notion feature, right? Pretty much. You can't just port that to some other application. I'm sure somebody's done it, but it's not in mainline Obsidian, for sure. Sure. Okay, so very briefly here: I use a tool called asdf, which is a multi-runtime manager. If you're running a server, if you're running a big NAS like I'm doing and it's just Linux, this might be something that would come in handy for you. What it is is a way of installing runtimes for kind of every programming language you can think of: Python, Rust, Ruby, Go, a bajillion others.
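A sketch of what using a runtime manager like asdf looks like in practice; the version numbers are examples, and newer asdf releases have shuffled some subcommands, so check the docs for the version you install:

```shell
# Install a Ruby runtime without touching the system packages
asdf plugin add ruby
asdf install ruby 3.3.0
asdf global ruby 3.3.0    # older asdf syntax; newer releases use `asdf set`
# Now gem installs into the managed runtime, not the system one
# (the utility name below is just a placeholder)
gem install some-utility
```

The point is isolation: the distro's own Python or Ruby stays untouched, and each tool you run can pin whatever runtime version it needs.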
There's another tool out there now. Why not just do that in the system packages? So that's something you will discover quickly if you start tinkering with that stuff in your Linux distro: you really do not want to mess around with the system Python, the system Rust, the system X, Y, or Z. It might vary by language, but Python in particular is one where you will constantly... in fact, I think pip, when you try to install Python packages with the system-installed version of Python, will straight up warn you: hey, you should not be installing random packages into your system Python; you should use some kind of runtime manager. Because the different versions of Python aren't compatible with each other, is the main reason. I'm not well versed enough to know exactly what all the risks are there. But anyway, I use asdf. There's another tool that I think is getting more popular now called mise, M-I-S-E. Okay, as in mise en place? Oh no. Yes. Look, man, I get it. Naming stuff's hard; I'm not going to judge. Naming an open source project is probably pretty difficult, but I would probably be tempted to just name it, like, "thingd" at this point, whatever it is, d. Look, I have a lot of respect for the person who's like, fuck it, I'm just going to call it asdf, because that's the first four letters my fingers hit on the keyboard. I think it is kind of amazing that nobody ever called a piece of software asdf until now. Yeah, man, that's pretty good. Okay, it's pretty good. Anyway, that might be something to look at if you're going to run a bunch of little random services like I do, in different languages. It's nice to be able to just say, hey, I just want to install a Ruby runtime and then use the gem package manager that comes with Ruby to install this little utility. Typically when I want to do something like that, I just grab a Docker container and do it in that. Yeah, because
that handles the business for me. Like, the benefit of using Docker, I think... I don't know how many people are building images for Podman, but you go to... No, so actually Podman images are the same as Docker images. There's no distinction there in terms of creating them; you don't make a Docker image versus a Podman image. I probably should have specified that. Podman is actually command-line compatible too. Especially when it was new, they were just straight up like, you can alias podman to docker if you want and keep running the same Docker commands you've been running. It uses the same images, runs the same commands. It's just a different backend, but in terms of the user experience, pretty much the same thing. So if you go to one of the sites that has big lists of Docker images that people have built, you can just grab them. Wow, that's amazing. Half of the images I'm running with Podman are straight off of docker.io. Oh, that's rad. They're all the exact same images. Actually, sorry, there's one other detail about Podman that you will appreciate that I really do need to mention, which I think is the reason to use it now, besides the rootless stuff, which is that the more modern versions of Podman integrate very directly with systemd. Oh! So I don't actually touch the Podman command-line interface at all anymore. So you make a systemd file that launches them? If you're familiar with what systemd unit files look like, like a .service file, .timer, .mount, there are a million types of systemd units, but you write a .container file with the same syntax that you would write a service file with. Oh, that's wild. And then it auto-generates a service off of that .container file that runs the Docker image. So you never even touch the Podman tools at all, like the command-line stuff, at all.
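The .container files described here are Podman's Quadlet format. A minimal sketch; the image, port, and paths are examples, not Brad's actual config:

```ini
# ~/.config/containers/systemd/jellyfin.container (example names and paths)
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=/tank/media:/media:ro

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Podman's generator produces a `jellyfin.service` unit from this file, and you start, stop, and enable it exactly like any other systemd service.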
You just start and stop those services like you would every other systemd service. Wow. So, at some point this year, I hope that Wildcat Lake CPUs will launch and computers will still be inexpensive enough that I can afford to buy one, and I'm going to move my little Beelink server over to one of those and rebuild everything from scratch. My initial thought was to just do everything on Nix, but maybe it's easier to do it on, like, Debian stable and then do everything with Podman and systemd. That's neat. Docker is still totally viable, to be clear, if you'd rather use Docker. What I really like is using one interface for everything; that's kind of a theme here. I like using ZFS for every volume in the system. It's nice to use Incus for both LXC and VMs. By the same token, I'm running other custom systemd services that are not Podman slash Docker images. Some of those are just, like... Loki is just a Go binary that I download from the repo and run directly via a systemd service, right? So it's nice to manage all of your system services with the same general paradigm, is the other reason. How do you back this up, Brad? Snapshots? How do you back it up, though? Snapshots? Well, like I said, I still have not gotten to that backup solution. Oh, the second machine in the other room. Okay. Everything that matters, like all the Podman config, lives on a specific ZFS dataset, for example, and that gets auto-snapshotted. But yes, I do need to be replicating this. You should at least save that to Dropbox or something. Yeah. It would be a bummer if that drive died. That's why it's mirrored, though. Yeah. You know, a meteor would have to fall on that computer to take it out. Yeah, or, like, a bug that wiped out the ZFS. Yeah, sure. It's on the list, I promise. Cosmic rays. I promise the nightly down-the-hall replication thing is on the list. But anyway, okay.
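The snapshot-and-replicate setup being promised here typically comes down to a couple of commands. This is a sketch with made-up pool, dataset, and host names; Syncoid, mentioned in the show notes, is the replication wrapper from the sanoid project:

```shell
# Snapshot the dataset that holds the Podman config
zfs snapshot tank/podman-config@nightly
# Replicate it to the machine down the hall over SSH; syncoid handles
# incremental sends automatically once a common snapshot exists
syncoid tank/podman-config backuphost:backup/podman-config
```

Wrapped in a systemd timer, that pair of commands is the whole "nightly down-the-hall replication" job.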
That's kind of the software stack. This is all living in the server, just to be clear. Yeah, that's all on this big machine. That's pretty much all of the stuff that TrueNAS would be doing for me. There's one last thing I should mention, which probably went without saying, but just to reiterate: there's no web UI here. This is all SSH, all command line. It's kind of on you to interface with it and get your head around it. Yeah, I mean, I'm going to tell you, I have been on both sides of this over the years, and spending a few months learning how to use desktop Linux has made me much less wary of that. The design of the command-line interface, especially if you're using systemd stuff, is initially nonsensical, but once you understand the logic behind it, it makes a lot of sense, and it's pretty straightforward in a way that I wouldn't have imagined I would be saying six months ago. Yeah. So I wouldn't be afraid of that if you have time to learn. And actually, I would say modern command-line stuff, I don't know what this philosophy is called, like Podman, systemd, they all use this kind of noun-verb command structure now that I'm sure you've seen. Like, systemctl is kind of the main systemd thing for interacting with the system, and it's all systemctl start, systemctl stop, systemctl status. Or, like, podman image pull. It's all very verbal now. The niri commands are like that too: if you want to find out about something, you use niri msg; if you want to change something, you use niri msg with an action, or whatever. Command-line design philosophy has become much more verbal and human-readable in recent years, in a way that is a lot easier to get your head around. Yeah, there's a lot less, hey, you gotta do two dashes
and then a capital M and then a space and then an equals sign. Yeah, it's much more human, definitely less arcane. The last thing I'll mention: I've got a PiKVM hooked up to that machine for actual remote admin, if I really need it when I'm out of the house and need to get into the BIOS of that machine or something. I also have a serial console running out of that machine. I love a serial console. Yeah. I had to go in there and look up the, what is it, the UART, I guess. Does it have a serial port on the motherboard, or do you have to use a USB dongle? It's got a header. My motherboard has a COM port header that I run to a serial connection. Wild. Good old DE-9, right? It's not DB... I think it's DB-9. I think it's DE-9. I don't remember, actually. Anyway, it's one of those that I run out to the PiKVM, and that gives you a raw terminal in your PiKVM. The PiKVM, for people who don't know, is a Raspberry Pi hooked up to a video capture thing that lets you basically get 1080p video of a remote box on another machine. There are a ton of cheaper options for this now; the PiKVM seems like it kind of exploded a market of cheap IP KVMs. You can get them for like 100 bucks or less now that do basically the same thing. They also have things like jumpers that you can plug into the power switch header on the motherboard, stuff like that. So, yes, with the serial console, I have both video output out of the server and serial, like, text output of effectively the same thing. Can you do text input through the serial console too? Yeah. I can SSH into the PiKVM, and then I run a screen session. That's another thing I made a little systemd service for: GNU Screen, which is like tmux, another one of those terminal multiplexers. But the nice thing about Screen is it can open serial connections.
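A sketch of the kind of unit that keeps a Screen session attached to the serial line; the device path and baud rate are examples, not necessarily the ones in use here:

```ini
# /etc/systemd/system/serial-log.service (example device and speed)
[Unit]
Description=Persistent GNU Screen session on the server's serial console

[Service]
# -D -m starts Screen detached without forking, so systemd can supervise it
ExecStart=/usr/bin/screen -D -m -S serial /dev/ttyUSB0 115200
Restart=always

[Install]
WantedBy=multi-user.target
```

You then attach from an SSH session with `screen -r serial`, and the scrollback buffer accumulates console output for as long as the session lives.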
It has built-in support for that, so I just have a little service that starts up at boot and connects to that serial output. And the cool thing there is that that persists across reboots. I've got a text log of whatever the console of that server was spitting out for as long as that screen session has been running on the PiKVM, which in a lot of cases is weeks or months, right? And I can go into that buffer and save the whole thing out to a file, and have a log of the last however many months of what the machine was doing. Just do a stream of that. Just put that out on Twitch. Yeah, sure. I don't use that very often, but if that machine crashes hard, it's nice to be able to go see what was going on in the serial output before it went down. It's incredible, yeah. It's a nice level of redundancy. But that's pretty much everything. Okay, so my stuff is a little bit... I mean, it's a little more complicated in some ways and a little simpler in others. We've talked about some of it already, so I won't repeat it. Sorry, I didn't mean to dominate most of the episode. You spend a lot more time with this. My theory on this is that I want to set something up and then not think about it until it breaks. See, this is where the philosophical divide happens. You put the work in to set it up, and then when you're done, you're like, all right, that's done, I'm going to go do something else. I put the same work in to get it set up, and then when that's done, I'm like, now what else can I fuck with? I go looking for...
I've got the proverbial hammer looking for a nail, where you're probably living a healthier, more balanced life of other interests. I don't know about that, but my weekends are not really mine these days, you know. Parenthood. Yeah. So, I'm running the NAS, the Synology. It is a DS1520+, which is a Celeron J4125, which is, I don't know, it's Gemini Lake, I think, is the code name for that. But it's basically four cores, it has decent Quick Sync, it can do 4K transcodes, and it has 16... 16 gigs of RAM, that might be the max, I can't remember. And that whole thing is the bucket, right? That's the bucket; that's what holds the data. It has, I think, five drive bays. I have three drives in it, usually. Oh, really? Yeah, I don't have all the bays filled. Interesting. You know, look, man, electricity is not cheap, and each of those drives costs money. The thing that happened is I was looking at what drives cost today, and for what I paid for those, I think, 10 or 12 terabyte drives, I could get like 20 terabyte drives now. Yeah, drives do get cheaper pretty quickly over time. What are you running, like one parity? Btrfs with one parity? Yeah, on the Synology. Yeah, Btrfs. Oh, I didn't know you could. I thought with Synology you had to use, like, their own thing. So they used to have their own thing; they switched to Btrfs around the time I bought this device. I didn't know that. I bought this particular model because it was one of the ones where they were like, oh yeah, you can use Btrfs on this. Oh, okay. So, basically, I run the heavy stuff on the Beelink. The Beelink is an Intel N5105, which is one of those low-power Lake chips. It's about five watts at idle, which is nice. Like Jasper Lake? Yeah, that's a Jasper Lake. Interesting. It's low power, for always-on. Now, the bad thing about that particular CPU is that it's starting to reach end of support in most distros for
the transcode stuff. So I'll have to either grab backports from the repos for the transcoding to work, or just stay on the LTS version a little bit longer than I maybe would naturally. You know what you can do, and I have run into this before: if the distro you're on stops shipping the firmware files, the .bin files you need to make the Quick Sync device keep working, you can actually just go to the kernel tree, like the kernel.org tree, and find the right .bin firmware files for the Quick Sync on that thing. You literally just copy those into the relevant directory, and the kernel will just find them and load them. Or, I think you have to do a modprobe setting, you have to tell it to load them or something. But I can also just use an older version of the kernel, right? Yes, but I think you're better off just telling it, hey, here are these firmware files. That's wild. You can totally do it. It's weird; the time I did that, it felt wrong. I was like, I feel like I'm messing with things I shouldn't be touching here. Realistically, given the timing on the Wildcat Lake stuff, I'm probably just going to not update this machine onto an OS that doesn't support the hardware transcode on it. Oh, I hope that stuff comes out soon. It would be nice. So, yeah, my strategy is to keep everything in Docker. So on the Beelink, everything lives in Docker; on the Synology, everything is a little bit more of a mess. I do have a big 20 terabyte external drive that I sometimes plug into the Synology, when I think about it, to do a backup of config and the important volumes in there, the stuff that I want to make sure I don't lose. So wait, is the Synology running any services at all, or is it just storage? The Synology has a couple of really light things, like my Ubiquiti console runs on the Synology because
that's super light. I want to say I have a couple of other small things, but nothing big. I wouldn't run a game server on it, because when I tried that, it made the entire thing slow as hell. Yep. I have the option of putting a little SSD in there to serve as a cache, and I haven't bothered, because unlike you, it's more of a dumping ground for me than a place I work off of. Like I said, I used to save the images and the video assets and stuff that I use in my OBS profiles there, and I found that running that over gigabit that was shared with everybody else in the house made OBS really, really wobbly in a way that I didn't like. Yeah, I've found over time, actually, and I lack the vocabulary here, there's just some sort of file system arcana at the very low level that just doesn't work well over a network share. Yeah. I finally gave up and stopped editing Audacity projects directly off of the NAS, because it would just seize up at weird random times. And when you dig into that stuff, you find, again, it's such low-level file system stuff that I have no idea what I'm even reading about at that point. It's like, oh, this doesn't support this type of indexing, and blah blah blah, and that's why this application doesn't like that. So sometimes you have to throw in the towel and, well, try not to work off of the server for literally everything. For me it was: this is making things unpredictable and weird and annoying me, and I don't like that, so I stopped doing it. Yeah. So the Synology, like I said, is basically just a bucket; there are a couple of really lightweight things running there. I thought about actually putting a Pi-hole image on it, because that would be easy to do. It's pretty light, it doesn't really take a lot of RAM or anything, and that would give me a little bit more redundancy on that, which would be nice. I think that's
that's worth doing i think i think there's like ways to unify pihole configs across multiple nodes now yeah they're just text files so it'd be easy to move them across and i don't really touch it very much very often anymore i just have to update the gravity every once in a while um so i added uh the b link is the is where the magic happens for me and that's the like i said it's the n5105 i have my strategy there is to keep everything in docker and keep all of the data that's dynamic on the server on the, on the Synology so that I only have to back up the Synology and like the Docker configurations for the, for the different, for the five or six images that are on there. And that has worked. It has been unbelievably robust since I, since I set it up probably almost four years ago at this point. What are you using for the Docker configuration? Are you doing Docker compose? Well, so, you know, I've, this, this is a journey of me learning how to use Docker. So in In the beginning, I used Docker Compose and then I switched to Docker command lines. And then I installed Portainer a few years ago and just imported the existing Docker containers that are running into there and added the new ones through Portainer. So, you know, it's a mishmash, not the best, but it works. That's fun. The hard part of this was getting Docker, like there's two layers of abstraction between the permissions on the Docker containers. For that, like, so for example, when I'm adding my Plex files, my media files to the Plex library. I have to have a different, getting the permissions right inside the Docker container and inside the host for the Docker and on the Synology is always wonky and kind of a pain in the ass. Yeah, I can see that. And it usually involves some real bullshit, like making sure the user IDs on all three machines are the same. And when you do that, it magically works. And I just kind of was like, okay, that's good. Not going to think about this anymore. 
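The "make the user IDs match on all three machines" fix described above can be sketched roughly like this. This is a hedged illustration, not the actual setup from the show: the UID/GID of 1000, the mount paths, and the container name are assumptions, and the `docker run` invocation (using the PUID/PGID convention that linuxserver.io-style images read) is shown only as a comment. The only live code is a trivial pre-flight check, since NFS compares numeric IDs, not user names.

```shell
# Sketch: align one numeric UID/GID across the NAS export, the Docker host,
# and the container. Paths and IDs below are assumptions for illustration.
#
#   docker run -d --name plex \
#     -e PUID=1000 -e PGID=1000 \          # container-side user mapping
#     -v /mnt/nas/media:/media:ro \        # media share, read-only
#     -v /mnt/nas/plex-config:/config \    # database/cache, needs write
#     lscr.io/linuxserver/plex:latest
#
# NFS only cares about numeric IDs, so the sanity check is just equality:
ids_match() { [ "$1" -eq "$2" ]; }

# Compare the current host user against the UID you plan to hand the container.
ids_match "$(id -u)" "$(id -u)" && echo "UIDs agree"
```

If `ids_match` fails for the ID you planned to use in the container, that is the mismatch that shows up as mysterious permission errors on the share.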
That's all going over NFS? You said that's going over NFS.

Yeah. The other thing about that is that most of that access is read-only. Plex only has read-only access to the media collection, except for the one network share where the Plex database is stored. That one does require write access, for obvious reasons, both for the cache and for updating the database. And if I recall, I did some real crimes to make that work, but I probably don't want to talk about that too much.

I also run some game servers, so I have a Linux game server host app there that lets you spin up Satisfactory and Counter-Strike and Quake and stuff like that.

Okay, we've talked about that a little bit before. Yeah, I've used that before. It works well.

It's fine. It's not as modern as... isn't there Pterodactyl, I think, and another one people like as well? I forget what it's called. I haven't tried Pterodactyl since it was relatively new. It was a little bit heavy for that machine when I tried it; it felt slow.

It's got a nice, elaborate web UI and stuff, right? LinuxGSM is pretty old school. It's basically just a bunch of tmux sessions and bash scripts bolting everything together. There's nothing fancy at all there.

Yeah. On newer games it behaves differently; setting up a Valheim server or a Satisfactory server is a little weirder in there. And then I use a Minecraft server manager, because the kiddo does a fair amount of Minecrafting. Let's see, I can't remember what it's called, and my session has timed out, so let me fix that. But it basically gives me an overview: I can make a new server really easily, I can control who has access to it, all that stuff, without having to even start Minecraft. So I can do it remotely, or anywhere, which is quite nice.

Is that for Java or Bedrock?

It's for Java. We don't fool with Bedrock. Bedrock is for little kids.

Wait, really?

Yeah, because on Java you can do mods. On Bedrock you can't really do mods.

Bedrock's the one with the ray tracing, though.

But Bedrock's the one that runs on the Switch, too.

Yeah, I know. For babies.

Yeah. It's called Crafty Controller. And it lets you... you can also run Bedrock servers inside that as well now, I think.

Man, so you've really run a Satisfactory server on that B-Link?

I'm going to tell you: it was great for one player.

Okay. Yes, I've heard the memory scaling requirements with the size of your Satisfactory world are ferocious.

On that thing, I think the moment I added a second player it would probably have gone straight to hell.

Yeah. Unless you were in exactly the same chunk of the world.

Yeah. I think I read they recommend like 24 gigs of RAM for a pretty good-sized world, or something like that. I was just like, nope.

If I recall, when I set that up, I set it up on my desktop machine and then migrated the configuration over to the little server. And I didn't play it for super duper long. Like I said, Valheim ran great. That was really good. I remember people got on that one a couple of different times.

Man, that's not bad. Valheim I remember being pretty modest, like two to four gigabytes total or something like that. It was light.

I gave it eight gigs of RAM, like half the RAM on the little server, and it was fine.

I bet that if you had hit Plex at the same time, you would have noticed, though.

Sure. And the way those Minecraft servers are set up is that when I want to move them to the latest version, when the kiddo is ready to switch to a new one, because they update Minecraft pretty much once a month, it seems like, all I have to do is restart it in Portainer and it just downloads the latest binary. And if I want to use one of the older ones, I can go in and manually change it, but otherwise it's pretty good.

And then I also have Jellyfin and some other stuff running there. I don't actually run Jellyfin all the time, because it was too heavy for this machine, I think. The experience I had running the server on my desktop PC with 16 cores and 64 gigs of RAM, and the experience I had running the Jellyfin server on the 16-gigabyte, four-core machine, were wildly different, and pretty bad.

How long has it been since you ran Jellyfin?

Midway through last year, probably.

Oh gosh, I'm trying to remember when it was. They put out an update, I think it was late last year. They basically did one of those classic open source moves of: hey, this project is pretty old and has a huge amount of tech debt, and we have not had enough contributors to deal with it until now, so we're finally biting the bullet and re-architecting a ton of the under-the-hood stuff that's been holding this project back. So they had this huge release.

Did we talk about it on here?

No. The upgrade notes were crazy. They were like, make sure you back up your database before this, because if the upgrade fails, it will hose your library.

That's bad.

And: depending on the complexity of your library, this could take several hours, so you might want to start the upgrade before you go to bed and let it run overnight, that type of stuff. Mine took like 30 seconds, because...

Again, you've got a real computer.

Well, yeah, and I just don't have a gigantic library. This was for people who had a lot of stuff in there.
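The "back up your database before the big upgrade" advice from those release notes boils down to archiving Jellyfin's config directory while the server is stopped. A minimal sketch, with heavy assumptions: the `/srv/jellyfin/config` path and the `jellyfin` container name are illustrative placeholders (the real data directory depends on how Jellyfin was installed), and the privileged docker commands are left as comments. Stopping the server first matters because the library database is SQLite and shouldn't be copied mid-write.

```shell
# Hypothetical paths; override SRC/DEST for your own layout.
SRC="${SRC:-/srv/jellyfin/config}"
DEST="${DEST:-jellyfin-backup-$(date +%Y%m%d-%H%M%S).tar.gz}"

backup() {
  #   docker stop jellyfin      # quiesce the SQLite database first
  # Archive the whole config directory (database, metadata, settings).
  tar -czf "$DEST" -C "$(dirname "$SRC")" "$(basename "$SRC")" &&
    echo "wrote $DEST"
  #   docker start jellyfin
}

# Run it by hand after stopping the container:
#   backup
```

If the upgrade then hoses the library, restoring is just extracting the tarball back over the config directory before restarting the old image.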
Well, I have 10 or 15 terabytes of DVDs and Blu-rays.

That's good, yes. I've seen your Plex; you've got a lot of stuff. The point is, I can't say for sure that the upgrades they're making help with the system requirements necessarily, but they might.

I mean, the problem I had, honestly, was that the initial media scan was going to take like four days, and pulling all the metadata and stuff down was bad and slow.

It was kind of a full database redesign, I believe, is what they did, effectively. They modernized the database, they claim. So maybe not right away, but a couple more versions from now, it might be worth taking another look.

I'll take a look. I like it. I mean, I paid for a lifetime Plex Pass a decade ago, so...

Yeah. Okay, if I had a Plex Pass, I probably would not use Jellyfin as much either.

And then I have a Perforce server running, because you can run a five-seat Perforce server without paying them anything, and I use it to keep small projects I'm working on. It hooks into the NAS, so it has redundant data storage. Like everything else, it runs in a Docker container, and the data is actually stored on the NAS.

Then in addition to that, I have the Home Assistant Yellow, which we did an episode about last year. I'm not going to get super into that, but it lives in a closet in the center of my house, powered over Ethernet, which remains, in 2026, magical to me. That controls all the home automation stuff, all the lights and all that. And if you're looking for the thing that I spend time futzing with, it's probably that, because I can sit on my laptop in the living room and do it really easily.

Sure.

I also have the Pi-hole. I'm down to the one, because of the stuff we talked about a couple weeks ago on one of the episodes, the catch-up episode, I think. And then I have some upcoming stuff that I want to get into. I don't know that I'm going to really fool with adding anything new to that existing B-Link machine right now, just because it's on borrowed time until the Wildcat Lake machines come out. But I do want to get Snapper running on it, just so I have some upgrade insurance. It's always a little hinky. The scary thing about the LTS distros is that they update so infrequently that each upgrade feels like it's rife with peril. You know, I read the notes, I do what they say, but it's been long enough since I set this machine up that if something breaks, it's going to be a real pain in the ass to reverse engineer.

I actually hosed my Debian bookworm install on that server upgrading to trixie, and ended up just wiping it and doing a fresh trixie install. But I had a snapshot. And first of all, it wasn't the upgrade itself; it was their upgrade instructions that I fucked up. I ran one of their recommended arcane apt commands, the one that cleans out unused, obsolete packages or something like that, and it broke the machine's ability to boot.

Yep, that'll happen.

Anyway, I was able to restore config from the snapshot pretty easily, but the point is: OS upgrades, distro upgrades, are not without risk.

Yeah, they're scary. The other thing is, I want to add a reverse proxy, because right now all of the sub-services that run in Docker on that B-Link are on port 1539 or 1537 or 15548...

Same IP.

Yeah, all on the same IP, and it's fairly frustrating.

I get it. I know what you mean. But I can also tell you right now that Jellyfin's web port is 8096 and OwnTone's is 3689, because I once thought as you did. I tried to run Nginx as a reverse proxy for a little while to solve this exact problem.

Yeah. And it went great, right? And it was awesome. You do it every day.

It was so difficult to get things working right that I just gave up on it and went back to using port numbers for everything.

So maybe I need the Post-it with all the port numbers stuck on the side of my monitor.

That said, Nginx is the manual, difficult way to do it.

Oh, you did it the hard way? I'm shocked.

Yes, yes. Traefik is another one. You're familiar with Traefik, right? With the K? T-R-A-E-F-I-K is another reverse proxy that people on the Discord really like, and I believe that one handles a lot of this service-specific proxy configuration much more seamlessly for you.

Okay, I'll give that a try.

Especially if you're using Docker.

I want the one for babies. I want the one that a baby can do.

So if you're running Traefik and Docker on the same machine, Traefik can actually just plug straight into your Docker configuration and set up the proxy stuff without you having to do much of anything.

Oh.

But I haven't messed with that.

I would be running it on a separate Raspberry Pi, so I don't know if that would have helped me or not. But the point is, there are others. And Caddy is another one, since you're writing things down.

I'm just taking notes. Two Ds or one D?

Two Ds. C-A-D-D-Y. Caddy.

Caddy and Traefik are both apparently quite a bit easier than Nginx to configure.

It wasn't the reverse proxying itself that I had a problem with. That part worked fine. The problem is that most modern web pages, the web interfaces for these services, are way more complex than just proxying a single URL to another address. There's all this WebSocket business going on that I barely understand, so every service is slightly different in the configuration you need to pass everything through. The page would load through the proxy just fine, but, say, the top UI bar would be missing, or some control wouldn't work or wouldn't show up.
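The "Traefik plugs straight into Docker" setup mentioned above looks roughly like the following. This is a hedged sketch: Traefik watches the Docker socket and builds routes from container labels, so each service advertises a hostname instead of a memorized port. Jellyfin's 8096 comes from the conversation; the `jellyfin.lan` hostname, the image versions, and the file name are assumptions. The script only generates a compose file; nothing is deployed.

```shell
# Write a minimal docker-compose sketch demonstrating Traefik's Docker
# provider. The quoted heredoc keeps the backticks in Host(`...`) literal.
cat > compose-sketch.yml <<'EOF'
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # Read-only access to the Docker socket is how Traefik discovers
      # the other containers and their labels.
      - /var/run/docker.sock:/var/run/docker.sock:ro

  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - traefik.enable=true
      # Route http://jellyfin.lan/ to this container's port 8096.
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.lan`)
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
EOF
echo "wrote compose-sketch.yml"
```

Brought up with `docker compose up -d`, Traefik would then answer for `jellyfin.lan` on port 80. WebSocket upgrades generally pass through Traefik's proxying without the extra per-service configuration Nginx needs, which is part of why it gets recommended for exactly the problem described here.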
You'd be like, hey, there's a thing in the screenshots on the website that shows you how to do it, and it just doesn't exist on mine.

Yes. They basically proxied, but were kind of unusable. It was a pain.

Well, and that's pretty much it for me. That's what I've got. I am excited about replacing that little B-Link so I can run some heftier stuff. I kind of wish there was a six- or eight-core low-power equivalent. I keep looking at those Minisforum boards that use mobile processors from laptops; they do the thing where they solder mobile processors onto ITX motherboards or whatever. Nobody ever did that with Lunar Lake, did they?

Lunar Lake is kind of hard to get. You can get boards with Lunar Lake, but they're pretty expensive.

Yeah. I mean, that platform is kind of more abundant now anyway, right? And honestly, where 32 gigs is fine on my laptop, I think I would want more. I dream of a world where RAM is inexpensive again, where I can put more RAM than I need in a machine. So yeah, maybe Phantom Lake, though.

Panther Lake.

Panther Lake, yes.

The last leaks about Wildcat Lake I saw say there's kind of a second-gen refresh that is already showing up in... whatever these CPU-testing databases are on the internet, where new SKUs of CPU just show up randomly. The point is, there are CPUs, or SKUs, at that tier that have way more cores. Kind of exactly what you're talking about, basically at the Wildcat Lake tier, but with, I forget what it was, like six P-cores instead of two or something like that.

They showed one with two P-cores and four LPE cores that I think we talked about the other day, at CES. The two-and-four is the sort of classic configuration that has been known for a while. Hang on, I'm actually clicking through right now. I can just tell you: four P-cores and four LPE cores, it looks like, is maybe what the refreshed design is shaping up to be. Anyway.

Yeah, I'd take that. I would love to see them broaden that product line a little bit, because these little mini PC boxes are rad and do all kinds of cool stuff now.

The Raspberry Pis are neat, and when you talk about a Pi 5 or something, you're talking about an actually capable computer. But they are ARM, and then you have to deal with ARM packages, which is doable, but not everything is compiled for ARM, and I don't want to compile everything that I run, especially when I'm running it on a Pi. I especially don't want to have to compile it on the Pi.

I mean, the bigger thing for me at this point is that the Pis just keep getting more expensive. The price range you're getting into now, it's getting to the point where one of these x86 boxes would just make more sense. You're getting way more expandability and power for not all that much more money. I think I found that B-Link for like $125 on sale at some point.

And that's Pi 5 territory. A fully kitted-out Pi 5 is probably going to cost you more than that at this point.

So anyway, that's it. Hopefully you all enjoyed the home lab check-in on where we're at. I'd love to know what you all are running, and what you think we're doing right. If there's stuff you think we shouldn't be doing, or stuff you think we would benefit from knowing, please post it in the Discord, or send an email to techpod@content.town. And if you aren't in the Discord, you can get there by subscribing to the Patreon. We're a listener-supported show, which means we're only here because of you, the listeners.
Very true. You can go to patreon.com/techpod, where for as little as five dollars a month... that's, I don't know about you, but at the Starbucks by my house, that's less than one cup of coffee now.

Inflation has come to coffee in Pacifica.

Hey, their prices are going up. Ours are not.

Yeah, that's true. You get access to the Discord, and you get access to the monthly patron-exclusive episodes, where we just kind of chat about what's going on, and sometimes cover topics that are too small for the regular show. And the Discord is full of bright and clever people doing weird projects and fun projects and stuff like that. I just got a package for a project that has been talked about by multiple people in the Discord over the last couple of years. I'm not going to say what it is. I'm just going to leave it hanging.

I really want to know.

I know. Well, there's only one way to find out: you have to go to patreon.com/techpod. Well, you don't have to, but everybody else does. We'll talk about it in an upcoming episode. But thanks, everybody, for supporting us, as always.

Thank you. A very special thank you to our executive producer tier patrons, including Jason Lee, Felicitous Rips, Andrew Slosky, Jordan Lippet, Bunny Zero, David Allen, James Kamek, and Pantheon, makers of the HS3 high-speed 3D printer. You know, I need a 3D printer for this upcoming project, so I might have to reach out to Pantheon, makers of the HS3 high-speed 3D printer, and see if they can print something for me.

I think you might have an avenue for doing so.

Yeah, if only I knew.

Thanks, everybody, for supporting the show. We really do appreciate you, and we hope you have a lovely, lovely week. We will be back next week with another edition of the TechPod. I'll see you then, Brad. And as always, please consider the environment before you print this podcast.