The Cloud Pod

333: The Cloud Pod Goes Nano Banana

63 min
Dec 10, 2025
Summary

Episode 333 covers non-AWS cloud announcements from Google Cloud, Azure, and Anthropic ahead of AWS re:Invent 2025. The hosts discuss major AI model releases, infrastructure improvements, and cross-cloud connectivity solutions while deferring AWS coverage to next week.

Insights
  • Cross-cloud connectivity is becoming standardized and abstracted, reducing weeks of manual setup to minutes through managed services like AWS-Google Cloud Interconnect
  • AI model pricing is becoming more competitive and accessible, with Anthropic's Opus 4.5 achieving 66% cost reduction while maintaining frontier performance
  • Enterprise AI governance is now table-stakes, with platforms like Azure Foundry and Google's agent analytics addressing compliance and cost visibility concerns
  • Image generation quality improvements now enable iterative editing without complete regeneration, a critical UX improvement for production workflows
  • Cloud providers are rapidly closing feature gaps (Azure NAT Gateway v2 multi-AZ, scheduled tasks) that competitors solved years ago, indicating market consolidation pressure
Trends
  • AI model commoditization: Pricing competition intensifying as Claude, Gemini, and GPT models converge on capability while diverging on cost
  • Multi-cloud as operational reality: AWS-Google Cloud integration signals acceptance that enterprises will use multiple providers simultaneously
  • Agent-to-agent communication: All three cloud providers now emphasizing AI agent orchestration and inter-agent protocols as core platform features
  • Observability-first AI: Built-in instrumentation and analytics (BigQuery Agent Analytics, model routers) becoming standard rather than add-ons
  • Infrastructure sovereignty: Regional data residency (Turkey region, subsea cable diversity) driving investment in geographically distributed cloud infrastructure
  • Enterprise AI governance consolidation: Unified platforms (Foundry, Vertex AI) replacing point solutions for compliance, cost, and security management
  • Subsea cable redundancy: Congestion on existing routes driving investment in alternative paths and diverse landing points for resilience
  • Cost optimization automation: Scheduled VM shutdown, model routing, and commitment insurance reflecting enterprise focus on cloud spend control
  • Open standards for cloud interoperability: GitHub-published API specs for cross-cloud connectivity indicating a shift away from proprietary lock-in
  • Security-first networking: TLS/TCP termination, mutual TLS, and threat intelligence feeds becoming baseline expectations in load balancers
Topics
  • Google Gemini 3 Pro image editing and text correction capabilities
  • Anthropic Claude Opus 4.5 pricing and performance benchmarks
  • Azure Application Gateway TLS/TCP protocol termination
  • AWS-Google Cloud cross-cloud networking and interconnect
  • Google Cloud Turkey region and data sovereignty
  • BigQuery Agent Analytics for AI observability
  • Azure Foundry model router and AI orchestration
  • Azure DNS threat intelligence feed integration
  • Azure NAT Gateway v2 multi-zone redundancy
  • Subsea cable infrastructure and congestion management
  • OpenAI competitive pressure from Gemini adoption
  • Azure API Management agent-to-agent API support
  • Azure App Service custom error pages
  • Azure scheduled actions for VM lifecycle management
  • Microsoft Foundry general availability and governance
Companies
Google Cloud
Announced Gemini 3 Pro, BigQuery Agent Analytics, Turkey region, subsea cables, and Cortex AI integrations
Anthropic
Released Claude Opus 4.5 with 66% cost reduction and improved performance on software engineering benchmarks
Microsoft Azure
Launched multiple features including NAT Gateway v2, DNS threat intelligence, scheduled actions, and Foundry model ro...
OpenAI
Declared internal code red after Gemini gained 200M users in 3 months and outperformed ChatGPT on benchmarks
Amazon Web Services
Re:Invent conference happening during episode; coverage deferred to next week due to volume of announcements
Snowflake
Now offering Claude Opus 4.5 and Sonnet 4.5 models in general availability on Snowflake Cortex AI
Salesforce
CEO Marc Benioff publicly switched from ChatGPT to Gemini 3; using AWS-Google Cloud interconnect for Data 360 platform
Adobe
Firefly integrated with Google's Gemini 3 Pro for production-grade creativity workflows
Figma
Enterprise integration with Gemini 3 Pro for design and product visualization workflows
Canva
Enterprise integration with Gemini 3 Pro for marketing asset generation at scale
Shopify
Using Gemini 3 Pro for product visualization and marketing asset generation
Wayfair
Using Gemini 3 Pro for product visualization and marketing asset generation
Klarna
Using Gemini 3 Pro for product visualization and marketing asset generation
Turkish Airlines
Committed customer for Google Cloud Turkey region for flight operations modernization
Garanti BBVA
Financial services customer committed to Google Cloud Turkey region for core banking modernization
Yapı Kredi
Financial services customer committed to Google Cloud Turkey region for core banking systems
NORAD
Partnering with OpenAI to add AI-powered holiday tools to Santa tracking tradition
People
Sam Altman
OpenAI CEO issued internal code red memo directing company to refocus on ChatGPT improvements
Marc Benioff
Salesforce CEO publicly announced switching from ChatGPT to Google Gemini 3 after three years
Sundar Pichai
Google CEO mentioned in context of 2022 code red response to ChatGPT's rapid adoption
Quotes
"I've sort of felt like this has been the case the last years that OpenAI is sort of been losing its luster. I mean, at least for me, I've been on Claude now for over a year and I just love it."
Host (discussing AI model preferences), mid-episode during OpenAI discussion
"I find myself... I started on ChatGPT and tried to use it after adopting Claude. And then I try to go back every once in a while, especially when there's a new model, and I end up going back to one of the Anthropic models."
Host, during competitive AI model discussion
"The biggest thing that's most important about this, you guys, is that when NanoBanana messes up the text, which it doesn't do as often, you can now edit it and actually edit it properly without generating a whole completely different image."
Host, discussing Gemini 3 Pro image editing
"I don't control the output... I have this indeterminate thing in the middle. It's going to output something that I can't predict or know what it's going to be, and you get to pay for that."
Host, discussing token pricing unpredictability
"This is what I hate about it, is you have to pay for security, it's not built in."
Host, discussing Azure security feature costs
Full Transcript
Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure. We are your hosts, Justin, Jonathan, Ryan, and Matthew. Episode 333, recorded for December 2nd, 2025: The Cloud Pod goes nano banana. Good evening, Ryan and Matt. How are you guys doing? Doing awesome, post-Thanksgiving fun. Yeah, who doesn't love post-Thanksgiving fun? I mean, re:Invent's happening underneath us right now, and it's been a crazy week of announcements we'll talk about next week. But then it's been just other craziness of the world. It's been weird. I don't know, do you guys feel like vibes are off in the world since Thanksgiving? I don't know, I definitely had bad things to deal with this week or whatever; the world's just in a weird vibe. Yeah, no, I concur. There's the holiday rush, right, and then there's the wrapping-up-everything-by-the-end-of-the-year kind of press, and then there's just kind of this general malaise. I just feel like everyone's blah right now in life. Yeah, maybe Mercury's in retrograde, I don't know. Stupid rocks floating in space. Yeah, I don't know, it's just been weird. I mean, I came back from Thanksgiving and I just had a hard time getting into gear, not motivated this week. It's a tough one. And I haven't been enjoying wanting to watch all that stuff; I've yet to watch a single keynote. So yeah, I'm only behind by a few days. Yeah, I'm in the same place you're in, so I've got homework to do before we record next week. Thank god we didn't decide to just push the recording out to the Friday of the week of re:Invent. We've done it every year before, and we're always miserable, because we're like, oh, it's a disaster, we don't have time to research anything, and so we're just like, they announced this thing, and we're like, what does it do? I don't know. So we thought we'll
just cover all non-AWS news today. That way we get that off the plate for next week, and then we'll just hit all the Amazon news. They had a very busy pre-Invent and a very busy re:Invent, so there's a lot to cover next week in a single show for them. And then hopefully after that everyone will start going on vacation for Christmas. Of course, now you're telling everyone in advance we'll have no excuse not to do the homework for the next show. I mean, you don't do the homework anyway, but I'll at least do the homework, so I can carry you. It's fine, it's just a normal week here at The Cloud Pod. I guess, don't tell anyone. One of the podcasts has a song at the end of their show, like a jingle, and one of the lines is like, John didn't do any research because Marco and... god, what's that guy's name? It's escaped my mind... wouldn't let him. It's just funny. Oh well. Anyways, all right, let's get right into AI Is How ML Makes Money. Because we wanted to use the show title, we had to keep this one in there; we usually don't have much to say about it. Google is launching Nano Banana Pro, which is Gemini 3 Pro image editing, in general availability on Vertex AI and Google Workspace, with Gemini Enterprise support coming very soon. Gemini Enterprise is what we called Agentspace three weeks ago, and now it's Enterprise, just because branding is hard. The model supports up to 14 reference images for style consistency and generates 4K resolution outputs with multilingual text rendering capabilities. And the model includes Google Search grounding for factual accuracy in generated infographics and diagrams, plus built-in SynthID watermarking for transparency. Copyright indemnification will be available at general availability under Google's shared responsibility framework.
Enterprise integrations are live with Adobe Firefly, Photoshop, Canva, and Figma, enabling production-grade creative workflows, and major retailers including Klarna, Shopify, and Wayfair report using the model for product visualization and marketing asset generation at scale. Developers can access Nano Banana Pro through Vertex AI with provisioned throughput and pay-as-you-go pricing options, plus advanced safety filters. So yeah, the biggest thing that's most important about this, you guys, is that when Nano Banana messes up the text, which it doesn't do as often, you can now edit it and actually edit it properly without generating a whole, completely different image. Because that used to happen to me all the time. It's like, oh, I love this image, but it screwed up the spelling. I would try, please re-spell that. And it's like, no, new image. And then it's wrong in a different way. Yeah. That's been my experience. And I think maybe I've just been using regular Nano Banana, if there's a difference between Pro and non-Pro. Really, it's higher resolution, the ability to do some of the 4K stuff. Oh, okay.
But yeah, Nano Banana has been out for a few weeks now, or a month maybe, and it's so much better. But even ChatGPT image generation has dramatically improved in the last six months. Yeah, I really wish... and I finally asked Claude enough times where I got an answer for it. I noticed that in heavy revisions of images, over and over and over again, the fidelity would decrease, and I finally got an answer out of it, which is: it is literally doing a copy, making an image from a copy of a copy of a copy when it's doing those edits. So it was kind of an interesting behind-the-scenes look at the AI model itself. Was that a video, or just a blog post? No, it's just me asking the stupid bot, why do you suck? And I always wondered how that worked, too. Like, earlier today I was telling someone that, yeah, his memory is that of a goldfish, and he has very distinctive facial features. I was like, I want a goldfish with his facial features. So I just put his photo in and said, I want a goldfish with these facial features, and I got it. And so then we all laughed, and it was funny, because I troll all my employees. It's what I do. All right, moving on from images. Claude Opus 4.5 is now generally available across Anthropic's API, apps, and all three major cloud platforms, at $5 per million input tokens and $25 per million output tokens, representing a substantial price reduction that makes Opus-level capabilities more accessible. Developers can access it via the claude-opus-4-5-20251101 model identifier, because that rolls off the tongue, and the model achieves state-of-the-art performance on software engineering benchmarks, scoring higher than any human candidate on Anthropic's internal performance engineering exam within a two-hour time limit.
On SWE-bench Verified, it matches Sonnet 4.5's best score using 76% fewer output tokens at medium effort, and exceeds it by 4.3 points at higher effort while still using 48% fewer tokens. Anthropic introduces a new effort parameter in the API that lets developers control the trade-off between speed and capability, allowing optimization for either minimal time and cost or maximum performance depending on the task requirements, and this combines with new context management and memory capabilities to boost performance on agentic tasks by nearly 15 percent in their testing. Claude Code gains a Plan Mode that builds user-editable plan.md files before execution and is now available in the desktop app for running multiple parallel sessions, and the consumer app removed message limits for Opus 4.5 through automatic context summarization. Claude for Chrome and Claude for Excel expand to all Max, Team, and Enterprise users. The model demonstrates improved robustness against prompt injection attacks compared to other frontier models, and is described as the most robustly aligned model Anthropic has released to date. Congratulations. I was playing with this, and the way I play with Claude today is I use Bedrock, because I got tired of paying Claude and everyone else for things every day. When I tried to use the last Opus, I racked up a $600 bill in one day. When I tried it yesterday, and then forgot about it, three days later I was like, oh no, I forgot about it in Opus mode. I went into the bill and it was only $75. So either I'm coding less, which is not true, or it is actually cheaper. So congratulations on that; it's actually the most important part of the whole announcement, the cheaper input and output tokens. I do sort of find it interesting, though, that you pay for both the input and the output, but I only control the input. I don't control the output; Claude can... well, all the models. Yeah, I know it is, but it's always time
before... I mean, I have this indeterminate thing in the middle. It's going to output something that I can't predict or know what it's going to be, and you get to pay for that. So it's just sort of funny. Now do budgeting for my day job: how much does it output? We don't know. How much is it going to charge you? We don't know. Your CFO really loves you when you tell them that. I mean, at least at a larger scale you can sort of do trending and bucket it, but I know just from trying to manage my own API balance, or token balance, it's impossible. I feel like I'm constantly just doling it out, like $20 at a time, just because I'm so afraid I'm going to... like, right when I got my $600 bill. Well, and that's why... I was doing that too, and I was like, every day I'm getting like four $20 transactions on my credit card. It was silly. But then you do need to set up billing alerts if you're going to do it in the cloud. Yep, definitely. So I'm definitely pleased that it's significantly cheaper to use, and it feels a lot more efficient. It's still not as cheap as Sonnet 4.5; Opus is still a little bit more expensive than that. But I wish they would get a nice model router so I could just swap between them, and it would make the decision which one's better for this coding question I have, like with the Anthropic API. Yeah, I wish Claude Code would just kind of choose based on what it knows, like, oh, I should use this one or that one based on the complexity of this code I'm trying to analyze. Well, guess what, guys? What's that? There's now Claude... Snowflake also has Claude. So if you want to use Claude Opus 4.5 or Claude Sonnet 4.5 in general availability on Snowflake Cortex AI, it is available to you today. So, you're welcome. So now I know what we're going to do for the rest of the show. For every Claude. It's better than the blow horn. Just say it is. It is much better.
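The per-token pricing discussed above is easy to sanity-check. Here is a minimal sketch of a cost calculator using the $5/$25 per-million-token list prices from the announcement; the token counts are illustrative, and as the hosts note, only the input side is really under the caller's control.

```python
# Illustrative cost estimator for Claude Opus 4.5's published list pricing:
# $5 per million input tokens, $25 per million output tokens.
# The example token counts below are made up for demonstration.

OPUS_45_INPUT_PER_M = 5.00    # dollars per 1M input tokens
OPUS_45_OUTPUT_PER_M = 25.00  # dollars per 1M output tokens

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at Opus 4.5 list pricing."""
    return (input_tokens / 1_000_000) * OPUS_45_INPUT_PER_M \
         + (output_tokens / 1_000_000) * OPUS_45_OUTPUT_PER_M

# A 10k-token prompt that produces 40k output tokens:
cost = session_cost(10_000, 40_000)
print(round(cost, 2))  # 1.05
```

The asymmetry is the point the hosts are making: output tokens cost 5x input tokens here, and the model, not the caller, decides how many of them to emit, which is why billing alerts matter.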
I did figure out how to turn that down. So now, when we do the blow horn, I can probably do it without blowing out all of our listeners' ears. I was like, oh, there's a slider. Who knew? I think you've known for a while; you've just decided not to. I think he's also learned that he could probably turn it up on us now, too. So be warned. You can do it either way. So, OpenAI apparently is declaring code red as Gemini has gained 200 million users in three months. This came from Sam Altman, who issued an internal code red memo to refocus the company on improving ChatGPT after Google's Gemini 3 model topped the LMArena leaderboard and gained 200 million users in three months. The directive delays planned features, including advertising integration, AI agents for health and shopping, and the Pulse personal assistant feature. Google's Gemini 3 model, released in mid-November, has outperformed ChatGPT on industry benchmark tests and attracted high-profile users like Salesforce CEO Marc Benioff, who publicly announced switching from ChatGPT after three years. The model's performance represents a significant shift in the competitive landscape since OpenAI's initial ChatGPT launch in December 2022. The situation mirrors December 2022, when Google declared its own code red after ChatGPT's rapid adoption, with CEO Sundar Pichai reassigning teams to develop the competing AI product that is now Gemini. OpenAI is implementing a daily call for teams responsible for ChatGPT improvements and encouraging temporary team transfers to address the competitive pressure, and the company's responses indicate that maintaining market leadership in conversational AI requires continuous iteration, even for an established product with a large user base. I mean, I've sort of felt like this has been the case the last few years, that OpenAI has sort of been losing its luster.
I mean, at least for me, I've been on Claude now for over a year and I just love it. And there are so many great features in Claude that I love now, like the fact that it can now look at all of our chat history and replicate how I write, which is super helpful, so it doesn't sound like Gemini, which still writes very AI-esque wording on everything it generates. And OpenAI has definitely some things I still use and still enjoy as well. I pay for all three of them because, why not? It's a tax write-off. I do compare them quite often, and even in my coding stuff, it's either Gemini, because of the big context window when something's really sticky, or it's Claude, and it's not OpenAI. But I do use OpenAI more for image generation than I use Gemini. Claude, if it had image generation, would be insane, I'm sure. And it just doesn't have it, because they don't care about that, which is fine, and I'm okay with that. Yeah, I find myself... I started on ChatGPT and tried to use it after adopting Claude. And then I try to go back every once in a while, especially when there's a new model, and I end up going back to one of the Anthropic models. So I can see why they're declaring this code red, because I do think they're struggling a little bit, and the other model providers are making up significant ground. I wonder how much of this is just going to be sort of a hot potato, where all of them are doing this back and forth over the years. Let's see. Yeah, I'm in the same boat as you. I feel like originally I used ChatGPT; now I pretty much de facto start anything I'm doing with Claude. Every now and then I'll pop over to Copilot at my day job, because that's our primary tool, and then I'm like, this isn't quite where I want to go. Though it does work a little bit better with scripting against Azure, weirdly.
They clearly added a little bit of special sauce in there. All right, well, let's move on to AWS. Oh wait, no, AWS next week, everybody. But there are lots of cool things happening there, lots of AI models. I know if we didn't care about predictions being in keynotes, there would definitely be some winners; I just don't know if those predictions happen to be in keynotes or not, so that'll be our debate for next week, I'm sure. I know one in particular Jonathan's unhappy about, related to Lambda, because they announced it pre-show; they did not announce it during the keynote. I thought he was already doing his victory dance on that. No, he doesn't get a point. It was announced on the Saturday before. Wasn't it just weird that it was Saturday? It was strange timing; I was like, why did this come out today? Oh, Google Cloud has a new region coming to Turkey, as part of a two billion dollar investment over 10 years. Really great timing on this, by the way: you announced your Turkey region on Turkey Day. Very nice. Well done, Google. I see what you did there. The region targets three key verticals already committed as customers: financial services, like Garanti BBVA and Yapı Kredi modernizing core banking systems; airlines, with Turkish Airlines improving flight operations; and the local presence addresses data residency requirements and provides low-latency access for organizations that need to keep data within their national borders.
Technical capabilities include standard Google Cloud services for data analytics AI and cybersecurity with data encryption at rest and in transit granular access controls and threat detection systems meeting international security standards The announcement emphasizes digital sovereignty as the primary driver with government officials highlighting the importance of local infrastructure for maintaining control over national data while accessing hyperscale cloud capabilities No pricing details, no exact timing when this will be launched officially, but I assume probably sometime in 2027. Two years from now. I was doing that math in my head. That took me way too long. I guess it's a year plus, but yeah. I mean, it depends on how fast. I think these typically take multiple years, but Yeah, it's typically, I mean, you assume they've already done ground prep and they've already started probably construction before they even announce it, you know, three to four months in advance. So then, you know, they announced this thing and then typically 18 months after that, they finally can stand up services. And then it also depends on government regulations, import taxes, you know, all kinds of things. You know, like KSA took longer, I think. The ability of buying RAM at this point. GPUs, you know, all kinds of things that are factors and being able to build data center at scale. 
So Google is launching BigQuery Agent Analytics, a new plugin for their Agent Development Kit that streams AI agent interaction data directly to BigQuery with a single line of code. The plugin captures metrics like latency, token consumption, tool usage, and user interactions in real time using the BigQuery Storage Write API, enabling developers to analyze agent performance and optimize costs without complex instrumentation. The integration allows developers to leverage BigQuery's advanced capabilities, including generative AI functions, vector search, and embedding generation, to perform sophisticated analysis on agent conversations. Teams can cluster similar interactions, identify failure patterns, and join agent data with business metrics like CSAT scores to measure real-world impact, going beyond basic operational metrics to quality analysis. The plugin includes three core components: an ADK plugin that requires minimal code changes, a predefined optimized BigQuery schema for storing interaction data, and low-cost streaming via the BigQuery Storage Write API; developers maintain full control over what data gets streamed and can customize pre-processing. Currently available in preview for ADK users, with support for other agent frameworks like LangGraph coming very soon. And pricing follows standard BigQuery costs for storage and queries, with the Storage Write API offering cost-effective real-time streaming compared to traditional batch loading methods. Yeah, this is an interesting model, providing both the schema and the already-instrumented integration. I feel like a lot of times with other types of development, you're sort of left to your own devices. So this is kind of a neat thing, because as you're developing agents, everyone is instrumenting these things in odd ways, and it's very difficult to compile the data in a way where you get usable queries out of it. So it's kind of an interesting concept. I like it. It does sound expensive.
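The episode doesn't show the plugin's actual schema, but the kind of per-interaction row it streams (latency, token counts, tool usage) and the rollups you would then run in BigQuery can be sketched locally. Everything below is a hypothetical illustration, not Google's real ADK plugin API or table schema.

```python
# Hypothetical sketch of agent-interaction telemetry like what BigQuery Agent
# Analytics streams, plus a rollup you would otherwise express as a GROUP BY
# query in BigQuery. Field names here are illustrative, not the real schema.
from dataclasses import dataclass

@dataclass
class AgentEvent:
    agent: str
    tool: str
    latency_ms: int
    input_tokens: int
    output_tokens: int

def avg_latency_by_tool(events: list[AgentEvent]) -> dict[str, float]:
    """Average latency per tool, like SELECT tool, AVG(latency_ms) ... GROUP BY tool."""
    totals: dict[str, tuple[int, int]] = {}
    for e in events:
        n, s = totals.get(e.tool, (0, 0))
        totals[e.tool] = (n + 1, s + e.latency_ms)
    return {tool: s / n for tool, (n, s) in totals.items()}

events = [
    AgentEvent("support-bot", "search", 120, 900, 150),
    AgentEvent("support-bot", "search", 80, 700, 120),
    AgentEvent("support-bot", "calendar", 200, 400, 60),
]
print(avg_latency_by_tool(events))  # {'search': 100.0, 'calendar': 200.0}
```

The value of a predefined schema is exactly this: once every agent emits the same fields, latency, cost, and failure-pattern queries become one-liners instead of per-team instrumentation projects.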
That's the only thing I would worry about a little bit: you're not in control over what you're consuming, and it's in BigQuery, which isn't cheap in terms of queries and storage. Yeah, who cares about this? None of this sounds cheap. AI, BigQuery, analytics... you name a word in this press release that is cheap, let me know. Yeah. It's all going to be expensive, but it just depends how your business uses it. And nothing is cheap anymore; that's the truth about this economy. Google is announcing the Thaila Link, a new subsea cable connecting Australia and Thailand via the Indian Ocean, taking a western route around the Sunda Strait to avoid congestion from existing cable paths. This cable extends the Interlink system from the Australia Connect initiative and will directly connect to Google's planned Thailand cloud region and data center. The project includes two new connectivity hubs, in Mandora, Western Australia, and in southern Thailand, providing diverse landing points away from existing cable concentrations in Perth and enabling cable switching, content caching, and co-location capabilities. And Google is partnering with AIS for the South Thailand hub to leverage existing infrastructure. The Thaila Link forms part of a broader Indian Ocean connectivity strategy, linking with previously announced hubs in the Maldives and Christmas Island to create redundant paths to Australia, Southeast Asia, Africa, and the Middle East. The infrastructure supports Thailand's digital economy transformation.
It's amazing, subsea cable congestion... that there can be so many cables there that there's congestion. I could get it for availability reasons, but the word congestion just sounds strange to me in that one. Yeah, it was a little strange bit of wording, but it's funny to think about, and I was instantly terrified, like, oh, we've already got enough subsea cables that we've got congestion problems. And looking at the picture, the route is pretty close to the last subsea cable they announced a few months ago. They said they're going to connect it to the Malaysia one, which was a couple months ago, I think. The Diavaru cable? They do both route through Christmas Island and then out from there. It does make sense. Very fun. I assume that a lot of the original cables that go through Perth in that part of the world were laid a long time ago, at the beginning of the internet, and maybe they're not a single strand, but a smaller amount of concurrency through them. You've maximized what you can do through that amount of fiber, and so now you need new ones to address congestion, is my guess. That's the only thing I can think of for what congestion could be. I mean, from looking at submarinecablemap.com, which is where I always go and completely get digressed when we have these conversations, there are a lot of freaking cables in that area of the world. There aren't a lot of landing points, right? But off Malaysia and Thailand and all that, there are a lot of cables that all kind of go through that one little area of the world. One dragged anchor and they're going to have problems. Which is, I guess, why they're trying to diversify this. Yeah, that's a risk anywhere in the world, unfortunately.
Yeah, but there look like a ton of cables there compared to most other places, which have a little bit of extra room. It's probably by necessity, right? You've got to go that way through the strait. Yeah, that's the closest path; otherwise you essentially have to go around, which is what this one's doing. It's going to go out to the Indian Ocean, then to Christmas Island, and kind of come down that way. Well, Claude Opus 4.5 is now officially on Vertex AI. Where's the clapping? Oh, there we go. In this particular announcement, they do want to point out that Vertex AI is a unified platform for deploying Claude with enterprise features, including global endpoints for reduced latency, provisioned throughput for dedicated capacity at fixed costs, and prompt caching with flexible time-to-live up to one hour. The platform integrates with Google's agent builder stack, including the open Agent Development Kit, Agent-to-Agent Protocol, and fully managed Agent Engine, for moving multi-step workflows from prototype to production. And so, yeah. There you go. Woo! Woo-hoo!
If you are excited to go to Las Vegas, and you're disappointed that you are not sitting there right now in a re:Invent session or dinner, or drinking, because it is a little bit later in the evening, you can go to Vegas in April of 2026 for Google Cloud Next, which will be April 22nd to the 24th. They moved it off my birthday weekend, which is nice and sad at the same time, because I was enjoying going to Vegas for my birthday and then going to the conference. That's okay. This represents the standard pricing for Google's flagship annual conference at $999, which is the early bird price, and this follows their record-breaking conference attendance in 2025. The conference will focus, of course, heavily on AI agent development and implementation, featuring interactive demos, hackathons, and workshops designed to help attendees build intelligent agents, and Next '26 will offer hands-on technical training through deep-dive sessions, keynotes, and practical labs aimed at developers and technical practitioners. The event serves as a main networking hub for cloud practitioners who love Google. Ryan, do you love Google? Justin, do you love Google? I like parts of Google; love is strong. I think with any cloud provider I always have a love-hate relationship. Amazon, I don't know that I loved either; I liked it greatly. I hate Azure. Loathe entirely. That's fine. I have a hate relationship with Azure, although now there are a couple of cracks in that facade, and I'm really starting to fall in love with some of their identity management, which is funny. Google has always had good IAM stuff in their Workspace product, though it's not great in the UI. They're terrible at UIs, just like Amazon is. But their APIs are so strong that you can kind of ignore their shitty UI. I mean, I'll have to take your word for it. I haven't tried it with Workspace.
But Azure seems to be the first — I mean, mostly because of a lot of the integration with Office 365 and Entra — they seem to be the first to really be leaning heavily into temporary access patterns and really promoting that in a way that makes it easier to adopt than the other cloud providers. So you've always been able to sort of assume a temporary role. Only if you pay for it. It is also very expensive, at like, what, $8 or $10 a person. It adds up real fast. Yeah. But it is something you get to reuse across Azure and Office 365. So yeah, it is sort of that. But this is what I hate about it: you have to pay for security; it's not built in. Yeah, that is true. Why do I have to pay 10x for Front Door in order to be able to hit a private endpoint, or to have a WAF on it? Just because you don't like security? I don't know. Yeah. Because it's expensive to provide, I'm guessing. Or because security teams are always tool-happy and willing to pay — one of those two. Yep. All right. VPC Flow Logs now support Cloud VPN tunnels and VLAN attachments for Cloud Interconnect and Cross-Cloud Interconnect, extending visibility beyond traditional VPC subnet traffic to hybrid and multi-cloud connections. This addresses a critical gap for organizations running cross-cloud network architectures, where the VPC lacked detailed telemetry on traffic flowing between Google Cloud, on-premises infrastructure, and other cloud providers. The feature provides five-tuple granularity logging — meaning source and destination IP, source and destination port, and protocol — with new gateway annotations that identify traffic direction and context through reporter and gateway object fields. And the Flow Analyzer integration eliminates the need for complex SQL queries, offering built-in analysis capabilities, including Gemini-powered natural language queries and in-context connectivity tests. 
Primary use cases include identifying elephant flows that can congest specific tunnels or attachments, auditing shared VPC bandwidth consumption by service projects, and troubleshooting connectivity issues by verifying whether traffic reaches the Google Cloud gateway at all. The feature is available now for both new and existing deployments through the console, CLI, API, and Terraform, with Flow Analyzer providing no-cost analysis of logs stored in Cloud Logging. And this capability is particularly relevant for financial services, healthcare, and enterprises with strict compliance requirements who need comprehensive audit trails across cloud and hybrid network traffic. Interesting, yeah, because the controls say that you have to have logging, not what the logging is. And so very frequently it is sort of turn it on and forget it. But I do think this is great — though they say the five-tuple granularity will help you measure congestion, and I don't see them actually producing any sort of bandwidth or request-size metrics. So it's an interesting thing, but it's at least better than the nothing we had before. So I'll take it. Kind of amazing that they didn't have some of this already, and that enterprises were okay with that, but I guarantee you every security and compliance team is telling their cloud team, hey, this is released, we need to enable this right now — which I'm sure is what Ryan is saying to his company. 
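To make the "elephant flow" idea concrete: the five-tuple grouping described above boils down to summing bytes per (source IP, destination IP, source port, destination port, protocol). A minimal sketch, assuming a made-up record layout — these field names are illustrative, not the actual VPC Flow Logs schema:

```python
# Hypothetical sketch: aggregate five-tuple flow records to find the
# "elephant flows" that dominate a tunnel's bandwidth.
from collections import defaultdict

def top_flows(records, limit=3):
    """Sum bytes per five-tuple and return the heaviest flows first."""
    totals = defaultdict(int)
    for r in records:
        key = (r["src_ip"], r["dst_ip"], r["src_port"], r["dst_port"], r["protocol"])
        totals[key] += r["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:limit]

# Invented sample data: two records of the same flow plus one small flow.
records = [
    {"src_ip": "10.0.0.5", "dst_ip": "172.16.0.9", "src_port": 443,
     "dst_port": 51000, "protocol": "TCP", "bytes": 900_000},
    {"src_ip": "10.0.0.5", "dst_ip": "172.16.0.9", "src_port": 443,
     "dst_port": 51000, "protocol": "TCP", "bytes": 1_100_000},
    {"src_ip": "10.0.0.7", "dst_ip": "172.16.0.2", "src_port": 53,
     "dst_port": 40001, "protocol": "UDP", "bytes": 4_000},
]
heaviest = top_flows(records)
```

In practice Flow Analyzer does this grouping for you, but this is the shape of the query it saves you from writing by hand.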
No, no, I'm not, because, like I said, I'm all for having visibility and being able to do forensic analysis on things, but I am not in the business of just maintaining all the logs everywhere since the beginning of time. And that's because you've dealt with the other side of ingesting those logs. Yeah, because I didn't always start in security. Yeah, exactly. And even now, you're much more involved in providing ETL pipelines and management and tiering of this data versus just sort of storing it and incurring costs, because it just ends up being a ton of data that doesn't really end up being all that usable. And so I think that's why big corporations and organizations have been okay without it — because you can sort of stitch together what you need by looking at both ends of the tunnel, not the tunnel itself. But, you know, we'll see. I think there are definitely areas where this is going to be useful and nice. There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple: Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. Google decided that they wanted to help out with some Amazon re:Invent previews, because they announced this last week: AWS and Google Cloud have jointly engineered a multi-cloud networking solution that eliminates the need for manual physical infrastructure setup between their platforms. 
Customers can now provision dedicated bandwidth and establish connectivity in minutes instead of weeks through either the cloud console or the API. The solution uses AWS Interconnect multicloud and Google Cloud Cross-Cloud Interconnect with quad redundancy across physically separate facilities and MACsec encryption between edge routers. Both providers publish open API specifications on GitHub for other cloud providers to adopt the same standard. Previously, connecting AWS to Google Cloud required customers to manually coordinate physical connections, equipment, and multiple teams over weeks or months. This new managed service abstracts away physical connectivity, network addressing, and routing policy complexity into a cloud-native experience. Salesforce is using this capability to connect their Data 360 platform across clouds using pre-built capacity pools and familiar AWS tooling. The collaboration represents a shift toward cloud provider interoperability through open standards rather than proprietary solutions. And I do want you guys to check the weather — do you see pigs flying or anything crazy? No, but it seems awfully cold in hell, so I'm surprised. Yeah, this is great, because this is definitely a pain point that I've had to endure before; trying to get cross-cloud connectivity has never been fun. No, and all the circuit connectivity and stuff is a nightmare to get set up — the vendors, you know, connecting to Equinix and all those things. And the big thing was, that's why Equinix is sort of selling the Cloud Exchange: the reality is they were hosting both sides anyway, so they could just cross-connect them, and now you don't even have to deal with that. It's just abstracted away from you, which is kind of nice. It's super nice, and that's definitely a huge advancement. 
Yeah, the other thing I've had to help people do is set up — there are providers where you essentially get like a 5-megabit or 10-megabit direct connect to their MPLS network, which then connected to all of the clouds. Again, it's just a level of complexity that wasn't needed, so it's exciting to see this actually coming out, and hopefully the clouds play a little bit nicer together. But watch your egress cost — that still hasn't changed. Totally. Yeah, that doesn't change. Doesn't change. It just made it easier to provision the faucet, but you're still going to pay through the nose for everything that goes through it. All right, moving on to Azure. Azure Application Gateway now supports TLS and TCP protocol termination at general availability, expanding it beyond its traditional HTTP and HTTPS load balancing capabilities. This allows customers to use Application Gateway for non-web workloads like database connections, message queuing systems, and other TCP-based applications that previously required separate load balancing solutions. Thank you for developing network load balancers — it's only how many years late on this one? They had a different solution; it just wasn't nearly as good. It sort of bothers me that they just shoved this into the Application Gateway. I'm like, why didn't you rebrand or do anything else? Because they already have Azure Load Balancer and something else, and they'd rather just confuse people. Or just call it a network load balancer like everybody else, I don't know. Well, Amazon had classic load balancers. They moved away from that. 
You know, you could just follow that model too. Yeah, those were technically ELBs originally, then you had ALBs, and even Terraform was like, wait, we actually called this thing an ELB and an ALB — then they just went to LBs with a type inside of it, because they were like, this is dumb, we have the same thing for basically the same purpose. Yeah, it's like, okay, we got it. Oh well, Amazon's on its way to six ways to do everything. Yeah, and if you weren't excited about the fact that you can now terminate your TLS and TCP there, you can also make your life even harder by supporting mutual TLS pass-through via the Azure Application Gateway, allowing your backend applications to validate client certificates and authorization headers directly while still benefiting from web application firewall inspection. This addresses a specific compliance requirement where organizations need end-to-end certificate validation but cannot terminate TLS at the gateway layer. The feature enables scenarios where backend services must verify client certificates for regulatory compliance or zero-trust architectures, particularly relevant for financial services, healthcare, and government workloads. So I was doing something with Claude, and I was architecting an idea I had, and it was making suggestions, and then it said, well, to do this connection we'll probably need mutual TLS, and I was like, no, no, that's not going to happen — what else you got for me? And it's like, well, you could implement Tailscale. I'm like, yes, we're doing that. That is pretty funny. While I totally get mTLS and how useful it is, it is always a pain to manage. It is a pain to manage, for sure. I mean, it works when you get it working, but getting it working is stressful and very complicated. I remember the days of stunnel and having to set that up for everything that didn't support native encryption. Oh God, I forgot about that. And it was like, I only barely half 
understood the encryption and how it worked, so it was just, you know, the best of bad options. And remember, we were waiting for Amazon to support MACsec? They were like, we're going to support MACsec, and we're like, come on, do it, we want to stop with these dumb stunnels. And then, like, 17 years later, they finally announced MACsec and finally did it. Yeah, you told us it was going to be out like five years ago, and we cared then; now we don't care — we've already done all the pain. I did stunnel and MongoDB because it didn't support encryption for the longest time. Yep, I remember that. That was a fun one. I don't want to do the math in my head of how many years ago, but like 15 years ago. That was fun — glad that disaster is behind us. In public preview, Azure API Management is adding support for agent-to-agent (A2A) APIs. Okay, so wait. You just added mTLS and TLS and TCP on a load balancer, but you're already like, no, we're going to support A2A right away? Come on. But that's a different service — it's on the APIMs. It's two different worlds at Azure: we're in 1995, and we're on the cutting edge of 2026 AI agent-to-agent. There's nothing in between. So APIM, if you want to use a single one, not in HA, will also cost you $2,700. For what? Oh, good. Not in HA. So if you want two in the same region, you're already at — let me do the math quickly — $5,400. Then if you want to have multi-zonal support, you might as well kiss your bill goodbye. I'm just going to take the outage at this point. It's probably going to be better for all of us. It's just going to self-heal in like five minutes, probably. It'll be fine. At least hopefully. If you're on v2 — v1 did not self-heal that quickly. No. No. Don't ask me after the show how I know. Right. See that scar on his back? That's where the knife went in. Yeah. That will require drinks. Well, if you're looking for something for your agents to talk agent-to-agent with — oh, and how! 
Claude Opus 4.5 is available to you in Microsoft Foundry, GitHub Copilot paid plans, and Microsoft Copilot Studio, expanding Azure's frontier model portfolio following the Microsoft–Anthropic partnership announced at Ignite. The model achieves 80.9% on SWE-bench, a software engineering benchmark, and is priced at one third the cost of previous Opus-class models, making advanced AI capabilities more accessible for enterprise customers. Cool, it's in Foundry. Thanks. Hooray. I'm just happy to have any Claude at all. Yeah, I wonder if they're going to move Copilot to Claude. Don't ask me. So you've been able to use it? Yes, but it's going to AWS — AWS US — and the documentation says so. So I assume when they get that, what was it, Iowa or whatever that new data center is, maybe at that point they will. Hey, when is this coming? I haven't really pointed it out to Ryan too much, because I didn't want him to start asking questions like, well, how did they secure that data going from Copilot on Azure to... I didn't want to get into it with him, but now he knows, so I have to keep moving. We've got a license agreement with someone else, and it's their job. Not my problem. Do you have to audit that they're actually doing something? Do you trust that Microsoft's actually doing that? I want to think so. All I have to do is point to their attestation. It is absolutely true: once I have a business agreement, I only have to secure the stuff that's in my responsibility boundary. The whole point of using a managed service is so that I can trust someone else to manage part of it. They're going to do a better job than I am. Most likely. I mean, back in the day, we had attested to Safe Harbor, and that was going to solve all the problems. Then it was thrown out in court. So, you know, wait till that attestation's thrown out. Burst my bubble. I'm just saying you're relying on a legal construct. And you're married to a lawyer. You should know better. 
Well, that's part of why I have the opinion I do. You realize very quickly how little it matters what's in a business agreement. Yeah. Well, something that comes to everyone's beloved Azure DNS: security policies now include a managed threat intelligence feed that blocks queries to known malicious domains. Thank God. This feature addresses the common attack vector where nearly all cyber attacks begin with a DNS query, providing an additional layer of protection at the DNS resolution level. The service integrates with Azure's existing DNS infrastructure and uses Microsoft's threat intelligence data to automatically update the list of malicious domains. Organizations can enable this protection without managing their own threat feeds or maintaining block lists, reducing operational overhead for the security team. This capability is particularly relevant for enterprises looking to implement defense-in-depth strategies, as it stops threats before they can even establish connections to command-and-control servers or phishing sites. Unless it's a zero-day attack, and then you might be hosed. The feature works alongside existing Azure Firewall and network security tools to provide comprehensive protection. General availability means the service is now production-ready with full SLA support across Azure regions, though pricing details were not specified in the announcement — so expensive, almost certainly, right? But being able to automatically take the results of a feed like this, I will take any day, just because these things are updated by many more parties, and faster, than I can ever react with our own threat intelligence. So that's pretty great. I like it. Yeah, I mean, it's a great feature. It integrates with the firewall, so it's a good next step in a lot of their DNS and security world. I mean, their DNS product, at least from what I've seen using it at my day job, is pretty solid. 
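The mechanics of DNS-layer blocking are simple: before resolving a name, check it (and its parent domains) against the blocklist. A toy sketch — the feed contents here are invented, and a real resolver-integrated feed does this transparently:

```python
# Illustrative only: a tiny domain blocklist check, matching a queried
# name or any of its parent domains against known-malicious entries.
BLOCKLIST = {"evil.example", "c2.badcorp.test"}

def is_blocked(domain):
    """Return True if the domain or any parent domain is blocklisted."""
    parts = domain.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c" — subdomains of a bad domain are bad too.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return True
    return False
```

The managed feed's value is exactly that you never curate `BLOCKLIST` yourself — Microsoft's threat intelligence updates it for you.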
So hopefully this is just the next level on top of it, and they're building on those same building blocks. Yeah. But again, is this one of those things that you have to pay for? I mean, typically, if it's a managed sort of service where they're blocking it, you have to, but I wonder if there's anything built in. It's now on my list to research more for my day job; I just haven't had time yet, to be honest with you. Well, they just launched it, so that's fair. Yeah, but I like to play with new shiny objects. I know. Yeah. Who doesn't love new shiny objects? It's great. Azure is introducing another v2 product — I love these — the Standard v2 NAT Gateway in public preview, adding zone redundancy for high availability in regions with availability zones. This upgrade addresses a key limitation of the original NAT gateway by ensuring outbound connectivity survives zone failures, which matters for enterprises running mission-critical workloads that require consistent internet egress. The Standard v2 SKU includes matching Standard v2 public IPs that work together with the new NAT gateway tier. Organizations using the original Standard SKU will need to evaluate migration paths, since zone redundancy represents a fundamental architecture change requiring new resource types rather than in-place upgrades. This release targets customers who previously had to architect complex workarounds for zone-resilient outbound connectivity, particularly those running multi-zone deployments of containerized applications or database clusters. The preview allows testing of failover scenarios before production deployment. The announcement lacks specific pricing details for the Standard v2 tier, though NAT gateway typically charges hourly resource fees plus data processing costs, and I assume this will as well. The fact that this is not just an upgrade I can check a box for — that I have to redeploy a whole new thing — annoys the crap out of me. 
It does make sense from Microsoft's perspective, because IP addresses are not zonally redundant by default — which also confuses me, since IP addresses then have to be in a different zone — so that's the way they've architected it. But the fact that NAT gateways were not multi-zoned still baffles my mind, along with the fact that they've had to re-architect this whole thing. I mean, there are other improvements — because it's multi-zoned you get more speed and connections and everything else associated with it — but it all just feels like this should have been built into the NAT gateway solution in v1, out the door, not v2. And it wasn't multi-zonal in the sense that it was abstracted away; you literally had to deploy a NAT gateway in every single AZ. You had to deploy a NAT gateway in every single AZ, and it's not as easy as it is in AWS to do that. Oh, man. That's a bummer. Because a NAT gateway can also have multiple IP addresses on AWS, so if you have enough traffic — it's not fun from an architectural perspective. So you can scale horizontally within a region by default, but not across regions — or sorry, AZs? I believe so. You can't scale horizontally, but you can scale vertically. And given the fact that everything was equivalent to a public subnet before they moved, you've had to have been dealing with this the last few months. So they pretty much got everybody onto this thing and then said, oh wait, here's this thing that fixes all the problems from forcing you to have private subnets and everything else recently. So it really feels like they should have done this one differently, but whatever. Yeah. 
I mean, my other issue with the naming of this whole thing is: why can't it just be NAT with multi-AZ support versus not, and be a checkbox? And maybe it requires redeployment or whatever in the background — fine. That's all noise to me. I mean, how many people know their NAT gateway outbound IP addresses unless they have IP restrictions? Most probably don't. But the bigger problem is just, like, when you have an outage because you weren't using the Ultra Disk — now you can be blamed for not using the v2 version of the NAT, because of course v2 is better than v1. And so that's just more fun you get to explain to your boss when things go wrong. That's one more number, of course. Yeah. Well, then there are migrations you have to do. Like APIM, which we were talking about above: APIM v2 is completely different from APIM v1, and even the architecture on the Azure side, on the back end, is completely different between them, but it's just a "v2." Yes, just a v2. Again, ask me how I know. Yeah. The other scar on his back. Well, if you're using Azure App Service, you can now hide those pesky 500 errors with a beautifully branded one, because it now supports custom error pages. Moving to general availability, this allows developers to replace default HTTP error pages with branded or customized alternatives. This addresses a common requirement for production applications, where maintaining a consistent user experience during errors is important for brand identity and user trust. The feature integrates directly into App Service configuration without requiring additional Azure services or third-party tools. Developers can specify custom HTML pages for different HTTP error codes like 404 or 500, which App Service will serve automatically when those errors occur. 
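Conceptually, the custom error page feature is just a lookup from status code to a branded page, with a fallback to the default. A toy sketch — the pages and the mapping here are illustrative, not App Service configuration syntax:

```python
# Illustrative mapping of HTTP error codes to branded pages, falling
# back to a generic page for codes without a custom override.
CUSTOM_PAGES = {
    404: "<html><body><h1>Page not found</h1></body></html>",
    500: "<html><body><h1>We'll be right back</h1></body></html>",
}
DEFAULT_PAGE = "<html><body><h1>Error</h1></body></html>"

def error_body(status_code):
    """Return the branded page for a status code, or a generic fallback."""
    return CUSTOM_PAGES.get(status_code, DEFAULT_PAGE)
```

The point of the managed feature is that the platform serves this mapping for you, so the generic page (and whatever infrastructure details it leaks) never reaches the user.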
The capability is particularly relevant for customer-facing web applications, e-commerce sites, and SaaS platforms where error handling needs to align with corporate branding guidelines, and the feature works across all App Service tiers, with support for custom domains and SSL certificates. No additional cost is associated with custom error pages beyond standard App Service hosting fees, which start at approximately $13 per month for the basic tier — which you will outgrow very quickly, needing the more expensive one. The general availability status means the feature is now production-ready with full support coverage, moving beyond the preview phase where it was available for testing. The documentation is available in the App Service custom error page guide. I used to not care about these features until I became the owner of the WAF service at the company, and now I'm all about this. Don't blame me — it's the app. I dealt with this with App Service at my day job before; it's crazy that this wasn't already there. The workarounds you had to do to make your own error pages and stuff like that were messy at best, if an option at all, right? So many times you just had to deal with whatever generic public error page. Well, then from a security perspective, you're leaking more of your infrastructure — what it's running on — and you're just giving people more information, which isn't always wanted. Microsoft Foundry is reaching general availability as an enterprise AI governance platform that consolidates security, compliance, and cost management controls for IT administrators deploying AI solutions. The platform addresses the growing need for centralized oversight as organizations scale their AI initiatives across Azure infrastructure. 
The service integrates with existing Azure management tools to provide unified visibility and control over your AI workloads, allowing IT teams to enforce policies and monitor resource usage from a single interface. This reduces the operational overhead of managing disparate AI projects while maintaining enterprise security standards. Foundry targets large enterprises and regulated industries that require strict governance frameworks for AI development, particularly organizations balancing innovation speed with compliance requirements. The platform helps bridge the gap between data science teams pushing for rapid AI adoption and IT departments responsible for risk. The general availability announcement indicates Microsoft is positioning Azure as the enterprise AI cloud to compete directly with AWS and Google Cloud for organizations that prioritize governance alongside their AI capabilities — which makes sense, but again, I feel like everyone's saying that. It seems a little overloaded on top of Foundry, isn't it? Everything is overloaded on top of Foundry, right? 
Like, anything data-related, right? Just throw it in. Foundry is just their version of Vertex and SageMaker — that's what I just need to think in my brain — with a really, really nice Power BI on top of it. Is it, though? Okay, I mean, that's because I still don't understand it. It's like a combination of SageMaker and Vertex married Databricks and had a baby. Yeah, that's what Foundry is — plus, you know, it has a report interface. That's really what it is. Yeah. I guess the confusing part for me was the Power BI sort of integration on these things. I mean, if you think about it, Google sort of has the same thing with Looker, and Amazon has their thing that no one uses — QuickSight. Yeah, QuickSight. So all three kind of have that same story; it's just that one of them is good, the other two are good-ish. Well, one's good, I guess. Two of them are good. Two of them are good, and one is a half-assed imitation of Tableau, I guess. I don't know. I'm still trying to figure out — did AWS buy QuickSight from somebody else originally? Because it's still a whole other interface. It definitely feels that way. It really feels like Macie v1 and all the other tools they just kind of bought. Except it's been decades. Yeah, and you're like, can you fix your shit? Yeah. I think they were trying to pull — because there's definitely the concept of being able to generate reports and show data for people who may not have an Amazon cloud principal ID. It sort of made sense to me, and they sort of followed the same path with the builder space and some of the CodeStar and CodeGuru sort of things that they did, but it just confused everything for me. 
Yes, QuickSight was an acquisition of some small company that was trying to get into the space, which they picked up probably for pennies and then didn't invest in. Clearly. Clearly, yeah. Well, they never fully integrated it into their ecosystem, which is not uncommon for things that people don't adopt. Yeah, I had high hopes for it, especially when they hired the old CEO from Tableau. I was like, ooh, maybe he really thinks this thing needs to get way better, and he'll bring over some people from Tableau I know who could make this better — but no, that didn't happen. So, no. I guess Washington does have non-competes, so maybe that's why. Well, if you weren't excited about governance, risk, and compliance in Foundry, let me tell you, I've got something else for you in Foundry, and that is a model router, which is now generally available as an AI orchestration layer that automatically selects the optimal language model for each prompt based on factors like complexity, cost, and performance requirements. This eliminates the need for developers to manually choose between different AI models for each of their use cases. The service supports an expanded range of models, including the GPT-4 family, GPT-5 family, GPT-OSS, and DeepSeek models, giving organizations flexibility to balance performance needs against their cost considerations. This addresses a practical challenge for enterprises deploying multiple AI models, where different tasks require different model capabilities — for example, simple queries could route to smaller, less expensive models, while complex reasoning tasks automatically use the more capable models. The routing layer integrates with Microsoft Foundry's broader AI infrastructure, allowing customers to manage multiple model deployments through a single interface rather than building custom routing logic for each of the models. 
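The "custom routing logic" that a model router replaces tends to look something like the sketch below: classify the prompt, then pick a tier. This is a hypothetical heuristic in the spirit of the feature — the model names, markers, and threshold are all invented, and Foundry's actual router uses its own learned classifier:

```python
# Hypothetical prompt-routing heuristic: cheap model for short, simple
# prompts; stronger model for long or reasoning-heavy ones.
def pick_model(prompt):
    """Return an (invented) model tier name for a given prompt."""
    reasoning_markers = ("prove", "step by step", "analyze", "derive")
    heavy = len(prompt) > 500 or any(m in prompt.lower() for m in reasoning_markers)
    return "large-reasoning-model" if heavy else "small-fast-model"
```

The managed router's pitch is exactly that you stop maintaining this function yourself and get cost/capability trade-offs (and, ideally, capacity-aware fallback) out of the box.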
No specific pricing details are provided in the announcement, though cost will likely vary based on the underlying models selected by the router and the usage patterns in your system. And this is a great area to also get caching, so I assume that will come at some point — maybe in 2075 if it's following the NAT Gateway v2 path, or it'll be released next week. One of the two. It can only be one of the two. One of the two, yeah. I mean, I'm looking forward to using this, just because, like we've talked about here multiple times, it's about using the right model at the right time and letting it kind of decide. And — it's not called out in here, but I can definitely see it being used for availability of the model, because I've definitely run into times in Azure where I can't get the tokens even though I'm like, here's my money, take my money, please take my money, and they're like, no, we don't have capacity. So here it's like, okay, well, you don't have this one, this next one's good enough. So if you can shove the shim layer in and let it be the routing tool for you, hopefully life becomes better, and I won't have to think about a lot of these things — it can kind of handle it. Azure is getting scheduled tasks, or as they call them, Scheduled Actions, a feature which is now generally available, providing automated VM lifecycle management at scale with built-in handling of subscription throttling and transient-error retries. This eliminates the need for custom scripting or third-party tools to start, stop, and deallocate VMs on a recurring schedule. And all I can see is a Windows box inside the Microsoft Azure environment running scheduled tasks — that's all I can see. The feature addresses common cost optimization scenarios where organizations need to automatically shut down development and test environments during off-hours or scale down non-production workloads on the weekend, reducing compute costs by 40 to 70% for environments that don't require 24/7 availability. 
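The scheduling decision behind "shut down dev boxes off-hours" is the part everyone has hand-rolled before. A minimal sketch, assuming invented tag names and working hours — Scheduled Actions expresses this declaratively rather than in code:

```python
# Sketch of the "should this VM be running right now?" decision that
# Scheduled Actions automates. Tags and hours are illustrative.
from datetime import datetime

def should_run(vm_tags, now):
    """Dev/test VMs run only on weekday working hours; others always run."""
    if vm_tags.get("env") != "dev":
        return True  # only dev/test VMs are on the schedule
    if now.weekday() >= 5:  # Saturday (5) or Sunday (6)
        return False
    return 8 <= now.hour < 18  # weekday working hours only
```

The feature's real value add over this loop is the operational plumbing: retries on transient errors and handling subscription throttling when you deallocate hundreds of VMs at once.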
I mean, I really appreciate that they got this. Again, thank you for copying every other cloud that's had this forever. And none of the clouds, though, have the feature I really want, which is: hey, I just knocked on the door of these services that are offline — please hold a second while I start them up because you made a request, and how long would you like to use those services? Which would be amazing, and no one's built that. And I keep thinking someone will, but no one has. Yeah, it's still, you know — I've done similar things with complex pipelines, where you do a whole bunch of things, but you always have to build it all yourself, the spaghetti of it, and then you end up with this crazy cold-start issue where people are mad because they don't know it's turned off. But it is funny to me that they're just releasing this in Azure now, because I just assumed that this had always been in place. But I guess not. I assume a lot of things are in place, and then I go look for them and they don't exist, so I've learned to stop assuming that with Azure. And this is one where I'm like, yeah — I guess now that I think about it, I've always had to manage the orchestration timing outside; I can't think of actually being able to set it. AWS definitely added this feature a couple of years ago — I mean, it had to be at least five years ago at this point. It just feels like this is for when you're running your corporate or dev environment or stuff like that, or you've lifted and shifted and you still want to be able to turn stuff on and off for cost savings. Because this isn't, at that point, leveraging scale sets, auto scaling groups, you know, whatever you want to call them, or any other automated scaling thing. This is: hey, I have 15 dev boxes, and I want to turn them off on the weekend when our developers aren't working, 
Or I manually have to configure these Windows boxes or this Linux box to do this thing because we haven't automated it. Cool, let me turn it off when we're not using it. Not that there aren't good use cases for it. InstallShield is the one that comes to mind, or any other tool that has MAC-address-based licensing. So there are definitely reasons for it. And I've definitely written enough custom code to do this in my career, probably multiple times at this point. So it's a useful feature. It's just still one of those: why wasn't this there a while ago? That's the theme of Azure this week. I mean, scheduled tasks were hard to scale at this level, so... Or maybe they finally discovered crontab and cron, I don't know.

Well, that is another fantastic week here in the cloud, minus AWS. So I guess we will wrap it up here. We do have an after show today, so for those of you who like to hear us talk about silly things, stick around after the close to hear us talk about those. But have a great week, and we'll see you next week to do an Amazon extravaganza. Content! Woo! Bye, everybody. Bye, everyone.

And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
We have an after show, and it's about how AI is infecting the world. The most recent example is that NORAD, the people who handle missiles in defense of the country, but who have also always had a cute little annual tradition of tracking Santa Claus as he crosses the world delivering presents to billions of children, has decided to partner with OpenAI. Basically, OpenAI is partnering with NORAD to add AI-powered holiday tools to the annual Santa tracking tradition, creating three ChatGPT-based features that turn kids' photos into elf portraits, generate custom toy coloring pages, and build personalized Christmas stories. This represents a consumer-friendly application of generative AI that demonstrates how large language models can be packaged for mainstream family use during the holidays. The collaboration shows OpenAI pursuing brand-building partnerships with trusted institutions like NORAD to normalize AI tools in everyday contexts. From a technical standpoint, these tools showcase practical limitations of image generation, and yada yada yada. Although overall I'm like, keep your AI away from my Santa Claus. That's what I want to say. I will say, I've never thought about using one of the AI image generation models to do, like, sketches for my kids to color. Oh, I've done that. Yeah, I've never thought about that one. I mean, I use it for stories with them and, you know, lots of other fun things with them, but never that. That's a pretty interesting use case I'd never thought of, because my daughter's into coloring a lot right now, and that would be very interesting to do. So that will be my weekend activity. And there goes my printer. We can't hear it, so it's fine. No, I meant, like, he's just not gonna be able to afford ink. I mean, you're also assuming your printer's gonna work, because when was the last time you used it? My wife probably did, for a return. That's the number one use
of my printer right now: return labels. Too weird. Yeah, in the holiday season we send cards and we just print them, but that's about it. Yeah, it's used about four times a year. The printer's probably, like, seven, eight years old at this point. Still just works, because I'm probably still on the first ink cartridge. Well, that's impressive, because those things aren't even full. You get, like, one printout out of those these days. Yeah, and then it's like, you need more ink, and then you need this weird color, but I'm printing black and white. No, my printer's just a black-and-white laser, and anything else, we just... Yeah, I did upgrade to a color laser printer, and then I bought toner for it one time, and I probably will never buy toner for it ever again in my lifetime, because it has, you know, like a 45,000-page cycle count on the toner cartridges. I'm like, yeah, that's not gonna happen anytime soon. But, uh, marry a lawyer and you can go through that pretty quick. Yeah, I don't have that problem. I go through inkjet ink like it's going out of style for photos, when Brandon needs to print out photos, you know, for something. Although we normally print them out at Kinko's, or, I guess... does Kinko's even exist? It's part of FedEx. FedEx Office. But yeah, you know, whatever, we just go there and print them out, because that's what we do. Cheaper, faster at this point. But I do sometimes just have to print something out here when necessary, so it's all good. Yeah. So anyways, they need to leave my Santa Claus alone. That's my opinion. Although I'll send you my elf photos. Yeah, I'm totally gonna make elf photos. I'm gonna abuse my, like, older children, who are not gonna find this fun at all. Well, maybe. Maybe, uh, closer to Christmas we can share our elf photos and coloring pages that you've made. I like it.
You can make all your kids color the CloudPod logo. That could be fun. It could be fun. See if they're suitable for public consumption. I can't even log into my printer's web interface. Well, you're starting out strong. I was curious. I don't even know what the IP address of my printer is. I just know that it shows up in the printer thing. I probably need to update it. I just logged into my UniFi in order to get the IP address of my printer, to figure all this out. I went down that hole. You know UniFi will do static IPs and hostnames, right? That would require me to actually know what the hostname I would have set was. Again, I don't use the printer that often. Mine's called printer.local. And this is why we have v2s, Ryan, because when your printer dies, it's going to be printer-v2.local. This is how Azure named stuff. Got it. Azure and Ryan have the same naming convention. Yeah, it's true. Well, I'm sure our listeners love hearing you guys talk about your printers, but we're gonna let them go now. This is fantastic content. You're right. Yeah, it is the after show. It is the after show, that's true. Bye, everyone.