Welcome to the Tech Brew Ride Home for Thursday, January 8th, 2026. I'm Brian McCullough. Today, Grok and X have been letting people create some risque and, in some cases, probably illegal images, and governments around the world are getting pissed. Does Google think AI can obviate your entire email inbox? And would you upload your medical history to ChatGPT? Sam Altman is asking you to. Here's what you missed today in the world of tech. You may have noticed that your customers love webinar and video content, but if you've ever put together a webinar or video, then you know that it can eat up a lot of your time and budget. But now, thankfully, there's a singular tool that can streamline your team's video and webinar workflows: Wistia. Wistia can scale your content output with AI-powered tools that help you create, edit, and repurpose videos and webinars fast. And speaking of webinars, you can host engaging, easy-to-set-up webinars in Wistia, too, complete with built-in analytics. With Wistia, you don't have to pay for multiple video tools, hop between platforms, or constantly re-upload files. Create, edit, collaborate, and publish all in one place. Head to wistia.com slash brew to learn more. That's W-I-S-T-I-A dot com slash brew. With Wistia, you can expect less work and more plays. So, I was kind of really hoping not to have to talk about this. No such luck, because at this point, I wouldn't be doing my job if we didn't talk about it. So, we're going to talk about it. Last week, people noticed that X users were posting sexualized images of people, including minors. Apparently, on X, you could ask the AI agent Grok to generate these images in response to various user prompts. X did take some of these images down, and XAI has said that this happened because of a, quote, lapse in safeguards, end quote, on Grok, but it kind of hasn't stopped.
Next, on top of non-consensual porn images, people noticed that X users seemed to be using Grok to alter images to depict real women being sexually abused, humiliated, hurt, and even killed. The nation of India ordered X to stop Grok from generating obscene content, giving it 72 hours to submit an action-taken report or risk losing its safe harbor protections in that country. This was followed by stern warnings from Malaysia and various countries in Europe. At one point, a researcher estimated that between January 5th and January 6th, so earlier this week, 85% of all Grok images were sexual images. And this morning, the EU says it is ordering X to retain all internal documents and data relating to Grok until the end of 2026, after criticizing Grok's non-consensual image generation. And Wired says things are even crazier on Grok's web app. Quote, Elon Musk's Grok chatbot has drawn outrage and calls for investigation after being used to flood X with undressed images of women and sexualized images of what appear to be minors. However, that's not the only way people have been using the AI to generate sexualized images. Grok's website and app, which are separate from X, include sophisticated video generation that is not available on X and is being used to produce extremely graphic, sometimes violent, sexual imagery of adults that is vastly more explicit than images created by Grok on X, and may also have been used to create sexualized videos of apparent minors. Unlike on X, where Grok's output is public by default, images and videos created on the Grok app or website using its Imagine model are not shared openly. If a user has shared an Imagine URL, though, it may be visible to anyone. A Wired review of a cache of around 1,200 Imagine links, either indexed by Google or shared by a deepfake porn forum, shows disturbing sexual videos that are vastly more explicit than images created by Grok on X.
By the way, the article goes on to describe some of these images, which I'm not going to do. Quoting again: The creator of Grok, the Elon Musk-owned artificial intelligence firm XAI, did not respond to Wired's request for comment about the explicit videos created with Grok Imagine. Since Grok started flooding social media platform X with AI-generated sexual photos of women and what appear to be minors more than a week ago, Musk and X have stated that they take action against child sexual abuse material. Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content, Musk has posted on X. Like other tech firms that are consistently battling a deluge of CSAM, XAI's policies state that, quote, sexualization or exploitation of children is prohibited on its services, as is, quote, any illegal, harmful or abusive activities. The company also has processes in place to try to detect and limit CSAM being created. In September, a Business Insider report, for which the outlet said it spoke to 30 current and former XAI workers, found 12 of these staff members had, quote, encountered both sexually explicit content and written prompts for AI CSAM on its services. The workers described systems that try to detect AI CSAM and prevent the artificial intelligence models from being trained on the data. Apple and Google, which make Grok available on their app stores, did not respond to Wired's request for comment. Unlike other major generative AI companies such as OpenAI and Google, XAI has allowed Grok to create AI pornography and adult material. Previous reporting has noted how it is possible to create hardcore pornography with Grok, which has a spicy mode. Quote, if users choose certain features or input suggestive or coarse language, the service may respond with some dialogue that may involve coarse language, crude humor, sexual situations, or violence, XAI's terms of service say.
Over the last few weeks, and now this, it feels like we've stepped off the cliff and are free-falling into the depths of human depravity, says Clare McGlynn, a law professor at Durham University and an expert on image-based sexual abuse, who says she is deeply concerned about the Grok videos. Some people's inhumane impulses are encouraged and facilitated by this technology without guardrails or ethical guidelines. McGlynn says that allowing AI-generated porn that isn't attempting to depict a specific real-life person raises a host of questions about what protections are put in place to try to prevent potentially unlawful pornography, such as depictions of bestiality or rape, and the impact it can have. For me, the issue then becomes the impact if there is a free-for-all on the nature of porn created and then shared that normalizes and minimizes sexual violence, McGlynn says, while noting that explicit AI images and videos of real people are already unlawful in a number of countries, end quote. Google has rolled out an AI inbox view for Gmail showing users to-dos and summaries of topics rather than a traditional email list. This is coming first for US, quote, trusted testers, end quote. Quoting The Verge: It's a potentially huge shift in how you might navigate your Gmail, especially if you have a lot to sort through, or if you, like me, already use your inbox as a to-do list. In a demo video, AI Inbox suggests tasks like rescheduling a dentist appointment, replying to a coach, and paying a sports tournament fee, and also summarizes topics to catch up on, like a team's soccer season and a family gathering. Google is initially rolling AI Inbox out to trusted testers in the US using browsers, and it will be available first for consumer Gmail accounts. You can't use it with Workspace accounts yet. There's also not yet a way to mark if you have completed one of the suggested items.
It's something Google is working on, according to the company's VP of product for Gmail, Blake Barnes, meaning that Gmail won't yet know if, for example, you call somebody based on Gmail's recommended action rather than emailing them. Barnes also says there's no limit to the number of to-dos Gmail might suggest. While AI Inbox tries to prioritize what's important to you based on signals like who you email and what things you respond to the quickest, too many to-dos could just perpetuate inbox overwhelm, but with a new design. Still, given how much of our lives flows through our inboxes, if AI Inbox is even somewhat successful at making timely recommendations and summarizing important emails, the feature could be quite useful. All consumer Gmail users are also getting suggested replies with personalization, AI overviews for thread summaries, and Google's Help Me Write tool, all features Google has previously included with paid plans, now at no extra cost. Subscribers to Google One AI Pro ($19.99 per month) and Ultra ($249.99 per month) plans in the US will be getting a Grammarly-like proofread feature, as well as AI overviews in search results, both available in browsers. Google's example for the latter is, who was the plumber that gave me a quote for the bathroom renovation last year? If you don't want to use AI features in Gmail, you can turn them off, though that disables other smart features like spell checking. The company also says that it doesn't use Gmail content for training its Gemini AI models, end quote. It's the holidays, which means you're probably trying to figure out what to get the people in your life who live in back-to-back meetings. This isn't some sci-fi concept, it's PLAUD, P-L-A-U-D. It snaps onto the back of your phone and records phone calls, meetings, and conversations. This isn't just note-taking, though.
It can summarize meetings, generate to-do lists, draft emails, extract insights, analyze perspectives, and help you make better decisions, all with full contextual awareness across your past conversations and meetings. Black Friday is coming, and PLAUD is giving Tech Brew Ride Home listeners 20% off. Search P-L-A-U-D on Google or Amazon and get 20% off. OpenAI has unveiled ChatGPT Health, which lets users import medical records and other data from wellness apps into ChatGPT, available to a small group via a waitlist. Quoting The Verge: ChatGPT Health is a sandboxed tab within ChatGPT that's designed for users to ask their health-related questions in what it describes as a more secure and personalized environment, with a separate chat history and memory feature from the rest of ChatGPT. The company is encouraging users to connect their personal medical records and wellness apps, such as Apple Health, Peloton, MyFitnessPal, Weight Watchers, and Function, quote, to get more personalized, grounded responses to their questions. It suggests connecting medical records so that ChatGPT can analyze lab results, visit summaries, and clinical history; MyFitnessPal and Weight Watchers for food guidance; Apple Health for health and fitness data, including movement, sleep, and activity patterns; and Function for insights into lab tests. On the medical records front, OpenAI says it's partnered with BeWell, which will provide backend integration for users to upload their medical records, since the company works with about 2.2 million providers. For now, ChatGPT Health requires users to sign up for a waitlist to request access, as it's starting with a beta group of early users, but the product will roll out gradually to all users regardless of subscription tier. The company makes sure to mention in the blog post that ChatGPT Health is, quote, not intended for diagnosis or treatment, but it can't fully control how people end up using AI when they leave the chat.
By the company's own admission, in underserved rural communities, users send nearly 600,000 healthcare-related messages weekly on average, and 7 in 10 healthcare conversations in ChatGPT happen outside of normal clinical hours. In August, physicians published a report on a case of a man being hospitalized for weeks with an 18th-century medical condition after taking ChatGPT's alleged advice to replace salt in his diet with sodium bromide. Google's AI Overview made headlines for weeks after its launch over dangerous advice such as putting glue on pizza, and a recent investigation by The Guardian found that dangerous health advice has continued, with false advice for liver function tests, women's cancer tests, and recommended diets for those with pancreatic cancer. In a blog post, OpenAI wrote that, based on its de-identified analysis of conversations, more than 230 million people around the world already ask ChatGPT questions related to health and wellness each week. OpenAI said that over the past two years, it's worked with more than 260 physicians to provide feedback on model outputs more than 600,000 times over 30 areas of focus to help shape the product's responses. ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the trade-offs of different insurance options based on your healthcare patterns, OpenAI claims in the blog post. One part of health that OpenAI seemed to carefully avoid mentioning in its blog post was mental health. There are a number of examples of adults or minors dying by suicide after confiding in ChatGPT, and in the blog post, OpenAI stuck to a vague mention that users can customize instructions in the health product, quote, to avoid mentioning sensitive topics.
When asked during a Wednesday briefing with reporters whether ChatGPT Health would also summarize mental health visits and provide advice in that realm, OpenAI's CEO of Applications, Fiji Simo, said, quote, mental health is certainly part of health in general, and we see a lot of people turning to ChatGPT for mental health conversations, adding that the new product can handle any part of your health, including mental health. We are very focused on making sure that in situations of distress, we respond accordingly and we direct toward health professionals as well as loved ones or other resources. It's also possible that the product could worsen health anxiety conditions such as hypochondria. When asked whether OpenAI had introduced any safeguards to help prevent people with such conditions from spiraling while using ChatGPT Health, Simo said, we have done a lot of work on tuning the model to make sure that we are informative without ever being alarmist, and that if there is action to be taken, we direct to the healthcare system, end quote. When it comes to security concerns, OpenAI says that ChatGPT Health, quote, operates as a separate space with enhanced privacy to protect sensitive data, and that the company introduced several layers of purpose-built encryption, but not end-to-end encryption, according to the briefing. Conversations within the health product aren't used to train its foundation models by default, and if a user begins a health-related conversation in regular ChatGPT, the chatbot will suggest moving it into the health product for additional protections, per the blog post. But OpenAI has had security breaches in the past, most notably a March 2023 issue that allowed some users to see chat titles, initial messages, names, email addresses, and payment information from other users.
In the event of a court order, OpenAI would still need to provide access to the data where required through valid legal processes or in an emergency situation, OpenAI's head of health, Nate Gross, said during the briefing. When asked if ChatGPT Health is compliant with HIPAA, Gross said that in the case of consumer products, HIPAA doesn't apply in this setting. It applies toward clinical or professional healthcare settings, end quote. I'm flying back home after I post this today, so this is probably time for me to give you a little wrap-up of what I saw here at CES. Yesterday, I did go see the Transportation and Mobility Hall, and while it's not like there were no cars, I can see what they were talking about yesterday. In years past, automakers had big, flashy reveals of concept cars on huge stages and the like, and there just wasn't any of that, basically at all. I also made it to what I call the Odds and Ends Hall, where a thousand brands you've never heard of hawk their wares. It's a good place to see trends in products years ahead of time. Fifteen years ago, that whole place was all about phone cases. The equivalent to that today is power bricks and smartphone battery boosters, you know, like MagSafe stuff. One trend I'm going to call right now: you are about to see four to five different brands launch Color E Ink wall art, and actually, it makes a lot of sense. I kind of want to get one right now. You upload photos, hang the frame on your wall, and you can have a different photo or a different piece of art every day. If you do it right, because it's E Ink, a single charge can last you a whole year. And I saw these in person. Color E Ink has reached the tipping point where, again, if done well, it looks really, really good, like better than a screen for the explicit purpose of things like, you know, art on a wall. Also, the Mode X bomber jacket has pockets that will wirelessly charge your phone when your phone is in the pockets.
Home battery and backup power solutions were everywhere. Robotic, AI- and camera-enabled bird feeders were weirdly everywhere. Robotic pool cleaners were weirdly everywhere. Though I did see this one cool pool thing. I posted a video of it on X yesterday. So you take this small, maybe two-foot-tall, boxy device that has what looks like a fan on it, and you just hang it on the side of your pool in the water. You don't have to install anything. You literally just hang it. And it generates a strong current of water so that if you step into that current, presto, in your average, regular, everyday pool, a pool of any size really, you can swim laps in place against that current for exercise. What else? Finally, I saw, well, there's no real way I can describe this to you. You have to check it out for yourself. It's called The Handy. You can find out about it at thehandy.com. I am not being paid to endorse this, and I am not being coy by avoiding talking about it. Let's just say Handy has invented a new mechanical way to do something that I'm not entirely sure you should trust a machine to do. Talk to you tomorrow.