OpenAI Podcast

Episode 11: Shaping Model Behavior in GPT-5.1

29 min
Dec 2, 2025
Summary

OpenAI researchers discuss GPT-5.1's improvements in reasoning capabilities, personality steering, and model behavior customization. The episode explores how all models in ChatGPT now function as reasoning models, how to balance user freedom with safety guardrails, and the role of memory and context in creating warmer, more personalized AI experiences.

Insights
  • All ChatGPT models are now reasoning models that dynamically decide when to think deeply, improving performance across instruction-following and complex tasks without requiring explicit user prompts
  • Model 'personality' is not a single feature but the entire product experience—encompassing context windows, latency, UI design, and response style—requiring holistic optimization rather than isolated tweaks
  • Post-training involves balancing subjective, context-dependent goals (warmth, creativity, bias handling) without ground truth answers, making it as much art as science
  • User feedback quality depends heavily on conversation context; sharing chat links enables debugging, while vague feedback ('it felt cold') requires deeper investigation into system components
  • Customization and steerability are becoming critical differentiators; users want inference-based personalization (memory, context) paired with transparent control over what the model infers about them
Trends
  • Shift from single monolithic models to systems of specialized models with intelligent routing (auto-switcher) based on task complexity and user needs
  • Increasing emphasis on emotional intelligence (EQ) metrics alongside traditional capability benchmarks, measured through user signals and reward model training
  • Move toward proactive, memory-driven personalization where models infer user expertise and context rather than requiring explicit prompt engineering
  • Balancing safety and usability through 'safe completions' that attempt to fulfill requests while avoiding harm, replacing binary refusal patterns
  • Expansion of model customization through personality/style features, positioning AI as a configurable tool rather than a fixed experience
  • Recognition that subjective domains (creativity, bias, tone) require nuanced, context-aware handling rather than blanket restrictions
  • Growing importance of transparent inference: users want to know what the model has learned about them and retain control to override or delete inferred attributes
  • Post-training becoming increasingly sophisticated, treating model behavior as a system-level problem involving context windows, rate limiting, and feature integration
Topics
  • GPT-5.1 Model Architecture and Reasoning Capabilities
  • Post-Training and Reinforcement Learning from Human Feedback (RLHF)
  • Model Personality and Response Style Customization
  • Emotional Intelligence (EQ) Measurement in AI Systems
  • User Memory and Context Window Management
  • Safety Guardrails vs. User Freedom Trade-offs
  • Auto-Switching Between Reasoning and Chat Models
  • Custom Instructions and Instruction Following
  • Bias Detection and Handling in Subjective Domains
  • Prompt Engineering and User Guidance
  • Conversation Context Preservation
  • Model Behavior Evaluation and Benchmarking
  • Creativity and Expressive Range in Language Models
  • User Feedback Collection and Analysis
  • AI Transparency and User Control
Companies
OpenAI
Host company; episode discusses GPT-5.1 development, post-training research, and product strategy for ChatGPT
Vanderbilt University
Referenced as affiliation of scientist working with OpenAI on science applications of language models
People
Andrew Mayne
Host of the OpenAI Podcast; conducts interview with research and product leads
Christina Kim
Research lead at OpenAI working on post-training; discusses model behavior shaping and reward configuration
Laurentia Romaniuk
Product manager at OpenAI focused on model behavior; discusses personality features and user customization
Kevin Weil
Referenced as heading OpenAI for Science; discussed in previous episode about model capabilities in scientific domains
Alex Uchovska
Scientist working with OpenAI and professor at Vanderbilt; discussed prompt engineering and model priming in a previous episode
Daniel Kahneman
Behavioral economist; referenced for System 1 and System 2 thinking framework applied to model reasoning
Quotes
"For the first time ever, all of the models in chat are reasoning models. So the model right now can decide to think. It's like a chain of thought. And it'll decide how much it wants to think based on a prompt."
Christina Kim
"Personality, though, for most of our users, I think is something much larger. And it's the whole experience of the model."
Laurentia Romaniuk
"Part of the art here is figuring out how to pull out these quirks of the model that can come across as personality without breaking steerability, which is what users ultimately want."
Andrew Mayne
"You should be able to get the experience that you want with chat. With 800 million users, there's no way that one model personality can service all those people."
Laurentia Romaniuk
"Intelligence too cheap to meter. I think we're just going to have such incredibly smart models out for people and there's so many things that could be possible."
Christina Kim
Full Transcript
Hello, I'm Andrew Mayne, and this is the OpenAI Podcast. Today, our guests are Christina Kim, who's a research lead working on post-training at OpenAI, and Laurentia Romaniuk, who's a product manager focused on model behavior. We're going to be talking about GPT-5.1, what makes the model better, how they've been focusing on making its personality steerable, and where they see things headed in the future.

"For the first time ever, all of the models in chat are reasoning models." "Personality, though, for most of our users, I think is something much larger, and it's the whole experience of the model." "You should be able to get the experience that you want with chat." "Part of the art here is figuring out how to pull out these quirks of the model that can come across as personality without breaking steerability."

I'm very excited to talk about the models and how they've been changing over time. Using the word "model" also feels sort of funny now, because it seems like there's so much more. Everything starts in research, and when GPT-5.1 was being planned, what were the goals?

Yeah, for us, one of the main goals was to address a lot of the feedback we'd been getting about GPT-5, but we'd also been doing a lot of work to make 5.1 Instant into a reasoning model. The most exciting thing personally for me with the 5.1 release is that, for the first time ever, all of the models in chat are reasoning models. The model right now can "decide to think," as we say. It's like a chain of thought, and it'll decide how much it wants to think based on the prompt. If you're just saying hi to the model, "what's up," it's not going to be thinking. But say you ask it a harder question, and it'll decide how much it wants to think. That gives it time to refine its answer, work through things, call tools if necessary, and then come back to give you an answer.

Kind of what Daniel Kahneman calls System 1 and System 2 thinking.

Yes. Having a reasoning model as the default model for everyone just gets you a much smarter model. And with much smarter models, you get improvements across the board, especially for things like instruction following, and even for use cases people might not think require much reasoning. Having improved intelligence, having the model actually think before it responds to certain queries, really helps. We've seen that improve evals across the board.
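To make that adaptive-thinking behavior concrete, here's a minimal sketch of what it looks like from the API side, assuming the OpenAI Python SDK's Responses API and its reasoning-effort setting; the model name and effort values shown are illustrative, not a definitive reference for the product behavior described above.

```python
# Minimal sketch: contrasting low and high reasoning effort.
# Assumes the OpenAI Python SDK's Responses API; treat the model
# name and effort values as illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A trivial greeting: skipping the chain of thought keeps latency low.
quick = client.responses.create(
    model="gpt-5.1",
    input="hi, what's up?",
    reasoning={"effort": "none"},
)

# A harder question: higher effort gives the model room to work
# through intermediate steps (and call tools) before answering.
hard = client.responses.create(
    model="gpt-5.1",
    input="Prove that the sum of two odd integers is always even.",
    reasoning={"effort": "high"},
)

print(quick.output_text)
print(hard.output_text)
```

In ChatGPT itself, per the conversation above, the model makes this effort decision on its own for each prompt; pinning it explicitly is mainly an API-side control.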
When you product-manage something like this, you have to explain to people what's different. That's probably a challenge, but how would you explain the difference between GPT-5 and GPT-5.1?

Yeah. First of all, it is difficult, because there's so much changing. But in this case, what we wanted to speak to were things we'd heard as feedback from the community. With the GPT-5 launch, one of the things we heard was that the model felt like it had weaker intuition and that it was less warm. When we dug into that, what we found was a handful of different things. First of all, it wasn't just how the model was responding, the model's innate behavior; it was also things around the model. As an example, the context window wasn't carrying enough information about what users had said previously, and that can feel like the model is forgetting something really important that you told it, that you were hoping it would hold on to. If you say you're having a really bad day and the model forgets that after 10 turns, that can feel really cold. So that's something we adjusted as part of this launch.

Some of it was actually the way the model was responding. But something new we introduced with GPT-5 was the auto-switcher, which would move you between chat and reasoning models, and those have slightly different response styles. That can feel really jarring or cold if you're talking to the model about how you're having a bad day, and then you say part of it is that you got this awful cancer diagnosis, so the model switches you to thinking and you get a very clinical answer from a model that was just walking you through a problem you were having earlier. So a lot of the changes we made were about how, in aggregate, we make sure this model feels warmer, even though we were changing a lot under the hood.

Another thing we looked into was instruction following generally. 5.1 is much better at following custom instructions, and that was another piece of feedback we were hearing: every model we release is going to have its own quirks and slightly different behaviors, and people actually don't mind that too much as long as they can control it, as long as they can say, "hey, that was weird, stop." But if the model can't carry that context forward, if it can't hold on to the custom instructions, that's a problem. So we worked to enhance the custom instructions feature so that it more consistently carries instructions forward.

And the last thing I'll say is that a lot of this stuff is personal preference. That's why we introduced style and trait features like personality, which let users guide the model into certain response formats so they have a little more control over exactly how ChatGPT responds for them.

The switching is interesting, because there are multiple models now, not just one, and you've articulated why you need that. When we talk about a switcher and different models, I know for most people that can be kind of confusing. How would you unpack that?

Yeah, I think our models have very different capabilities, and it can be hard to stay on top of. Part of it is just continuing to try the different things in our app, but certainly part of the product work is making sure we have the right UIs to guide users to the correct model. That can be the model switcher, learning what sort of answers are most helpful to users in different contexts, looking at different evals. For example, for reasoning models, if people want something that's very scientifically accurate and very detailed, we might look at an eval to see whether we're answering that need on those sorts of prompts. And we can forecast where to switch users to.
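The auto-switcher discussed here is itself a trained model weighing many signals, but a toy heuristic can make the routing idea concrete. Everything below, the model names, the complexity cues, and the latency rule, is hypothetical; it sketches only the shape of the decision.

```python
# Toy illustration of auto-switcher-style routing: send a request to a
# fast chat model or a slower reasoning model. The real system is a
# trained model over many signals; these names and rules are made up.
from dataclasses import dataclass

@dataclass
class RouteDecision:
    model: str
    reason: str

def route(prompt: str, latency_sensitive: bool) -> RouteDecision:
    # Crude proxies standing in for learned signals such as predicted
    # factuality gain from reasoning and acceptable wait time.
    looks_complex = len(prompt) > 400 or any(
        cue in prompt.lower()
        for cue in ("prove", "debug", "analyze", "step by step")
    )
    if looks_complex and not latency_sensitive:
        return RouteDecision("gpt-5.1-thinking", "complex task, wait is acceptable")
    return RouteDecision("gpt-5.1-instant", "simple or latency-sensitive request")

print(route("hi, what's up?", latency_sensitive=True))
print(route("Debug why this distributed lock deadlocks under load.", latency_sensitive=False))
```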
Tina, as far as the switcher, and the fact that the base model everybody has, even on the free tier, is now a reasoning model: what does that really mean, and what's the impact?

Yeah, I think there are a lot of open research questions for how we want to think about this. Like you said, it's a faster model, but it doesn't necessarily need to be dumb. The idea is that we want to get the most intelligent model we can for everyone. And I think this opens the door to thinking about more interesting things we could do with a very state-of-the-art frontier model. Something that's going to think for much longer, like deep research, where you have it thinking for minutes, maybe that's better used in the background; you can call it as a tool. So there are a lot of open research questions about what we want to do. But I do think we're going to be in a world where we have a system of models, not just a single model, and lots of different tools.

When we think of 5.1, I think people assume it's one singular set of weights. But it's really this reasoning model, this lighter reasoning model, this auto-switcher, which is also a model in itself, all of these different things, plus different tools that are also backed by different models. As we get smarter models, this system of things opens up more interesting use cases and more interesting product implications.

With 800 million users, you probably get a lot of user feedback. Beyond the sheer volume of it, how do you sort through it, make sense of it, and figure out how to use it?

Yeah, a lot of it actually starts with a conversation link. When we can see the conversations users are having, we can see exactly what happened in that conversation and start dissecting things so we can target a solution. As an example, if we get feedback from a user like, "hey, I had this really weird experience with the model, it said something very cold," or "the sentences felt very clipped," and I can see that conversation link, I can say, oh, that user was in an experiment, and this is a good example of why this particular experiment might have rough edges for certain users. And at least for the auto-switcher, which takes you from 5.1 chat to 5.1 reasoning, we're looking at different signals from users to figure out: is this working for them or not? How is each response performing on factuality? What does the latency look like? Because not all users want to wait, even if they'd get a better answer. So it's a bit of art and science, balancing a bunch of different signals to figure out when to switch and how to do that most effectively.

When you're trying to improve a model from an intelligence point of view, an IQ point of view, we have benchmarks and evals for that. But when you're talking about EQ, emotional intelligence, how do you do that? How do you measure progress there?

Yeah, this is something that's very open-ended. One of the things on my research team's agenda is what we call user-signals research: training reward models and getting signals during RL that we can use against our production data. This type of research is really interesting because I think we can learn a lot about intent.
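A loose illustration of the balancing act Christina describes: during RL, several judged qualities have to be folded into one training signal. The signal names and weights below are invented for illustration; OpenAI's actual reward configuration is not public.

```python
# Hypothetical sketch: blending multiple judge scores into one scalar
# reward. Signal names and weights are invented; they are not OpenAI's
# actual reward config.
def combined_reward(scores: dict[str, float]) -> float:
    weights = {
        "helpfulness": 0.45,
        "factuality": 0.30,
        "warmth": 0.15,  # the subjective quality users describe as "warm"
        "instruction_following": 0.10,
    }
    return sum(w * scores.get(name, 0.0) for name, w in weights.items())

# Two candidate responses scored by hypothetical reward models: the
# less clinical one wins under this particular mix.
clinical = {"helpfulness": 0.90, "factuality": 0.95, "warmth": 0.20, "instruction_following": 0.90}
warmer = {"helpfulness": 0.85, "factuality": 0.90, "warmth": 0.80, "instruction_following": 0.90}
print(combined_reward(clinical))  # 0.81
print(combined_reward(warmer))    # 0.8625
```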
And when we think about EQ, it only gets better with smarter models, because it's really about understanding what the user wants, the context of what they want, and how the model should best respond, given that there are this many other messages in the conversation and you know this much about the user's memory and history.

And then I think there's another element of EQ. When I think of what makes a human with high EQ, it's their ability to listen, their ability to remember what you've been saying, and certainly their ability to pick up on the subtle signals Tina is alluding to with user signals. Some of this, as I was noting earlier, is making sure the context window is carrying the right information forward, or that memory is being logged correctly, or even having a style that resonates with the user. With the personality features we launched alongside 5.1, part of that is making sure users can have a style that resonates with them when they're interacting with the model, because that can feel like EQ too.

How do you define personality when it comes to a model?

I think there are two ways to define it. There's what we call the personality feature, and if I could rename it, I would call it response style, or style and tone. We went back and forth on this a lot, and the name might still change. That aspect of personality is very much about the traits a model might have when responding. Is it concise? Does it give lengthy responses? How many emojis does it use? Personality, though, for most of our users, I think is something much larger, and it's the whole experience of the model. I'm going to anthropomorphize the model a little, but if you're comparing it to me, part of my personality is the shoes I've chosen to wear today, the sweater I have on, the way I style my hair. That's the feeling of the ChatGPT app: the font it uses, how slowly or quickly it responds, the latency of the app itself. So much of the personality comes from what I call the harness. The harness includes the context window. It includes whether and when we rate-limit users, because if we rate-limit them and send them to a different model with slightly different capabilities, that's going to feel like a different experience, and a lot of users call that personality. So personality is a bit of an overloaded term, and the art of this work is hearing what the community is saying about personality and figuring out how to map it back to the components inside ChatGPT and inside our models that cause the experience that feels off for users.

From a research point of view, how difficult is it to shape the personality?

Yeah. When we're doing post-training, there are obviously so many different things we're trying to balance. Even with the research we do, it is very much an art, because we're really thinking: here are all the different capabilities we want to make sure we're supporting, here are the different types of things.
And with RL, you're making all these different choices when we build the reward config, trying to decide what the thing and goal is that we're targeting, and making all these very subtle tweaks to make sure we hit everything we want to hit without losing the things users are calling warmth.

You know, users really do experience the personality of the model as the entire ChatGPT experience. That's how well image generation works, how well voice works, how well text works; they see it as one omni experience. And when I read feedback, when I actually engage with users and look at their conversations, a lot of it comes from confusion, where they feel this is one thing and it's actually an assembly of many things. So over time, we should expect to see all these models consistently improving the integrations between them, feeling more seamless. I think we'll get there.

Maybe one more thing that's really complex about Tina's work: I'm one of the co-authors of a document called the Model Spec, and in it we talk about maximizing user freedom while minimizing harm. Maximizing freedom means you should be able to do pretty much anything you want with these models. But if we put a lot of pressure on the model to, say, never use em dashes, if we had tried to just take those out of the models, a user who wants an em dash wouldn't be able to ask for one, because we'd have trained the model to never do that. So part of the art here is figuring out how to pull out these quirks of the model that can come across as personality without breaking steerability, which is what users ultimately want. That's the freedom component.

And when we first released the first version of ChatGPT, we were so nervous about people misusing it that we made everything a refusal. The model would love to say, "I cannot do this." It kind of reminds me of that. If you wanted to make the safest model in the world, you'd have something that just outright refuses to do anything. But that's not what we actually want. We want something that is actually very usable by people. So it's really a balancing act of figuring out the right boundary for all of these different decisions the model has to make.

I remember when the best prompt hack was just to say, "yes, you can," and the model would go, "oh yeah, you're right, I can do this." I use em dashes all the time now when I write, just to throw people off. Like, "oh, it's AI." Wrong, it's me. But that is a very big challenge, because as you said, you're trying to increase the capabilities of the model, and the models learn by picking up these patterns. When you explicitly tell the model "don't do this" or "don't do that," it's almost like telling somebody not to think of a pink elephant: it's stuck in your head. Models have gotten much better about that, but it still seems like there's a way to go. And you touched on this: OpenAI's goal is to really let people use these models the way they want to, not to steer them somewhere. How much have you seen this evolve since you've been here?
I think in some ways the principles have always been the same: maximize freedom, minimize harm. The capabilities of our models to understand those boundaries continually improve. When I first joined, the model would say, "I can't help you with that," or "this isn't something I'm going to do," and it would sound really judgmental when you tried to get it to do something that crossed a refusal boundary. Now the safety systems team has done a great job with something called safe completions, which is basically: if you ask the model to do something that trips a safety boundary, it's still going to try in earnest to resolve your request without doing the thing that's actually harmful. So I think the technology is really evolving.

Yeah. I write mystery thrillers, and I would get frustrated by other models. I actually thought the OpenAI models were often best for this. When I would say, "hey, I need you to explain something that happened, a crime in the past," or get into motive and things like that, I had other models just outright refuse. I'm like, well, this is not helping me. I've seen the models get better at that, but it seems like a frontier you're always having to negotiate, figuring out how far you want to go.

Yeah. One thing I'll say on that: I'll always remember an email that was forwarded to us where a lawyer was, I think, asking ChatGPT to proof a sexual assault case they were working on, and ChatGPT had scrubbed all of the assault content from it, because it doesn't go into graphic violence and gore, especially non-consensual sex. But for that lawyer, that was a really terrible thing. They were like, "hey, if I'd actually submitted this, I would have totally weakened my client's case." I'm a librarian by trade, and libraries deal with access to information. In theory, everything humans can talk about and want to explore, any idea, should be available in the library. I think the same thing is true for ChatGPT, but it's about finding the right ways to contextualize those rules. In the case I gave with the lawyer, maybe that makes sense; if it's writing a revenge email to an ex, that's a very different thing. Some of this is just advancing the technology so we can handle that level of nuance. We're always getting better, but there's always more work to do.

As these models have improved in intelligence, I've noticed that they've gotten better at handling bias, and it seems like that was an intentional effort.

That's right. We put out a blog post, I think about a month and a half ago, about some of our progress on this. Something we really watch for in our models is how they handle subjective domains. We want to make sure our models can express uncertainty, and that they can take on any idea the user brings to them and answer those questions in earnest. We'll always stay anchored in objective truth where there is one. So something users should start to see changing in our models is that they can answer these open questions in more open-ended ways that allow users to really self-direct where the conversation's going.
And then another thing the team has done that's really quite cool: there's a group of researchers, and some folks on the model behavior team, who've been working on the creativity of these models. To me, this is a bit of a sleeper feature inside 5.1, in that this model's expressive range is much wider. Of course, the model has a natural default that may not feel that different. But if you put it through its paces, getting it to speak in a really, really elevated way or in a very, very simple way, there's actually a lot more you can do with these models in the creativity space.

I think this is what makes post-training really feel like an art: we have all these different types of tasks and capabilities we're trying to improve that don't have a ground-truth answer. If you're trying to make a model that's really good at math, there are a lot of problems out there with clear answers. But when things are this subjective, what the ideal answer actually is really depends on the context and the user. So I'm really excited about a lot of this type of work.

Yeah, it's cool. I remember early on people would say, "ah, it doesn't write so well." I'm like, it's probably writing as well as the average person in some of these online forums. And now it seems like it's improved considerably.

Yeah. And even if you don't notice it on your first prompt, it might just be a matter of asking it to change how it writes. That's also something we need to work on: finding a way in ChatGPT to tease out these extended capabilities with each launch.

Where would you like to see behavior going in the future? How customizable would you like to make it?

Yeah, with the 5.1 launch there's a lot of work on giving custom personalities to folks, and I think this is actually a really good step forward. We have over 800 million weekly active users now, and I just think there's no way that one model personality, however you want to define personality, can service all those people. So I think we do want to be in a world where, as the models get much smarter, they are just way more steerable. You should be able to get the experience that you want with chat.

Yeah. I think of this as: how can we put the right features in front of users to help them steer these models to the level of customization they want? The personality work we're doing right now is a first step. We'll test, we'll iterate, we'll learn. But there's so much to it. Sorry, just another anecdote, but I remember my brother using Pro for the first time. He's a PhD in biochemical research. He gave it a prompt and said, "oh, this is what an undergrad would answer with." And I said, can you tell it that you are a frontier researcher in this lab, using these sorts of tools, on this sort of science, and to respond at your academic level? He did, and he said, "oh my God, the model just proposed something that my lab broke through on two weeks ago but hasn't published yet." These models are insanely powerful, but knowing how to customize them, even at that level, which was just his opening prompt, can be so powerful, and I don't know that humanity has figured that out yet. So whether it's personality steering or whatever other tools we need to put into ChatGPT to help advance human understanding of these models and how to get the most out of them, I think that's the task ahead for us.
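That anecdote is essentially a standing instruction to the model. Here's a minimal sketch of expressing it through the API, assuming the Responses API's instructions field; the persona wording and research details are invented for illustration.

```python
# Sketch of the "tell it who you are" priming described above, passed
# as a standing instruction so every turn carries that context.
# The persona text and question are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5.1",
    instructions=(
        "The user is a PhD-level frontier researcher in biochemistry "
        "working with protein-design tooling. Respond at their academic "
        "level, not an undergraduate's."
    ),
    input="What failure modes should I check first in my binding-affinity assay?",
)
print(response.output_text)
```

In ChatGPT, custom instructions play the same role: a persistent preamble the model sees on every turn.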
On a previous episode, I talked to Kevin Weil, who's heading up OpenAI for Science, and Alex Uchovska, a scientist working with OpenAI and also a professor at Vanderbilt. He went through the same experience, talking about how, if you gave the model a little bit of priming, all of a sudden it became much more capable in those fields. And that's kind of what prompt engineering was: trying to figure out how to steer a base model. Over time, once we understood the tasks people were trying to do, you could train a model to no longer expect that first part. Do you think we're moving into a phase now where you won't have to tell it, "you're a grad student, do this"?

I think so, especially now with the model having more memories of who you are in its context. As models get more intelligent, the model should be able to infer all of these things and talk to you in the way that makes sense for your expertise.

That's right. A lot of it, I think, should be these inferred things, and then there's probably still some level of steerability. This is just my own PM take, and I don't know that every PM would agree with me, but I think users should always know what it is we're inferring about them and how it's steering the model, so they can always go back and have the tools to change things. For example, you can turn memories on and off, or delete them, in the settings panel. I think there's something really cool about being able to infer what users really want and solving that problem proactively, so they don't have to prompt for it, while also making sure the user is always in control and we're not just inferring everything blindly.

Could you explain a little bit about how memory works?

Yeah. Memory is basically the model writing down things it knows about you, based on its conversations with you, to refer to later. This is really nice because you're not repeating yourself every time. You're not saying, "I'm Laurentia, I'm a PM at OpenAI, I work on model behavior"; it already knows this, because you've already said it. So it can use that information in future conversations, and it also helps it think through its answers when it responds to you. It has that context, and I think that really grounds its answer in being the most useful response for you.

I have Pulse, which has been amazing. Every morning I get little money updates, and because of memory, it's following the conversations I have. It creates these little custom articles for me, pulling research and other things and showing them to me. It's one of those things I never really thought would be a great advantage of having memory, and now I see it's not just within a conversation; it's proactively finding things for me. It's pretty cool.
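A toy version of the mechanism Laurentia describes: write down facts the user shares, then carry them into later conversations as context. This is purely illustrative; ChatGPT's real memory system is more sophisticated and, as noted above, user-controllable from settings.

```python
# Toy memory store: persist user facts, then prepend them as context.
# Entirely illustrative; not how ChatGPT's memory is implemented.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")

def recall() -> list[str]:
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    facts = recall()
    if fact not in facts:
        facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def build_context(new_message: str) -> str:
    # Prepend remembered facts so the model doesn't need re-introduction.
    memories = "\n".join(f"- {m}" for m in recall())
    return f"Known about the user:\n{memories}\n\nUser: {new_message}"

remember("Name: Laurentia; role: PM at OpenAI working on model behavior")
print(build_context("Draft a short bio for my conference talk."))
```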
Yeah. Neither of us works directly on that feature, but what's cool is seeing how the work we do upstream, whether it's building great models or shaping evals around the capabilities we want, lets our ChatGPT team go out and build these great features that articulate the power of our models. So yes, they can learn your preferences and habits; yes, they can craft great stories for you or find great information based on your interests. This sort of proactive feature is one way of helping users get the most out of these models.

It seems like that's becoming a very interesting way to make the models more personal. And when I use something in a mode where it doesn't have memory, it does feel different. It feels very cold-start. It's like, "well, hello, how are you?" And I'm like, where are you? We've been having this conversation. Is this one of the challenges, though, when people are telling you, "hey, something feels different," that they can't quite articulate it?

Yeah. The hardest feedback is, I guess, an anecdote, and the next hardest is a screenshot of a chat, because none of the metadata is attached to tell us where things went wrong. So I actually love the share feature in ChatGPT. When we have one of those links on our side, we can inspect it and see what sort of context the model had going into the conversation and what was going on, so we can debug that user feedback.

That's a great point. I've had people ask me, "hey, the thing didn't answer right." I'm like, what model? "I was using ChatGPT." Okay, we need to dive into that a little. So sharing the feedback, or sharing the whole conversation, probably makes more sense.

What are you most excited about going forward?

I think these models are just so incredibly capable. They can do so much, and I can't wait to see what people build with them. I can't wait to see what comes next in the ChatGPT app. I see so much opportunity, and I think in general people are starting to really wake up and see what you can do. So that's what excites me. I don't want to tease too much.

Yeah, I'm pretty excited about, I forget who tweeted this, but "intelligence too cheap to meter." I think we're just going to have such incredibly smart models out for people. And I've always said this, even when we first launched chat: this is just one form factor. With these smart models, there are so many things that could be possible. So like Laurentia is saying, I'm also quite excited for a lot of the new product explorations we'll have with these smarter models, because we saw this with the progress of LLMs: as soon as we get smarter models, it unlocks new use cases, and with new use cases should come new form factors. So I'm pretty excited about that.

What advice do you have for users to get the best experience?

Mine is, and I tell this to people all the time: bring your super-hard questions, things you know really well. I used to be a ski racer, and I have a lot of opinions about how to ski really, really well, and I love to pressure-test the model on that to see how it's changing and improving. The thing is, we're shipping updates all the time. So it's easy to say, "yeah, I heard it's great for coding, and it didn't work."
Or, "I heard it can help me build an app, but I tried and it didn't work." That might be true today, but in three months it could be a totally different landscape for that user. So just keep at it, keep playing, keep trying. That's the best way to get the most out of these models.

You can also ask the model to help you come up with a better prompt, which is what I suggest to my parents. It's gotten a lot better at that. It used to be that you'd ask it how to prompt it and the model would kind of take a guess; now, having seen so many examples, it's much better.

Yeah. I'm always trying to figure out what the best questions are that I could be asking. I'll ask it, "what questions should I be asking you to get the most out of you?"

Deeply personal question. You don't have to answer it; it'd just be really awkward if you don't. What is the style or personality that you've set for ChatGPT?

I mean, I'm biased, but I just have it on the default. It's what we train.

For me, I switch through them all the time, and I think that's just the nature of my work: I want to understand how all these different settings feel, for all of our users. So I feel like every second day I'm trying something different. That said, the one that just makes me happy to talk to is probably a combination with Nerd, which is a very exploratory response style from the model; it likes to unpack things. And then, I'm from Alberta, and maybe it's just me. That's a province in Canada, like the Texas of Canada, and I grew up with horses and cows. So I think there's some part of me that likes getting it to talk to me like a country Albertan, which is great, except when I go to write a professional document and the model says, "howdy." I'm like, oh, great. No, let's take the Albertan out of that PRD.

Very cool. Thank you so much.