⚡️ Prism: OpenAI's LaTeX "Cursor for Scientists" — Kevin Weil & Victor Powell, OpenAI for Science
OpenAI launches Prism, a free AI-native LaTeX editor designed to accelerate scientific writing and collaboration. The episode features Kevin Weil (VP of OpenAI for Science) and Victor Powell (Product Lead) demonstrating how AI can be embedded into scientific workflows to help researchers focus more on science and less on document formatting.
- AI acceleration in science will follow a similar trajectory to software engineering - from early adopter tool to essential workflow component within 12-18 months
- The key to AI adoption is embedding intelligence directly into existing workflows rather than requiring copy-paste between separate tools
- Scientific bottlenecks will shift from theoretical work to physical experimentation, driving demand for robotic labs and automated research systems
- OpenAI's strategy focuses on accelerating external scientists rather than conducting internal research, aiming to enable 100 Nobel Prize winners using their technology
- The progression from AI solving basic problems to frontier research happens rapidly once models reach 5-10% capability on specific evaluations
"Our goal is not to win a Nobel Prize ourselves. It is for 100 scientists to win Nobel Prizes using our technology."
"I think 2026 for AI and science is going to look a lot like what 2025 looked like for software and software engineering."
"Every million copy and pastes done in ChatGPT, there's probably some product to be built."
"If we're successful, then you end up doing maybe the next 25 years of science in five years instead."
"You go very quickly from this thing is just impossible for AI to do to oh my God, AI does this thing really well."
0:00
Okay, we're here at OpenAI with some exciting news from the AI for Science team. With us is Kevin Weil, who is, I guess, your VP of Science.
0:05
OpenAI for Science.
0:14
Yeah, OpenAI for Science. And Victor Powell, who is the product lead on the new product that we're talking about today. And with me is our new AI for Science host, RJ. Welcome. So thanks for having us.
0:15
Thanks for having us.
0:26
It's very good to be here.
0:27
Yeah, thanks for hosting us as well. It's always nice to come over to the office. What are we announcing today?
0:28
So we're launching Prism, which is a free AI-native LaTeX editor. What does all that mean? Because probably a lot of people on the pod haven't worked with LaTeX in the past. LaTeX is a language, effectively, for typesetting mathematics, physics, and science in general. So if you're a scientist writing a paper, you're probably not using Google Docs, because you have diagrams, you have equations, et cetera. And it's been the standard for decades. But the tools that people use to actually write LaTeX, write their papers, haven't changed in a long time. And in particular, AI can help with a lot of the tasks, right? Because you spend your time doing the science, and then you need to write it up. That's an important part of communicating your work. But you want that to be fast, you want that to be accelerated, and AI can help in a ton of ways, and we'll talk about some of those. But if you step back, right, it is OpenAI for Science. Our goal is to accelerate science, and the surface area of science is very large. So we're trying to build tools and products that help every scientist move faster with AI. Some of that is obviously the work that we can do with the model: making the model able to solve really hard frontier scientific problems, allowing it to think for a long time. But it's not only that, right? If there was a lesson from what happened over the last year with software engineering, it's that part of the acceleration came from better models, but part of it also came from the fact that you now have AI embedded into the workflows, into the products that you use as a software engineer. If we were going back and forth, copying and pasting code between ChatGPT and your IDE, that would be okay, that would be an acceleration. But the real acceleration came when you embedded AI into the actual workflow. And so that's what we're doing here with OpenAI for Science.
It's both building great models for scientists and also speeding them up by bringing AI into the workflow. That's what we're doing with Prism. Yeah.
0:33
I often say, like, every million copy and pastes done in ChatGPT, there's probably some product to be built.
2:45
Right, Exactly.
2:51
That's a good analogy.
2:52
Yeah.
2:53
That's a good way to look at.
2:54
It, especially with LaTeX, having written a lot of LaTeX papers.
2:54
Yes, me too. The number of hours as a grad student I spent, like, trying to get some diagram to line up. Exactly. And. Oh, man. Yeah.
2:58
And Victor, this is your sort of baby.
3:09
Yeah. I guess it started off as just a project. I left Meta about three years ago, looking for various different projects to start, and this was one where, when I presented it to people, they were like, oh, I get it, I see what you're doing. So I've just been focused on that, building it for about a year and a half, and, you know, it has now become part of OpenAI, and that's been very exciting.
3:13
Congrats. Thank you.
3:40
Yeah. So it's kind of a fun story, right? As we were thinking about this, we had this thesis: it's not just models, it's also building models into the workflow and accelerating scientists in that way. There are obviously a lot of different ways that you can do that, but scientific collaboration and publishing is definitely one of them. And I was looking around, like, what is there in this space? And there hadn't been a lot of innovation for a long time. It wasn't that different from when I was writing up my assignments and papers in TeX in grad school. And then I found, on some Reddit forum, maybe it was r/LaTeX, I don't remember, this thing about a company called Crixet. And I was looking around, I couldn't find who the founder was. It took me a little while. And then I think I found you on Twitter and DMed you out of the blue and just said, hey, I don't know if you want to talk about this, but I would love to, if you're open to it, and gave you my number. And we talked on the phone, then jumped on a Zoom, and eventually met in San Francisco and made it happen. That's right. It's awesome to have you guys here. I have a ton of respect for what you started to build.
3:42
I actually never heard that full story from you until now.
5:01
Yeah, you gotta find that Reddit user and thank them because, you know, it.
5:04
Might have been me.
5:07
I thought you were totally in stealth, because it was the hardest thing to actually figure out who the founder of this thing was. And then I was like, oh, for sure he's not going to respond to my random DM.
5:09
I mean, I guess that's part of it. Our focus has always been entirely on product, to the point where it's almost embarrassing how little we focus on anything else.
5:20
Well, it worked out for you.
5:30
Yeah. Also full circle for a moment for you using Twitter to do your business development.
5:31
Yeah, that's right.
5:35
So that's kind of interesting.
5:38
DMs forever, right?
5:40
Actually, yeah. Probably one of the most important social network innovations, I guess. And I'm sure you know a lot about that. Shall we go right into a demo?
5:42
Or talk about it? It's always fun to show it. Okay.
5:51
I'm a fan of, like, show, don't tell. Push people to the video.
5:56
Yeah. All right, I'll try and arrange this so you guys can see a little bit.
5:59
Yes.
6:02
All right, so what you have here, this is Prism. And what you can see on the left here, this is actual LaTeX. You can see why you might want AI to help you write it, because it's, you know, a language, and it's a little bit messy. And then on the right, this is my colleague's paper. Alex Lupsasca, he's a physicist; this is a paper that he wrote on black holes. And so you see it over here. You can imagine trying to write this in Google Docs or something; it'd be impossible. This is why LaTeX is super powerful. And you've got your files here that make up the project: the .tex file, which is the actual main source file, bibliography files, et cetera. And you can go through and you can change it, and then you compile that into the PDF itself. But here, at the bottom, you can use the AI, using GPT-5.2. And I could say, you know, this introduction, maybe I want a little help writing the introduction. So: help me proofread the introduction section, paragraph by paragraph. Suggest places where I can simplify.
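For readers who haven't seen LaTeX, the kind of source file sitting in that left pane looks roughly like this. This is a minimal illustrative sketch, not Alex's actual paper; the title, equation, and citation key are placeholders:

```latex
\documentclass{article}
\usepackage{amsmath}          % equation environments

\title{Symmetries of a Black Hole Wave Equation}

\begin{document}
\maketitle

\section{Introduction}
Scalar perturbations of the black hole satisfy a wave equation,
\begin{equation}
  \Box \psi = 0,
\end{equation}
building on earlier work \cite{example2024}.

\bibliographystyle{plain}
\bibliography{references}     % pulls entries from references.bib
\end{document}
```

Even this toy example shows why an editor matters: the science lives in the prose and equations, but the author also has to manage markup, environments, and a separate bibliography file.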
6:03
This is a live demo and we're working on it pretty heavily, so just.
7:23
Are you nervous yet?
7:27
You can't be nervous.
7:28
You're good. Spoken like a true founder. And so one of the nice things is, you could do this in ChatGPT, but you'd have to go upload your files into a chat, right? You'd be going back and forth. Here, because the AI is built into the product, it has all of the files that are part of your project. It automatically puts them in context. It works the way you think it would work. So here it's looking at the files, all right? And it's given us kind of a diff here. So it's suggesting changes. You've got the part in red, which is the part that it's changing, the part in green, what it wants to change it to. And you can see the different places where it is suggesting that we change things. So, okay, we'll just keep all of them. Right. YOLO.
7:29
Hope Alex.
8:18
Here's hoping.
8:18
It's all true.
8:19
Yeah, we're changing Alex's paper. What's the big deal? So here's another thing. We were talking about diagrams in LaTeX. So, say I wanted to input a commutative diagram, right? It's really easy to draw a commutative diagram like this. It is an absolute nightmare to put these things into TeX. So I will upload this photo and I'll say here. Whoops.
8:20
Is there a TeX bench for this kind of stuff?
8:46
Like a set of evals? Yeah, we totally need one. I think there's an opportunity to do that, for sure. So: here's a commutative diagram that I drew on the whiteboard. Can you make it into a TikZ diagram and put it right after the... I don't know, right after, right before, right at the top of the introduction section. Make sure you get the details right.
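For context on what the model is being asked to produce here: a hand-drawn commutative square typically translates into a `tikz-cd` snippet like the following. The objects and maps are generic placeholders, not the ones from the whiteboard in the demo:

```latex
\usepackage{tikz-cd}   % in the preamble

% A commutative square: the two paths from A to D agree, h∘f = k∘g.
\begin{tikzcd}
  A \arrow[r, "f"] \arrow[d, "g"'] & B \arrow[d, "h"] \\
  C \arrow[r, "k"']                & D
\end{tikzcd}
```

Getting the arrow directions, label placement, and spacing right by hand is exactly the fiddly work being delegated to the model.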
8:50
So I didn't want to interrupt you while you were typing, but why don't you use voice?
9:28
Oh, actually I should. And I totally could. Yeah.
9:32
No, but isn't it interesting that we all have these voice buttons and we don't use them? Yeah, it's not second nature yet. It's interesting.
9:35
And that one I totally should have. I was going to also show something. So here I am in the TeX, and it's working. You also can create new parallel chats, so you can have whole sessions with ChatGPT going in parallel. So here, I'll ask it. There's all these equations. We're talking about symmetries of this black hole wave equation, and in particular, there's this complex symmetry here. I like how it syncs. Yeah, yeah. Notice how it syncs when I highlight it. But I'll go to my chat so I can start doing this in parallel. I'll say: please verify that the H operator in the new symmetries section is indeed a symmetry of the stationary axisymmetric.
9:43
Did those questions make sense to you?
10:43
I have, but after that, Brandon is.
10:47
Actually a physics person. I'll say, don't do it in the paper, you know, show it here. I don't want it to actually like edit the paper. I just want it to prove it here.
10:50
Right, yeah.
10:59
Okay, so I'll get that going. Now, while we're waiting for the diagram to finish, we can also get another thing going in parallel. So I'll say: I need to write up a set of lecture notes on general relativity. You know, say I'm a professor, right? I'm teaching a class or something. Put together a 30-minute set of lecture notes on Riemannian curvature. Wow.
11:00
That's a very different task.
11:34
Put it into this file I made, the GR lecture .tex. Okay. And so I've got this going. All right, well, it came back on my earlier one, the H symmetry. Is it really a symmetry? You've got ChatGPT doing a whole bunch of work to verify that this is indeed a symmetry of the equation. Okay. It does. It confirms it. Right. So you've got the full power of a reasoning model that can think deeply about frontier science. And now we can go back while it works on the other thing. Okay, so this was where I was making the diagram, right? Put it right below the introduction. I'll compile it again.
11:35
So is it an auto compile?
12:18
Actually, you can turn that on.
12:20
Yeah.
12:21
Okay.
12:22
And look, it nailed it. So it looks like it got it pretty much exactly.
12:22
Just a small check of the details.
12:28
Oh, yeah.
12:30
Good enough for me.
12:32
Yeah, it's pretty good, but... All right, we can see if it'll get it right. Let's say the C vertex should be directly...
12:33
To your point about voice, though, I do think maybe over time the code might recede into the background more, as you're just interacting with the paper.
12:44
Yeah. You're vibe-writing.
12:52
Having a conversation with it.
12:53
Yeah. When you started this product, is this how you envisioned it would be used, or were there other design choices that you were considering and you didn't take that path?
12:55
By the way, before you answer, we have our general relativity lecture notes here.
13:05
That was quick. So 30 minutes.
13:09
This is six pages. Yeah. So a 30-minute section. Okay, so we've got curvature, covariant derivatives. This looks like a reasonable set of notes if you were going to go teach a class, right? It just did it for you.
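For the curious: any lecture notes on Riemannian curvature will center on the standard definition of the Riemann tensor in terms of Christoffel symbols. This is the textbook formula, not an excerpt from the generated notes; in LaTeX it reads:

```latex
% Riemann curvature tensor from the Christoffel symbols
R^{\rho}{}_{\sigma\mu\nu}
  = \partial_{\mu}\Gamma^{\rho}{}_{\nu\sigma}
  - \partial_{\nu}\Gamma^{\rho}{}_{\mu\sigma}
  + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma}
  - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma}
```

Keeping the index placement straight across equations like this is precisely the sort of transcription work being offloaded to the model.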
13:12
Or you can think like, you know, generate the problem set for this week.
13:27
Yeah, right. You've got work. So it's got some examples here. We could tell it to like work out solutions to the examples.
13:30
That's sort of a hidden feature of LaTeX too, that it actually makes it pretty easy to generate problem sets with answer sheets and things like this. There are so many cool features of LaTeX that I think are underutilized.
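One common way to do the problem-set-plus-answer-sheet trick is a boolean toggle that compiles either the student version or the solutions from the same source. A minimal sketch using the `etoolbox` package (the problem text here is a made-up example):

```latex
\documentclass{article}
\usepackage{etoolbox}

\newtoggle{solutions}
\toggletrue{solutions}   % comment this line out for the student version

% Print the solution only when the toggle is on.
\newcommand{\solution}[1]{\iftoggle{solutions}{\paragraph{Solution.} #1}{}}

\begin{document}
\section*{Problem Set 3}

\textbf{Problem 1.} Show that the covariant derivative of the metric vanishes.
\solution{Expand $\nabla_\mu g_{\nu\rho}$ in terms of Christoffel symbols
and use the definition of the Levi-Civita connection.}
\end{document}
```

Flipping one toggle regenerates both documents, which is the kind of workflow the speakers mean by "underutilized."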
13:36
Yeah. So anyways, you could see: we had it proofread the paper, we had it check some of the math to verify that our calculations were correct, we generated a set of lecture notes, and we added a diagram that we didn't have to actually type up ourselves, which, I promise you, is horrendous. And we did all of that basically in parallel. And you can imagine lots of other things. If you have a proof where maybe you have just the bullet points, you can say: here are the bullet points, now flesh it out for me. You can imagine having it check all of your references before you publish, making sure all of them are real and up to date. You can imagine having it generate your references based on the topic. There are so many areas where AI can help.
13:48
That's a big problem when you're trying to put together a paper: getting all the references.
14:33
Right.
14:37
Yeah.
14:38
Okay. And all of this is time that used to go to, you know, typing up a paper, not science, and now it can go back to science. And that's just one of the ways that we look at accelerating scientists all over the world.
14:38
Yeah, I would say definitely be careful about including references you haven't read.
14:52
Right.
14:57
That's the point. You can include 100 references, but if you didn't read them, then you might as well not have them. But yeah, I think that web connection is very important. And is this stock GPT-5, or...
14:58
It's GPT-5.2. And by the way, when you're looking at references, you can also ask ChatGPT to help you understand the reference: read this paper, tell me the relevance. So all of the things that you might want to do to accelerate your work, you can just do from within this interface.
15:10
You still have to do your work, but it should make it faster, especially even linking to the references, so you can go and verify, like, okay, this is the one.
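Checking that references are real comes down to the entries in the project's .bib file. A typical BibTeX entry has this shape; every field below is an illustrative placeholder, not a real citation:

```latex
% references.bib -- entry is illustrative, not a real paper
@article{example2024,
  author  = {Doe, Jane and Roe, Richard},
  title   = {Symmetries of Wave Equations on Black Hole Backgrounds},
  journal = {Journal of Mathematical Physics},
  year    = {2024},
  volume  = {65},
  pages   = {1--23}
}
```

A tool that can open the cited paper and confirm each field against the actual publication addresses exactly the "did you read what you cited" problem raised here.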
15:27
So this might also make it easier to write the paper as you do the work.
15:35
Right.
15:39
Rather than, oh, okay, now I've got to spend two days in LaTeX land.
15:40
Yeah.
15:45
Like trying to get my paper right.
15:46
Like a tool for thought rather than.
15:47
Just a publishing tool. Yeah, yeah, yeah, yeah.
15:49
What about collaboration?
15:50
That's a great question. Yeah. I mean, you can speak to this. Well, it's built for collaboration, so you can bring on as many collaborators as you want, which is nice. I think most other tools in the space have hard limits and charge you money and other things. In Prism, it's as many collaborators as you want.
15:52
For free. Commenting?
16:10
Yeah. So you've got commenting, you've got all the kind of collaboration tools that you would want.
16:12
Good.
16:17
And then any other engineering choices? Like, what might engineers not appreciate when just looking at a tool like this? Often it would be something like the multi-line diff generation that you need to do, because you're editing a pretty complex document.
16:18
It does get pretty complicated. I mean, let me know if I'm getting too technical, into the weeds, but we're relying heavily on Monaco, the JavaScript editor library.
16:32
I'm very familiar with the lack of documentation of Monaco.
16:43
It's interesting you say that, because it's very true. It's an extremely powerful library that is almost entirely undocumented.
16:46
But you can use Codex now to generate the documentation for you.
16:55
Yeah. You'd think Microsoft would get on that. But yeah, just stuff like that. I like to hear about the behind-the-scenes of building something like this. What do you struggle with? What's the model surprisingly good at, and what should the model be good at but isn't?
16:59
What were some of the hardest problems as you were building this in the first place? What are some of the hardest things to get right?
17:13
I think initially maybe one interesting challenge was that we really pushed on it being WebAssembly and fully running in the browser at first, the entire LaTeX compilation. And that did help us, in the sense that we were able to flesh out the design and the AI capabilities early on without having to invest heavily in the backend infrastructure. But eventually we did hit a wall with that approach, and once we switched to backend PDF rendering, that's when we really started to hit an inflection point with usage.
17:19
Fast. Yeah. I think the AI in here also benefits a lot from everything that we've learned building Codex. And as we go forward, I think we'll likely just integrate the full Codex harness into the application here, so you get all the benefits of the tools and the skills and all the things that Codex can do today, and you sort of automatically bring that into your environment here.
17:48
Yeah. Is there a future where they're just the same app?
18:13
Maybe. I think potentially. It depends. I mean, here's the reason I'm hesitating: I think the interesting thing with this and with Codex is that we're still mostly in a world today where your main screen is your document, and then you have your AI on the side. But the more that AI improves, people trust it and they're just YOLOing it, right? You're generating code, and looking at the code is sort of secondary to instructing the AI and driving from that. The UI probably changes for all of these things, right? You don't need your document front and center, because you're actually not looking at your document as much. That's sort of your backup, and your interaction with your AI is primary. And as that happens, I think these UIs can kind of converge over time. So we'll see. But I definitely would love to see a world where people spend less time thinking about the actual syntax and much more about what they're trying to create. Yeah.
18:15
I mean, I feel like this plus a notebook would be amazing. Something the AI can run: run an analysis, generate plots, oh, stick that in the paper here. Or: read this part of the paper, take that equation, and, you know, do something with it. That would be a really amazing integration.
19:21
Yeah. Like, think through the different corollaries of this thing from this paper and produce some alternatives. Yeah, I completely agree.
19:46
Yeah.
19:55
I do think that's sort of the progression: doing work for a few seconds, versus maybe we're already at a point where it's doing work for a few minutes, eventually doing work for hours, days, coming back with very complicated analysis.
19:55
Yeah, I mean, that's actually maybe a good segue into some of the other questions that I had about your initiative. So, stepping back to AI for science in general, can you talk a little bit about it? I have a million questions, but maybe start with this: I feel that validation of AI for science is critical to its success, right? You have to have some sort of real-world validation of the results that you produce with your AI. So, I know there's been some publicity in the past, but what are the latest and greatest hits of the things that big labs, or any lab, are doing with OpenAI's AIs?
20:09
I mean, when you step back and look at the trend, I think that's the biggest thing. Because we can debate exactly... Like, you've probably seen in the last few weeks, even, there have been a bunch of different examples of GPT-5.2 contributing to open Erdős problems and things like that. And then you get into this debate of, well, was it really just very good at literature search, and it found an example over here and an example over there, and when you combined the two it was a sort of trivial step from there to the solution? Was that novel, or did it really do something new? And that's a legitimate discussion. But when you step back: two years ago, we were like, this thing can pass the SAT, that's amazing. And then you progress to, it can do a little bit of contest math and it can start to solve harder problems. Wow. And then you keep going, and it's starting to solve graduate-level problems, and then you have a model that gets a gold medal at the IMO. And now we're sitting here talking about it solving open problems at the frontier of math and physics and biology and other fields. The progression is incredible. And if you think about where we are today, then fast forward six months, twelve months, I'm very optimistic about what the models are going to be able to do to accelerate science. It's already happening. And if there's one thing that I've learned from my two-ish years at OpenAI, it's that you go very quickly from: this thing is just impossible for AI to do, it's too hard, AI can't do it; to: AI can just barely do it, it kind of doesn't work, and only early adopters are using it because it's not particularly reliable yet; to: oh my God, AI does this thing really well, and I could never imagine not using AI for this in the future.
It's like, once you start to get to 5, 10% on some particular eval, you very quickly go to 60, 70, 80. And we're just at the phase where AI can help in some, not all, but some elements of frontier science: math, biology, chemistry, et cetera. It just means we're right at the cusp, and it's super exciting.
21:02
So, I mean, fast forward a year.
23:26
Yeah.
23:29
You know, by the end of the year, we have AIs that can do a lot of this discovery process. Then the bottleneck becomes the wet lab.
23:29
Right.
23:39
So what are you seeing in that domain?
23:39
Yeah, by the way, I totally agree. We were talking a little bit about software engineering before, and the analogies. I think 2026 for AI and science is going to look a lot like what 2025 looked like for software and software engineering. If you go back to the beginning of 2025, if you were using AI heavily to write your code, you were sort of an early adopter. It kind of worked, but certainly not everybody was doing it. Then you fast forward 12 months, and at the end of 2025, if you are not using AI to write a lot of your code, you're probably falling behind. I think we're going to see that same kind of progression in AI and science. Today it's early adopters, but you're really starting to see some proof points in solving open problems and developing new kinds of proteins and things like that. But you're right: as it really starts to work, and I think this is the year it's really going to start to work, it shifts the bottleneck. And I think we're going to be talking a lot more about robotic labs and other things. Do you need to have a grad student pipetting things? No, probably not. Right now you do. But why shouldn't we have robotic labs, where you have AI models doing what they do best, reasoning over a huge amount of different information? They have read substantially every paper in every field and can bring a lot of information to bear to help prune the search tree on, say, a new material that you're trying to create. And then you have a robotic lab that can roll out a bunch of experiments in parallel, do them while we sleep, and then feed the results back into the AI, let it learn from them, design the next set of experiments, and go. And it doesn't even have to be YOLO science, right? To your point, you're verifying it as you go, because you have an actual lab building it in real life. But you can just do so much more in parallel.
You can think harder up front with AI to design the experiments, and again, prune the search tree so you're searching over a smaller number of higher-value targets. And then you automate the experimentation and turn it around faster. And again, this is acceleration. If we're successful, then you end up doing maybe the next 25 years of science in five years instead. So in 2030, we could be doing 2050-level science. And that would be an awesome outcome. The world is a better place if that happens.
23:43
Absolutely. We spoke recently with Heather Kulik at MIT, and one of the things she pointed out was that there's an element of serendipity to working in the lab that you lose. And so she was of the opinion that there's a class of problems, especially when you have a large search space, where robotics is going to really accelerate science, and there's another class of problems where experimental science will not move forward very fast even with robotics. And so then again, you're at a bottleneck. But I guess humans need something to do, so.
26:24
Well, what she said sounds totally reasonable to me. There are probably places where the humans are adding no value, because they're literally just pipetting a certain amount of a thing, or doing the same motion repeatedly in a bunch of different ways. And then there are places where it's less well understood, and you want the full flexibility of a really smart human thinking about the work that they're doing. By the way, the same is true in the more theoretical fields as well. This isn't about automating all the humans out of their jobs. This is about accelerating scientists. It's scientists plus AI together being better than scientists alone or AI alone. And I think the same is true whether you're talking about something that's happening in silico, proving a theoretical problem, or happening in the real world with a lab: find the parts where you don't need a human, and try to automate them as much as you possibly can, so that the humans can spend their time on the most valuable things.
26:55
Yeah, I'm very pro the in silico acceleration, because obviously you have more control over that, and you can parallelize and repeat and do all those things.
27:54
Yeah, I think there will be a huge amount of value in that. A lot of fields are heavily simulatable. Nuclear fusion, for example: they're running a lot of simulations before they do any particular experiment, because the experiments are very time-consuming and expensive. Yeah. But I'm excited to see what you can do when you have a loop between a very intelligent reasoning model that understands fusion and a simulation, and you get the model thinking about what parameters to set for the simulation, then running a bunch of simulations in parallel and feeding that back. You have that same sort of lab loop, except it's all in silico, running on a giant GPU cluster. Yeah. And then, when you've really gotten to the end of that calculation, you go run it IRL.
28:03
This is bringing it back to Prism. This is sort of a nice aspect: you're getting a more sophisticated view of your result.
28:55
Right.
29:02
Instead of just, you know, a chat output. And I would hope, as it develops, it becomes a way for a scientist to interact with the information before you kick off your nuclear fusion experiment for $10 million or whatever.
29:02
And the human can learn from more things, you get more data that you can look at and evaluate.
29:19
By the way, this fusion discussion makes me think: if one day OpenAI for Science gets serious enough and starts to self-accelerate, you should solve cold fusion and be your own power source.
29:27
Well, this is why we're so excited about this. Our mission is to bring AGI to the world in a way that's beneficial to all of humanity.
29:40
It's right there in the lobby.
29:51
Yeah.
29:52
You see it every day. You walk in, you see it.
29:53
Yeah, absolutely. Imagine if we had GPT-9, which I'm using as a stand-in for AGI, inside of ChatGPT today. It would be awesome; you could do lots of things. If it could create new materials, and the devices we were using were all incredible and had 30-day battery lives and things like that. And we had personalized medicine, and we all knew someone whose life was saved because we were developing personalized cancer treatments so much faster. That's the real benefit of AGI. That's, I think, maybe the most tangible way that we're all going to feel AGI as it starts to be real.
29:56
Yeah.
30:38
And that's why this work is so mission driven for us.
30:39
So that brings up kind of two questions in my mind. The first one is: who owns the invention? And the other half of that is: does OpenAI become a drug company and a fusion company? Because, I mean, you laugh, but it's a little bit serious. All the AI-for-drug-discovery companies ended up being drug companies, with some exceptions now, because they couldn't sell the AI alone; they end up being drug companies because what they can sell is the drug. In any event, there's a lot of precedent for basically building your own portfolio using AI. So are you thinking about that angle, or is it, right now, let's enable scientists outside of OpenAI?
30:42
I mean, my personal belief, as we drive towards AGI, is not that we're going to create AGI and then we're all going to sit back and enjoy our universal basic income and write poetry. The future, especially in advanced science, is going to involve experts helping to drive these models. I don't believe that any one company is just going to do everything. It's why we're focusing first and foremost on accelerating scientists outside of these walls.
31:34
Right.
32:09
Our goal is not to win a Nobel Prize ourselves. It is for 100 scientists to win Nobel Prizes using our technology. And at the same time, I think there are places where, when you're trying to build for other people, you learn best if you actually try to go end to end on something.
32:10
Yeah.
32:26
Because then you're your own customer, and you understand it in a tighter loop than you would if you were purely building for people outside the walls. So I think it makes sense for us to take a handful of bets like that. But by and large we're going to partner, because the surface area of science is massive and we want to accelerate all of science.
32:26
Yeah, yeah. We're covering all sorts of disciplines, from chemistry to structural biology.
32:45
And we're releasing the first episode this week.
32:51
So material science, it's all over the place. There's a lot to do. One thing I did want to bring up is that AI for Science sits within the broader research org at OpenAI. And one of the more interesting things is self-acceleration, let's call it, where Jakub has very publicly declared that we'll have an automated researcher by September 2026.
32:54
Yeah, the beginnings of one, I think you said. Right. And it's like the intern version this year.
33:19
Right.
33:23
First product. And I'm sure you have more cooking internally. But why so soon? That's eight months away. And what's the goal there? Anything beyond that that you can share?
33:23
Yeah, I mean, eight months feels like forever in this industry.
33:34
AGI by then. Basically infinite time.
33:38
I mean, no, it's exactly what you said, right? If we can create a model, an AI researcher, that can actually do novel AI research, then we can move way faster. We will self-accelerate. We can discover more things quickly. We can apply GPUs and compute to moving our own research faster, and that just means we can improve our models at a faster rate. And every bit that we improve our models means that we are a step closer to bringing AGI, and all the things we were talking about, personalized medicine and new materials, we can bring these amazing things into the world faster. So it is about self-acceleration.
33:41
Yeah.
34:23
I think one thing I'm also trying to figure out is how close machine learning research, which is a science, or high performance computing, which is also something you guys do a lot of, is to the traditional hard sciences, let's call it physics and chemistry.
34:24
I think in a lot of ways it's a parallel effort to this, the work that we're trying to do with OpenAI for Science and accelerating other scientists. The internal parallel is that they're trying to build products and models for AI researchers, to accelerate them. So there's a lot of parallelism between these two work streams. They're similar in goal, just for a different set of users.
34:42
Yeah. Okay. Any parting thoughts, questions? Anything we should have asked?
35:11
Well, I hope everybody tries Prism. It's available today at prism.openai.com. It's totally free. You log in with your ChatGPT account and you can go build anything you would like.
35:15
We're really excited to see what people use it for. And if you run into issues or have any feedback, let us know.
35:29
I have a paper I'm gonna write really, really soon on that.
35:34
What are show notes in this thing? I don't know. Let's see what it does in LaTeX.
35:39
Yeah, totally.
35:42
Yeah.
35:43
Congrats on your first OpenAI launch. There you go. Congratulations.
35:44
Congrats. Thanks for having us.
35:47
Yeah, thank you.
35:48