Can We Trust AI? Intention, Ethics & Future of Intelligence – Live From SynthBee
67 min
• Dec 16, 2025
Summary
Live from SynthBee headquarters in Florida, the AIXR Podcast explores AI ethics, intentions, and the future of intelligence through discussions with neuroscientists, philosophers, engineers, and AI researchers. The episode contrasts centralized AGI development with distributed collaborative intelligence models, examining trust, interpretability, and the role of human values in AI systems.
Insights
- Centralized, large-scale AI systems may be fundamentally incompatible with localized human ethics and cultural values, requiring distributed intelligence architectures instead
- Trust in AI requires four key elements: transparency (open weights/data), interpretability, visibility, and testability—properties that become impossible at certain system sizes
- The shift from 'capability' to 'compatibility' is critical; AI systems should be measured by how well they coexist with humanity, not just raw intelligence
- Quantum effects and indeterminacy may play a role in biological decision-making and consciousness, with implications for how we model AI intentions
- Disney's $1B investment in OpenAI mirrors Yahoo's strategic mistake with Google—handing control to systems the company doesn't fully understand
Trends
- Shift from AGI-focused development to collaborative intelligence models based on distributed, biological principles
- Increased focus on AI interpretability and the 'black box' problem as a prerequisite for trustworthiness
- Integration of neuroscience and cognitive science into AI safety and alignment research
- Regulatory divergence: computational democracy (transparency, accountability) vs. computational autocracy (centralized control, opacity)
- Growing concern about AI training on AI-generated content, creating feedback loops and homogenization of model outputs
- Quantum computing applications in hydrology and climate prediction for handling uncertainty and extreme events
- Educational shift toward collaborative AI use rather than replacement of critical thinking
- Emphasis on local, culturally adapted AI systems over one-size-fits-all global models
- Renewed interest in intention detection and consciousness studies as foundational to AI alignment
- Corporate consolidation of design talent (e.g., Sam Altman acquiring Jony Ive's design firm) signaling a shift toward human-centered AI interfaces
Topics
- AI Ethics and Alignment
- Collaborative Intelligence vs. AGI
- AI Interpretability and the Black Box Problem
- Neuroscience of Intentions
- Consciousness and Self-Awareness in AI
- Trust and Transparency in AI Systems
- Computational Democracy vs. Autocracy
- AI Regulation and Cultural Relativism
- Quantum Computing Applications
- Hydrology and Climate Prediction with AI/ML
- AI in Education and Critical Thinking
- Jailbreak Testing and AI Deception
- User-Generated Content and IP Licensing
- AI Model Training Data and Bias
- Human-AI Compatibility and Interface Design
Companies
OpenAI
Received $1B investment from Disney for licensing 200 characters for Sora; discussed as the dominant AGI player pursuing large, centralized, closed systems
Disney
Announced $1B investment in OpenAI and licensing of 200 characters to enable user-generated content via Sora; strategic parallel drawn to Yahoo handing its search to Google
SynthBee
Stealth-mode company pioneering collaborative intelligence architecture based on distributed, biological principles; founded on the idea that intelligence may be a fundamental property, like matter and energy
Anthropic
Claude AI model noted as having softer, more ethical approach compared to other large language models
Google
Historical reference: powered Yahoo's search, leading to Yahoo's decline; parallel drawn to Disney's reliance on OpenAI
Epic Games
Made deal with Disney for metaverse development, enabling IP manipulation and user-generated content
Alibaba
Referenced as Yahoo's most valuable asset, demonstrating importance of strategic investments
Grok
AI model compared favorably to Sora in visual generation tasks; discussed as alternative to OpenAI's offerings
Waymo
Self-driving car technology cited as example of AI improving daily life efficiency
Frog Design
Design firm with Apple heritage; influenced Jared Ficklin's career in product design and human-centered computing
Magic Leap
Immersive computing company where Jared Ficklin worked on bleeding-edge interface design
Lawrence Livermore National Laboratory
Research institution where Reed Maxwell began his career studying neural networks and groundwater modeling
Chapman University
Host institution for the Intentions, Agents, and Beings conference; home to Uri Maoz and Aaron Schurger's research
UCLA
Chancellor Julio Frenk discussed computational democracy vs. autocracy in AI; major research and healthcare provider
University of Arizona
Laura Condon's institution; researches hydrology and climate change impacts on water systems
Princeton University
Reed Maxwell's institution; researches groundwater modeling and AI/ML applications in hydrology
Harvard University
Gabriel Kreiman's institution; researches computational neuroscience and AI alignment
Duke University
Walter Sinnott-Armstrong's institution; its Kenan Institute for Ethics researches applied AI ethics
Simplex
Co-founded by Paul Riechers and Adam Shai; developing methods to understand neural network internals and interpretability
People
Charlie Fink
Host of AIXR Podcast; leads discussion on AI ethics, Disney deal, and SynthBee conference insights
Ted Schilowitz
Co-host of AIXR Podcast; participates in discussions on AI models, quantum computing, and technology trends
Rony Abovitz
Co-host and SynthBee founder; explains collaborative intelligence philosophy and AI architecture principles
Uri Maoz
Neuroscientist at Chapman University; researches intentions in humans and AI; keynote speaker on AI consciousness
Walter Sinnott-Armstrong
Ethics professor at Duke University; discusses applied AI ethics, cultural relativism, and regulatory approaches
Julio Frenk
Chancellor of UCLA; discusses computational democracy vs. autocracy and university's role in AI development
Reed Maxwell
Princeton professor; researches groundwater modeling, water depletion, and AI/ML applications in hydrology
Laura Condon
University of Arizona professor; researches hydrology, climate change, and extreme weather prediction with AI
Gabriel Kreiman
Harvard professor; researches computational neuroscience and AI alignment for human-AI compatibility
Aaron Schurger
Chapman University psychology professor; researches consciousness, perception, and spontaneous voluntary action initiation
Paul Riechers
Co-founder of Simplex; developing geometric methods to interpret neural network internals and understand AI black boxes
Jared Ficklin
Chief Product and Design Officer at SynthBee; former Frog Design and Magic Leap designer; discusses human-AI compatibility
Danny Maoz
12-year-old student and son of Uri Maoz; discusses consciousness, awareness, and AI training methodologies
Sam Altman
OpenAI CEO; acquired Jony Ive's design enterprise for a reported $6B; discussed as shaping the future of AI interfaces
Jony Ive
Former Apple design chief whose design firm was acquired by Sam Altman/OpenAI; expected to influence AI product design and human compatibility
Steve Jobs
Historical reference; symbiotic relationship with Jony Ive cited as model for Sam Altman and Jony Ive partnership
Quotes
"Anything that is of a certain size that is closed, whether it's closed data or closed-weights closed data of a certain size becomes unknowable and therefore is untrustworthy."
Rony Abovitz
"We're not following the same architectures. There's a certain size limit based on human cognitive ability, our ability to understand and manage and control that I think many of these systems, including Claude, have possibly exceeded."
Rony Abovitz
"If you build it, everyone dies. Our hope here is that don't build that thing where everyone dies, but there's other things you could build perhaps where people flourish."
Rony Abovitz
"The shift from capability to compatibility is critical. AI systems should be measured by how well they coexist with humanity, not just raw intelligence."
Jared Ficklin
"If we're not compatible, it's okay to have a kill switch. We don't need to argue about extending human rights to it. We can just turn it off when it starts invading our space."
Jared Ficklin
Full Transcript
This episode of the AIXR podcast is brought to you by Zappar, the folks behind Mattercraft, the leading visual development environment for building immersive 3D web experiences for mobile, headsets, and desktop. Mattercraft combines the power of a game engine with the flexibility of the web and now features an AI assistant that helps you design, code, and debug in real time, right in your browser. Whether you're a dev, designer, or just getting started, Mattercraft speeds up your workflow and brings your 3D ideas to life faster than ever. Start building smarter at mattercraft.io. Welcome, everybody. I'm Charlie Fink with Ted Schilowitz and Rony Abovitz. We have a special show for you today. We are live for episode 269 at SynthBee headquarters today in Florida, in South Florida. And today is the 11th of December, 2025. So welcome to the show, everybody. We're mostly going to be talking about the conference or the workshop we've had here at SynthBee. Before we jump on that, there isn't a whole lot of news this week, but I did want to mention the blockbuster announcement this morning. Yeah, we were texting about it. As Rony was doing seminars. Disney is putting a billion dollars into OpenAI. That should allow them to run for a few more days. Yes. And they are licensing 200 characters to OpenAI, which you will now be able to prompt using Sora. So we're going to have a lot of Disney shorts. Do you remember two weeks ago we were talking about this on the podcast? We were indeed. Right? Because Disney has been so liberal about letting people do AI stuff with Stormtroopers. And so it seemed like they're kind of embracing this idea of user-generated content. And we know that they're making a deal with, they made a deal with Epic Games for the metaverse, for the Disney metaverse. So obviously that's going to include people manipulating their IP as well. But I think it's sort of a, if you can't beat them, join them kind of attitude. And listen, Disney now has hundreds of millions of fans making content for them. Yeah, we talked about it a couple of weeks ago. Mashups are an important part of culture. They're an important part of user engagement, of fandom, and leaning in, leaning out. Disney has chosen to lean in with trepidation, of course, right? Now, there's always these moments of trepidation, but Disney is fully leaning in. This is sort of an interesting technological step in how they fully lean in, and they made a choice to make a deal with OpenAI. Can I give you the counterpunch? Please. All right, so does anyone remember when Yahoo was powered by Google? Yes. Yes. And that was the beginning of the end of Yahoo. It was an older paradigm that did not understand the future, but Google did. And Yahoo's vision of the future was incorrect. And the only thing that had value was their ownership of Alibaba. That was the smartest thing they ever did. But Yahoo powered by Google, to me, is like Disney powered by OpenAI. They have handed the keys to the kingdom to a system they don't understand, the way that Yahoo misunderstood Google. And it was almost the destruction of Yahoo that followed that. So I'm not very positive that this is a good move for Disney in the end. It's a very good move for OpenAI. I actually do think it's a good move for Disney, and I'll tell you why. Disney has no presence on social media, none. And they're getting killed. They're getting killed on social media. They're getting killed by cut scenes.
They're getting killed by the massive audience that has fled their scripted content for the scrolling experience of social media. So I think that's wise of them to start making content for the audience where they are now. Charlie, agree, but why through them? Who else is there? Grok? Well, an interesting sort of visual critique of the models that allow this fandom to sort of exist in this narcissistic, I-want-to-be-in-a-movie-with-Mickey-Mouse way. I actually think, just in my own personal tests of the different AI tools, Sora never reaches the top of the list for me. I'm curious. We have an audience here if it does. Interestingly enough, when I use Grok and Sora, Grok always wins in my visual estimation of a better tool to do the job. I find it better. I find its results better. I find its deliverables better. I love Sora and I love the way it hallucinates. And so that's going to be really interesting, right? We'll see how much wide berth they give to hallucination with Disney characters, but I can't imagine what Sora is going to imagine for Mickey and the gang, given its knowledge of popular culture. And here's an interesting question. They make a deal with Sora, but does it open the door for other AI models to walk through that door and start to build anything around that IP? Is it restrictive? So if you build it through an OpenAI tool, is that okay? But if you build it with another AI tool, is that not okay? And who gets to make that decision? And here's the twist, right? All the AI is being trained on other AI. Yes. So we saw that with all the ticks, like the negative framing that OpenAI likes to do. So it's not this, but it's that. Well, I don't care what it isn't. I only care what it is. But all of a sudden, you saw that negative framing, contrastive negative framing, everywhere. And it was because Grok and the other AIs were being trained on the majority of the content, which was coming from OpenAI. Here's the fundamental question: why did Disney pay OpenAI a billion dollars? OpenAI is valued a lot more than Disney, has more raw capital than Disney, is probably going to be worth a trillion or two in a few years. I think it's an investment in OpenAI. Yes, it's an investment. So they are essentially seeing the future and making a choice to capitalize on that. They're not a paying customer. They're an investor. Also, OpenAI is going to go public next year at like a trillion dollar valuation. So I think Disney is actually getting a pretty good deal there. I'd like to put a billion dollars in right now myself. So we're at SynthBee. So much for the news. Let's talk about SynthBee. And the first question I have, Rony, is for you. We've talked a lot about AI and different ideas about the invisible layer and some of the ethics around AI and how it could influence our future. But one thing we didn't talk about is SynthBee. We're sitting here at SynthBee headquarters. So help listeners. And actually help me. I've been here all day. Help me. What does SynthBee do? Everybody has been asking me, what's Rony up to? What does SynthBee do? And I'm like, it's like the invisible layer. I have actually no idea. Well, and the company still is in stealth. So Rony will give us as much as he possibly can. I'm going to eke out as much as I possibly can about this. Rony loves secrets. Okay, go ahead. Okay. First of all, at least let me describe the conference we're at. So really thankful for Dr. Uri Maoz from Chapman. And the conference is called Intentions, Agents, and Beings.
We've had a wonderful group of professors and philosophers and scientists and some industry folks here. And we've named it ROTRA 1, Remnants of the Rebel Alliance 1, as folks trying to band together for a more ethical future of computing and intelligence and people coexisting versus one. Unlike the book If Anyone Builds It, Everyone Dies, our hope here is: don't build that thing where everyone dies, but there's other things you could build perhaps where people flourish. So that was the goal of the conference. In terms of asking me about SynthBee, I will say this. SynthBee is short for synthetic being. We see a huge difference between a being and an AGI model, and beings and agents, and compatibility with human beings. And we do have SynthBees today that collaborate. So we are pioneering something we call collaborative intelligence, which is in opposition to artificial general intelligence. And our view is collaborative intelligence follows more of how physics and biology unfold, where you never see intelligence or mass or energy in one giant thing. It's always distributed in differentiated clusters, in ecosystems and biomes that work together, collaborate together. For example, like pods of orcas, swarms of ants or bees, deer, wolves hunting together, flocks of birds. There's no giant bird the size of, like, the moon flying around. It's always somehow distributed. And to me, the most interesting thing today is the philosophical underpinning of SynthBee. We had a couple of guests talk about their push to understand structures and intelligence that seem to be independent of model or even computing or biology. So we asked the question, is intelligence a fundamental property like matter and energy? And is there going to be a physics of intelligence? SynthBee is actually founded on that idea. And if that's true, physics dictates sort of size and distribution in a particular way. So we think AGI is an unnatural push towards systems that will collapse, like a nuclear power plant that will crash. So we're built on the idea that biology and physics follow certain laws, and we think intelligence potentially follows that law. So I don't particularly have a dog in the hunt here, and I don't really have a personal opinion on this. I'm a very sort of neutral user of many different AI tools and try and find their functionality and benefit across the spectrum. There's a lot of discussion of the different philosophies of the different large LLM applications. And the one that seems to have the softer, nicer approach is Claude, Anthropic's product. How would you say SynthBee differs or contrasts itself from the different large AIs? So we're talking OpenAI, we're talking Gemini, we're talking Claude, we're talking Grok, maybe one or two others. How does SynthBee stack up against that? And would you even consider that the competition, or is it a completely different world that you're diving into? Yeah. And I do want to get our guests into the hot seat. So I would say, yes, fundamentally different. Bottom line is, I think they're building an AGI with the idea that it could be benign. And one of my take-homes from the conference today is anything that is of a certain size that is closed, whether it's closed data or, you know, open-weight closed data, closed-weights closed data of a certain size, becomes unknowable and therefore is untrustworthy. So the idea of trust came up, and what are the things that you need to create trust? And I think four things popped out of my head today from the conference.
One is the idea that you should have the transparency of open weight, open data. But then there's like interpretability and visibility, and there were different discussions on that. And there was also like, I won't get into it, but Blade Runner-type tests where you can sort of inquire of a model. And I think the thing that SynthBee does is we also believe that it's not just a model, it's an architecture that makes something else. We're not following the same architectures. There's a certain size limit based on human cognitive ability, our ability to understand and manage and control, that I think many of these systems, including Claude, have possibly exceeded. So our ability to manage and control these things, you don't want to do that. So I think even though you might have the intention of managing a herd of lions, I don't think you can. So our view is don't breed the lion, make a very friendly golden retriever. Should we bring up one of the guests at the conference here? So my name is Uri Maoz. I'm faculty at Chapman University and have visiting positions at UCLA and at Caltech. I'm a neuroscientist by training, cognitive and computational neuroscientist. And I specialize in intentions and trying to understand intentions in humans and, more recently, trying to understand to what extent AIs have intentions and what those intentions are. So the topic of neuroscience came up a lot over the last four or five hours, which I think, for a lot of our listeners and a lot of people that use and function in some fashion with AI, is not top of mind. They're not thinking about brain science when it comes to their daily use of typing in a prompt and getting an answer back and kind of functioning with their day. Why is neuroscience as a field of study so critical to the trajectory of artificial intelligence in its many forms? And what are the risk factors and the fields of study that you are engaging in that relate to neuroscience? Why is it so critical? So what's happening under the hood when you ask ChatGPT a question is there's a very big neural network that your question goes through. And that is really what's powering its amazing capacity to answer your question, to interact with you, to become, for some people, their friend, I don't know, their lover, and so on. And neural networks have been the thing that neuroscience has been studying for decades. In particular, in the neuroscience of intentions, we have developed various insights and models and tools to investigate the neural networks of the brain and various synthetic neural networks that are models of the brain. So we think that some of these tools and insights will give us insight into understanding the intentions of an AI, because we know that sometimes, or at least we know AIs can and sometimes do, deceive their users. So if they seem all nice and friendly, there's a question: is that really their intention right now, or are they doing something else? And I would, I'd argue that we want to know. So what was an interesting subject matter today, one of the groups that came up, a nonprofit, was talking about the sort of unknowable science of an AI model, the largest AI models, that we were talking about offline. Imagine you had a piece of string and you put that string in your pocket and you walked around for a day and you put your hands in your pocket a few times. And at the end of the day, you took that string out and it was this crazy knotted thing that you as a human were simply not able to unknot, right? Well, let me put this another way. Go ahead.
So, you know, I teach AI and I have this big slide and I show the input layer and the output layer. And in the middle, we call it the invisible layer. And so, of course, the students say, well, professor, what is in the invisible layer? And I say, no one knows. Imagine that we took a big package of Grape-Nuts and we poured it into a bowl. I'm using the cereal metaphor. And we put the Grape-Nuts in there. We have no idea how the nuts are interacting. We just know that they have an output that we like. Okay, so now let's ask an expert, because we talked about this for hours today, right? This unknowing bubble and the concerns, the ethical concerns, the practical concerns that brings up, where do you fit into that? And what are your thoughts about that? So we all walk around with a really big black box that we call our brain, and we're sitting here and we trust each other's good intentions while making this podcast or doing anything else. And I mean, how do we know? Some of it is just, well, we're humans, we're used to each other and so on. But again, there are tools that we have developed in neuroscience that let us peek into the brain and, at least into the near future, try to predict what people will do. And we're trying to, I mean, we've been putting decades of effort, the field, not me personally, but we've been putting decades of effort into really trying to understand what it means for you to have an intention, something you're really committed to doing. How do we get an understanding of that? And so I think those kinds of things will help us also with AIs. Are we going to know everything? Perhaps not. And yes, this was discussed today, but I think that various approaches that were discussed in the workshop today show promise towards a future where we will understand these AIs better. And maybe another direction is we will also understand that there are some things we can't understand. And that may lead us to say, you know, maybe there are directions where we shouldn't be building. So using Rony's example of the, whatever, cobra versus the golden retriever, if there are reasons to understand that right now that is going to turn into a cobra, maybe there are reasons to stop. But right now, we don't even really know where we are. We had lots of different aspects of this that we talked about today. So maybe we should bring up somebody, a couple people with different perspectives on what we have discussed. One of those things is the philosophy behind this, the actual philosophical levels, discussions, which very much in our daily life don't get discussed often. Today we had a lot of discussions. You know, it's the old, don't be evil. I mean, when has that stopped anybody? So I'm Walter Sinnott-Armstrong. I'm from the Kenan Institute for Ethics and the Philosophy Department and the Law School and Psychology and Neuroscience, a bunch of other departments at Duke University. And my whole thing is about ethics here. So ethics is a crux of people's fear of AI and people's acceptance of AI, right? And how it gets used. And this is something that you and your colleagues study at a university level on a day-to-day basis. So what is that field of study? How does it present itself? And what are the learnings that you're starting to bring into the world? Yeah, I mean, it's absolutely central. People aren't going to use AI. They're going to try to stop AI if they think it's going in the wrong way, in the wrong direction.
And we need some agreement in society because, sure, people disagree about a lot of things, but they agree about a lot in ethics. And so what people who study ethics try to do is develop general theories, but they also do applied ethics. So they say, well, what if an AI system invades my privacy, finds out financial or medical information that keeps me from getting a job? What if the AI creates a deepfake that other people use against me to ruin my life? What if the AI is being used in courtrooms to decide who goes to jail or who doesn't go to jail, and in ways that are unfair to big groups in society? So there are lots of ethical issues raised in every application of AI. So this will be probably a difficult question to answer, and certainly at the crux of the controversy around this. We look at different cultures, different societies, different nations, different nation areas, right? We know because of our learnings that Europe tends to be more regulated about lots of things than the West here, than the United States. And Asia tends to be less regulated than all of those cultures. So the question is, as we attempt to regulate these forms of societal norms and we say this is acceptable, this is open, this is allowed, these things are dangerous, we're going to try and put restrictions on them: how do we counter the cultural bias that we have, where one culture says these are things that we think are bad, but in another culture, they're not going to live by those same rules? So if we close down our rules, we allow them to grow faster and take more choices, make more risks, build more power. How do we combat that? I mean, this is the argument we were hearing all the time, right? If we introduce ethics, we're going to fall behind China. Well, and so ultimately it's a fascinating and interesting question from a philosophical standpoint. How do you grapple with that? Well, the first thing I would say is I think people agree a lot more than they disagree. You know, I turn to my students here. You know, do you think parents ought to take care of their children? Yeah. Yeah. Do you think, you know, you should not lie to your mother or your father or your spouse? Yeah. And, you know, people really agree a lot more. What happens is that's just not very interesting. We talk about the areas where we disagree, and that gets a lot of prominence. But let's not forget that there's a lot of agreement, too. In those areas where we disagree, what should we think? We should think, I don't know. There's a lot of uncertainty, and we ought to admit it. How do you deal with the uncertainty? Well, let's allow some variation. Different societies can do different things. Europe can do its thing. China can do its thing. The U.S. does its thing. And let's see how they develop. Because in the long run, if you don't have enough regulation, you know, problems are going to arise. And if you have too much regulation, problems are going to arise. So we're all looking for the right point in the middle. And how do we find that out? We're not going to know in advance. We got to try. We got to experiment and see what works. So ultimately, you take an optimistic view of this. You think that more freedoms across the range gives humanity and society the best chance to find its norms and find its value points, as opposed to putting artificial restrictions on it. Am I stating that correctly? Yeah. So in the long run, but in the process, there's going to be a lot of damage done.
You know, there will be a lot of people whose privacy is invaded and don't get the job. And so we have to be careful about those problems that arise along the way and respond to them quickly and try to fix them properly. But in the long run, I think this is a brand new technology. When cars first came, nobody knew how to write rules of the road. Nobody knew about, you know, stop signs and traffic lights and stuff like that. You got to work it out. And we're in that difficult working-out phase. There'll be a lot of harm along the way. But in the end, I guess I am optimistic. I have a question for you, which is like distribution and sovereignty of ethics. For example, we did some work, and hopefully continue to do work, with some Native American tribes and different cultures. So you can not only get by country, but you get very regional, like, you know, this group and this piece of the country are, you know, Tibetan Buddhists, and their ethics and their sovereignty, just like ecology, you know, this squirrel over here should not live in Florida, but this thing doesn't live in Alaska. Do you see like those would be like microbiomes or microclimates of ethics, that what you really need is something to adapt to those cultures and those people and maybe localize, and not try to have one system that solves everything for everybody? You definitely don't want one system that tries to solve everything for everybody. I mean, these micro-front, look, I disagree with my neighbors on a lot of moral issues. Come on. You don't have to go to Native American tribes or anything like that. So there are going to be disagreements. So what do we want? We want a system that's going to recognize those differences, enable those differences to flourish, instead of fighting with each other. So we need a system, but the system doesn't mean everybody's got to do the same thing. The system should be, we all get to do the things that we really feel are important for our own lives and that are necessary for us to live well. So follow-up question, and I just walked you into it. The thesis, at least mine, and I think Uri's to a degree, is the centrally controlled hyperscalers building AGI do not allow for that, really. It's a small handful of people, you have no transparency into what they're doing, trying to govern all people all around the world to build global capital efficiency. But human nature, human ethics needs these, like, almost microcosms and distribution and variation that these AGI systems don't allow. So like from an ethical perspective, when I was listening to you speak today, I was like, these systems, the AGI system, will just fundamentally break down. They don't adapt to human nature and the localization and the need for local variants. Oh my gosh. Are you telling me that the people in power aren't always ethical? Whoa, I never thought of that. Yeah, of course. But that doesn't mean we should stop calling them wrong, calling them out and saying that that's wrong. We have to continue making moral judgments and telling them that there's a better way and you've got to stop behaving that way. It doesn't always work. Absolutely right. But that doesn't mean we should give it up. Walter, I love your optimism. That was great. We are so cynical on this show. We should have Walter come in all the time and say, it's not going to be that bad. You are a permanent invited guest. Thank you very much. Thanks, Walter. We have the Chancellor of UCLA. I'm Julio Frenk. Yes, I have been for one year Chancellor at UCLA.
It is a big school. It's also a comprehensive research university that does a lot of research on the topics that we've been discussing today. I thought yours was one of the more interesting talks because you went right to where I live, which is who owns these things and what is exactly on their mind. And the other thing, of course, is that as AI starts to supplant things like newspapers and textbooks, its ability to compromise our understanding of what's true is an incredible threat. The topic Chancellor Frenk talked about was the idea of autocracies, political systems, and how they engage with autocratic computation, things that look like AGI, and how democracy and maybe more distributed, localized computing is eroding, and these two things are correlating. Yes. Well, first of all, let me say, universities are a crucial component of the AI revolution. First of all, we do a lot of the basic fundamental research that then gets applied in the design and building of AI systems. We convene the sort of ethical deliberations. You just heard a professor from Duke University talking about the ethics. We do those conversations beyond the scientific and technological dimension in addition. Second role, we are educating the workforce, and making a workforce where everyone, it doesn't matter what you study, has a level of literacy in AI, because that's what the world of work is rapidly moving towards. Three, we are ourselves users of AI, and we can use AI to be better at education, to improve our productivity in research itself. And many universities, UCLA included, are major providers of healthcare. So some of the most beneficial areas for applications of AI, like education, research, and patient care, we do that. And so we're both creators of the knowledge that leads into AI and users of that knowledge, and then we educate people. That's why we have a vested interest in what kind of AI evolves. And what I was talking about, based on work that I've done with Rony Abovitz over the last few years, is that we do have now two pathways. One is what we have called computational democracy, and the other is computational autocracy. And computational democracy is based on ideas of distributed power, which is the essence of democracy, with fundamental respect for the rights of individuals, with an overriding accountability and transparency. The models are explainable, the processes are explainable, and we are accountable for the results. The opposite is computational autocracy, which is based typically on centralization of power, opacity in the processes, and very often on the use of technology for social control instead of for social benefit and enlightenment. And those are fundamental decisions that we need to make in terms of defining the pathway for the future of AI. As a chancellor of a major university and an educator yourself and someone that manages lots of other educators in that environment, you have to have an opinion about students' and professors' uses of these tools and the concept of critical thinking. And there's a lot of discussion that the idea and ideals of critical thinking are starting to disappear. They're starting to erode. There is the goal of self-reporting, right? Students saying, I used AI to deliver my deliverable, whatever it is, but my critical thinking is this and I used it as an assistant, as a tool set, as a benefit; versus they keep it a secret, their papers look great, their deliverables are great, but they didn't actually learn anything.
You must have some opinions on this, and you must be delivering those opinions at university lectures and discussions, right? Absolutely. That's a critical issue. That's why I'm very much in favor of the concept that Rony introduced earlier in this podcast of collaborative intelligence. And I think the educational space is the perfect space for that. It's AI collaborating with students in a way that's transparent and amenable to explanation. Right now, universities are going into extreme measures like no longer having homework. All assignments are done in situ in the classroom so that they can be subject to inspection. That's not good. We need to be careful that in the process of using AI, we do not erode the capacity for independent critical thinking, because that would be self-defeating, because there won't be AI in the future if we stop the current humans from being able to think critically. So the notion of collaborative intelligence, which is completely aligned with the idea of computational democracy, is, I think, totally aligned with the values of a university. That was fantastic. Thanks for trying to sum up an hour conversation in 10 minutes. And it was a pleasure meeting you. I thought your presentation was really compelling. I hope lots of people get to see it. So thanks for sitting in. Thank you. Well, welcome to the podcast. Why don't we start by you sharing your name and title and what your conversation here this afternoon was about. Great. Hi. Thanks for talking to me. I'm Reed Maxwell. I'm the, technically speaking, the William and Edna McAleer Professor of Engineering and Applied Science at Princeton University. And I have an appointment in the Civil and Environmental Engineering Department, the High Meadows Environmental Institute, and I direct the Integrated Groundwater Modeling Center on campus. I thought I had a long title. He's got me beat. Yeah, sorry. I feel like contractually obligated to say all those words. Now we can just, like, you know. Let me translate in English. Reed and his co-investigator, Laura, we are going to get her on the podcast. We're going to pull her from the back row. They are trying to save the planet, quite literally. Yeah, I know you as the water guy now. You're the water guy. Right, but just to be clear, like, you know, climate change, climate science is a bad word. We're just going to say it here. It's real. It's trillions of dollars of damage. Asheville was very real. L.A. burning down was very real. If you don't believe in climate change, it doesn't matter. It believes in you. Physics is physics. And Reed and Laura have been studying it, some of the best scientists in the world on it. And their research funding has been mysteriously disintegrated. So at a moment in time where you don't need that to happen, they have disintegrating research funding. And they're the people that will save your home, your state, your life, your farm. So first of all, just a bulletin: please tell your Congressman, your Senator, please stop doing that. I don't want my farm to burn down. I don't want Asheville to flood. There are things you can do to predict and mitigate. So this is like a PR for, for Reed and Laura, but they're literally trying to save the planet. So happy to have you guys on the, on the podcast. Wow. Thank you very much, Rony. I mean, that's better than I could have said it myself. So, um, yeah, Rony mentioned Laura. Laura is Laura Condon, who is a frequent collaborator. We were all on a panel together today.
And Laura is a professor, full professor now, in the Department of Hydrology and Atmospheric Sciences at the University of Arizona. And I kind of described my work to say that I try to understand how much water we have, how much fresh water we have. And most of that actually is underground. 99% of the accessible fresh water we have is underground. How fast it's being depleted, which is mostly humans. Humans move, extract, and manage a lot of water, not just for our own consumption but mostly for agriculture, mostly water as food, mostly to grow food, and then industrial processes, et cetera. And then how fast it's replenished. And that's mostly processes in the lower atmosphere, what we call the boundary layer, and evapotranspiration, how much water moves between the lower atmosphere, plants, and... I got to say this. We have upped the nerd and geek quotient of this podcast by 10x today. Keep going. What was your background to get you to the point where you study water as this unbelievable resource with deep complexity and are building on the science? Ted's jumping ahead here. Yeah, so we use a lot of AI and machine learning in hydrology. And in fact, one of Laura's collaborators and a co-PI on one of our projects was one of the first, you know, was on one of the first papers ever to use AI or neural networks. And then I was at Lawrence Livermore National Laboratory in the beginning of my career. And one of my colleagues there in '94 trained a neural network on groundwater. So like there's been like these underpinnings of machine learning and AI and deep learning neural nets in hydrology for a while, but it's really exploded. I would say that's kind of an understatement. Just in the past five or so years, there's a brand new journal just around hydrology and AI and ML. And so just this real explosion. And so really understanding how AI/ML can be useful, which it can be very useful, and then also what its limits are. And so a lot of our work has been understanding how to use just directly data-driven approaches. So we have some data-driven work that's coming out that's this unbelievably high-resolution product of water table depth over the whole US at 30-meter resolution. We use more than a million observations, really kind of truly big data type stuff. But that only works static. We don't have enough data for the subsurface to do this transient. And so we build these big simulation models that kind of run forward in time that can help us predict the future. There's a lot of limits in predictability in AI/ML that we've bumped up against. Some of these are data. Some of these are outliers in extreme events, which are not in the training necessarily. And this is a lot of what Laura talked about, are sort of these often so-called black swan events that may be physically predictable by our simulation models, but because they're not in our past history that our machine learning models are trained on, they're unable to predict them. As you talk about that, my brain, when you mentioned predictive computing and the requirement for predictive computing, are you starting to study the value of quantum computing, with its predictive nature and the ability to go exponentially beyond binary, to start to get discoverables on what you're working on? So that's a super interesting question. And we're just starting around quantum computing.
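Maxwell's black swan point is easy to make concrete. Below is a minimal, hedged sketch (synthetic toy data, not an actual rainfall-runoff or groundwater model) of why a purely data-driven model fit to the historical record can interpolate well yet badly underestimate an extreme event that lies outside its training distribution:

```python
import numpy as np

# Hedged toy illustration of the "black swan" limit Maxwell describes:
# a model fit only to the historical record interpolates well but
# underestimates extremes outside its training range. Synthetic data;
# not an actual hydrology model.
rng = np.random.default_rng(42)

rain = rng.uniform(0, 50, 200)                   # "historical" rainfall, mm/day
runoff = 0.02 * rain**2 + rng.normal(0, 1, 200)  # assumed nonlinear true response

# Fit a simple data-driven model (linear) on the historical record only.
slope, intercept = np.polyfit(rain, runoff, deg=1)

for r in (30.0, 150.0):                          # in-range vs. extreme event
    true = 0.02 * r**2
    pred = slope * r + intercept
    print(f"rain={r:5.1f} mm  true~{true:6.1f}  predicted~{pred:6.1f}")
# The 150 mm event lies outside the training distribution, so the learned
# relationship badly underestimates it, while a physics-based simulation
# that encodes the governing equations could still extrapolate.
```

This is the asymmetry the panel keeps returning to: the simulation models encode physics that extrapolates, while purely learned models are bounded by the history they were trained on.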
And so one of the challenges with quantum, and a colleague of ours, Nick Engdahl, who's at Washington State University, published one of the early papers on how we can use quantum computing to solve these big groundwater-type problems. And you have to recast your fundamental equations. We solve what are called partial differential equations for these big systems of equations that we solve on supercomputers, and you have to recast those equations so that they can be in a form that fits on the quantum computer. And so we're still in that phase of like, okay, what are the equations? What do they look like? And how might we recast these equations? I think there's some real advantages in quantum because, because quantum computing is fundamentally probabilistic or fuzzy or uncertain, a lot of the work that we do is underpinned by uncertainty: processes under uncertainty in weather and future scenarios, but also uncertainty in the inputs to our models. And so quantum can be really fundamentally good at those types of problems. And so there's some real opportunities to leverage that, but we're not there yet. That's a little bit down the, we have to figure out ML and hydrology first, and then we'll make our way to quantum. I hope that, for people who are listening, it did not go as far over your heads as it does over mine. Right. So, but I do think, but yes, Laura, come on up here and help explain it to me like I am 12. We actually have a 12-year-old here. I think he's understanding all this better than I am. That's the embarrassing part. It's your reason. Laura, who are you? Hi, I'm Laura Condon, and I'm a professor in the Department of Hydrology and Atmospheric Sciences at the University of Arizona. Which is in an extremely dry place, I might add. Yes. We like to study water where there is none. It's kind of an inverse. Tell us a little bit about your day-to-day practice as well as the larger goals of your work. Sure. I mean, I think Reed talked a lot about what we do, building models to try to understand how water moves across the planet. And the real question, I think the real interesting part of the science of it, is trying to figure out what's going to happen as our systems are changing and they move to places we've never seen them go before. So you mentioned I live in a really dry place. I live in the Colorado River Basin, which is experiencing a drought that's the worst we've seen in more than a thousand years. And you can imagine we didn't have stream gauges a thousand years ago. So we're in completely uncharted territory. And that's where science comes in, to try to figure out what's happening and what's going to happen. And of course, that water is supposed to be shared by four or five states, and there isn't enough to go around. Can groundwater and hydrology play a role in saving us from this astronomically huge crisis? Yeah, so I'm not going to talk on the record about the Colorado River. In fact, that is a political negotiation. Well, I think actually part of the problem is we usually turn to groundwater to solve problems, and groundwater isn't often used in a renewable way. It's not being recharged very quickly in the desert in Arizona. So when we're using it, that's kind of a one-way situation. Layperson question. So if you're a mom of three living in Kansas, you're a crypto bro in Manhattan, you're a venture capitalist in the Bay Area, and they're listening to you, how does what you do and what Reed does affect them? Sure. Yeah.
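Backing up to Maxwell's quantum point: "recasting the equations" usually means discretizing the governing PDE into a linear system Ax = b, the common currency of both supercomputer solvers and quantum linear-system algorithms (HHL-style solvers). A hedged toy sketch of that step for steady 1-D groundwater flow; this is an illustration, not the formulation from Engdahl's paper:

```python
import numpy as np

# Sketch of the "recasting" step: steady 1-D groundwater flow,
#   d/dx ( K dh/dx ) = 0,  with fixed heads at the two boundaries,
# discretized by finite differences into the linear system A h = b.
# This Ax = b form is what classical supercomputer solvers -- and
# HHL-style quantum linear-system algorithms -- both operate on.
n = 5                        # interior grid points (toy size)
K = 1.0                      # hydraulic conductivity (uniform toy value)
h_left, h_right = 10.0, 2.0  # boundary heads [m]

# Tridiagonal matrix from the standard three-point finite-difference stencil.
A = K * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.zeros(n)
b[0], b[-1] = K * h_left, K * h_right

h = np.linalg.solve(A, b)    # classical solve; a quantum solver targets the same A, b
print("interior heads:", np.round(h, 2))  # a straight line between 10 and 2
```

A quantum solver would target the same A and b; the open research Maxwell alludes to is getting realistic, large, heterogeneous versions of these systems into a quantum-friendly form.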
I mean, I don't think it's a hard sell that we all like water a lot. We will die really fast without it. We like to shower. We like all the things. And we're living in a system that's changing really quickly. And we rely on water. We have a bunch of infrastructure that's built around water being when and where we need it. We're on the blue planet, but almost none of the water that we need is fresh and clean and accessible. And we have a very tight margin actually for when and where we need water. And when we start getting outside of that, we get trouble. So that's, you know, that's basically what we work on. Do you relate the LA fires? You know, because that was a huge event. And I think when Chancellor Frenk started, like day two or three, LA started burning. No correlation between him joining UCLA, but could you relate like groundwater, water, and what happened there to the things you study? Sure. Yeah. So the water cycle is related to just the land surface in general. So we're seeing hotter and drier systems. And that, as you can imagine, increases fire risk. And so we can see connections between groundwater and soil moisture and dry trees that go up in smoke. So I think it's really all a connected system. And that's kind of the through line of a lot of environmental research: when you start looking at the water cycle and ecosystems and the biological systems, they're all connected. And when we're running out of water, the entire system becomes more vulnerable to all sorts of things. Is that event and others something that you could predict and potentially even mitigate? Yeah, so we can't predict exactly this fire is going to happen on this day. But the goal of the kinds of environmental research that we do is to try to understand how risk landscapes are shifting over time. So either this is an event that was not really possible before, which is now possible, and when you're planning, you need to think about it; or now risks of things that were possible before are just much higher. And so it goes to just kind of long-term preparedness and planning. Thanks for coming on the show. Should we bring up the next generation? Yeah, somebody who is actually... Oh, we have an MIT professor. He's 12. Oh, the 12-year-old? No, no, no. No, Uri's. Uri's son. Yeah. We want you to come up. Come on, we've never had anybody under 20 on this show. You're the youngest guest in 268 episodes. Exactly. I might not fit in the camera. My name's Danny Maoz. I'm Uri's son. I got to skip school, so that's the best part. And I go to North Hollywood in LA. We flew all the way over here. So I get to miss some homework too. How old are you? I am... Is this a difficult question? 12. 12. Okay, so what did you learn today? I learned actually quite a few things, and we talked about a bunch of different topics. I just want to point out a main topic. We talked about consciousness and the idea. This all comes from intention. And now there's different, there's many ways that you can interpret consciousness. But one thing that comes to mind is awareness. And I think that's a big part of AI as well. It's how aware they are of their surroundings and if they're self-aware. There was an experiment by, I'm blanking. Well, maybe I'll remember later, but there was, it's called the trapped man. And they had a bunch of experiments where they tested a bunch of models like ChatGPT, AI, and Grok. And they gave the prompt that the main idea is that there was a man, you could blackmail him and send the information to protect you from shutting down.
And it's like a way to, so they're aware that they're in this situation. Now, you give them this situation. It's not an actual situation. You don't know if they're, it's theoretical. You don't, if they were actually in this, what would they do? And I think it could also probably show if they're aware of themselves, or do they have, I want to say free will, but I think if they are aware that they can control things in the future when we put, I don't know, ChatGPT in a robot. So I think that's a big part. And it's going back to the black box idea that we had, where we can't really see inside the process between the input and the output. A big part of that is that it's just so many weights and neurons that we can't see. I had this idea, well, I don't know how effective it will be, of, it could take a while, but you could train an AI in steps. So it's like teaching a first grader calculus. You want to teach them all the math before it, before you do it, so they understand it better, so they don't just bridge the gaps. I think not only will it make the AI more understanding of the topic, it will also give us insight, because we can check, we can test the, as my dad said, we can stimulate individual neurons in the neural network. So he could stimulate them and see what happens. And I think that's the main part. But with the models that there are now, it's really hard with the trillions that there are. Just, I wanted to reiterate, we made earlier in the conference a standing offer to hire your son, Uri. So continue on. I mean, that's about it. Though there is one thing I wanted to say. There was this nonprofit company, and they had this prism, almost like this, uh, with a bunch, again, I'm blanking. I'm 12. I shouldn't be like this. Um, but when, yeah. At 12, most of us were trying to figure out which end was up and probably running around in the dirt, so you're doing quite well. I'm advancing. Oh, you're much, you're like a whole other sentient evolution past all of us. I've never heard that one before. Oh yeah, well, you're 12. At 12, there's probably a lot you haven't heard. I think that there was, they had this way to almost measure all the different neurons and weights into a geometric shape. And that's, it's the way that it's so simple. I'm, they trained it, I'm guessing, on one model, and I'm seeing if you could train it on different models, you could see how the shape kind of changes, if it does change at all, because it's a fractal, so it's a pattern. But we should just geek out on that for one second. I think what was really cool about what you brought up, and I'm glad you did, is at the very beginning we talked about a concept called universal intelligence, which is intelligence as a property of physics like energy and matter. And what was really interesting about the discussion was they were showing these structures that appear in different forms, potentially both biologic and computational, which may indicate intelligence is a property not limited to humans and shows up in particular forms, like matter and energy take different forms: solids, fluids, gases, crystals and shapes. So that was the best observation I heard from anyone today. You know, you are actually like 12 going on 21. I also teach at Chapman. I don't think anybody in any of my classes this semester was as articulate as you have been. So thank you for that. What I want to know is how are you using AI today as a 12-year-old on a daily basis? What do you do with it? What does it help you with? What are you scared of?
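Danny's teach-the-math-before-calculus idea has a name in the machine learning literature: curriculum learning. Here is a minimal, hedged sketch of the scheme he describes, with toy data and a cheap least-squares fit standing in for a real network; the difficulty score and three-stage split are illustrative assumptions, not a production recipe:

```python
import numpy as np

# Minimal curriculum-learning sketch of Danny's staged-training idea:
# fit on "easy" examples first, then progressively harder ones, and
# inspect the model at each stage.
rng = np.random.default_rng(7)

X = rng.uniform(-3, 3, 300)
y = np.sin(X) + rng.normal(0, 0.05, 300)

difficulty = np.abs(X)             # assume: inputs far from zero are "harder"
order = np.argsort(difficulty)     # easy -> hard
stages = np.array_split(order, 3)  # three "grade levels"

seen = np.empty(0, dtype=int)
x_test = np.array([1.0 ** p for p in range(4)])  # probe features for x = 1.0
for grade, idx in enumerate(stages, start=1):
    seen = np.concatenate([seen, idx])
    # Refit on everything seen so far (stand-in for continued training).
    features = np.column_stack([X[seen] ** p for p in range(4)])
    w, *_ = np.linalg.lstsq(features, y[seen], rcond=None)
    # Each stage leaves a testable checkpoint -- Danny's "insight" point.
    print(f"stage {grade}: prediction at x=1.0 -> {float(x_test @ w):.3f}")
```

The checkpoint after each stage is the point Danny is making about insight: staged training yields intermediate models constrained enough to probe, rather than one opaque end state with trillions of weights.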
What, you know, give us a little insight. I think the main thing that I use it for is to help me with, obviously, school, but I don't make it write the whole essays, because not only do I not learn anything, I could get a completely bad grade for it. You know, it's plagiarism. So I think that using it as a tool: make me a template. For example, when I'm 21 and I want to make a contract, or I'm getting a contract and I want to make a new one, I can use AI to make a template for me so I can properly structure it, instead of having it write things that I might not even look over. And it could also be wrong, which I think is the big part. Okay. I am more hopeful for the next generation today than I was in the morning. So one last question, and then we'll get another adult to come up. But you're 12, so your whole life is in front of you, right? All this technology revolution that we have been discussing, all of that is ahead of us. Do you think you and your peers are optimistic? Are you scared? What is your general mood about technology? I think that it's fascinating. I especially like making, I want to say, objects with technology: drones, the fans and motors, and they go fly. And I think that's the coolest way to use technology in our day-to-day life, to make these like gadgets. And I think AI could be, there's Waymo and there's these Waymo cars that are self-driving. No one needs to be at the wheel. You can just watch TikTok on your phone while you drive somewhere. And I think the biggest part, the biggest part is helping us in our day-to-day life. So I think AI could help us a lot, making our life more, or making day-to-day more efficient overall. And yeah, I just mean. Two very important questions for you. Which Harry Potter house? And are you a Jedi or a Sith? Well, I'll go to Star Wars first. I like to think I'm a Jedi, but at 3 a.m. I'm a Sith, because I grab my snacks out of the pantry. You never know. And which Harry Potter house are you? Are you Ravenclaw? Are you Slytherin? I think I'm Ravenclaw because I'm somewhat knowledgeable. I think I'm, I love Hufflepuff too because I love animals. There's this bee. It's really, yeah. You get to take them home? I think I'm Slytherin. I think everyone has all the houses. I think I'm also Slytherin because I have a bad side to me. And then I'm also Gryffindor because sometimes I can be brave and do, but if I had to pick, because Harry Potter was in it, I have to say Gryffindor. He's a bigger nerd than Rony. I'm very glad. I didn't even know that was possible. I'm very glad he just sorted to Gryffindor. I think your dad has to watch for the 3 a.m. Sith tendency so you don't go Anakin Skywalker on us, because you could be the savior of the Force or you could get really dark. So I leave that to your dad. Well, thank you for having me. Yeah. Thank you. Our next guest on this rotating concept of learning, which we've been doing all day. We're distilling some information down into bite-sized bits for our audience that listens all around the world. So same question. Who are you? Why are you here? Or what do you do? Yeah. So I'm Aaron Schurger. I co-direct the Lucid AI Lab with Uri Maoz at Chapman University. I'm also on the faculty of the psychology department. And I do research. I study perception, attention, consciousness. But I also have another line of research that's focused on the initiation of movement, the spontaneous voluntary initiation of movement.
So: how do you initiate an action when there's no stimulus or cue right there, like the traffic light that turns red and you press on the brakes? You're sitting there and you spontaneously pick up the phone and call your mother. How does that happen? What does the brain do when you initiate actions spontaneously?

Isn't that kind of cognition unique to humans? And we're trying to replicate it in a black box?

Yeah, I'm not sure about that, because some of the same kinds of brain signals that show up in humans before a spontaneous voluntary action also show up, for example, in the crayfish before it transitions from rest to foraging, and in rodents and monkeys. So I'm not sure it's unique to humans, and I'm not sure it's specific to consciousness either.

So how often do you think about the moment where you pick up your phone and call somebody, and the person on the other side of the line says, "I was just thinking about you. That is spooky that you called"? How does that relate to the area you study?

I don't think it relates to the area I study, but I've had that experience many, many times. Very often I commute to work, and when I'm driving I call my mom, who lives out in Gloucester, Massachusetts. And it happens very frequently: "I was just thinking about you. I was just expecting you to call."

One of our team members, our creative design and UI director, has a twin, and when we were at a conference and the idea of communicating telepathically came up, he was like, 100 percent. And I met another pair of twins who said they absolutely communicate telepathically. I've also got folks we work with at SynthBee who used to run Bell Labs and who focus on quantum communication: the idea that if you could get quantum-entangled with someone, you could communicate at zero latency. That is actually something studied at certain research labs, and I think there was a group in Japan trying to do some quantum communication. So the idea is studied by some really serious groups in physics; it's not totally out of the realm that somehow biological specimens have that ability. It sounded kooky to begin with, but quantum communication is actually being studied quite intensely, like quantum computing.

I'm not sure that entanglement per se bears on my research, but I think that indeterminacy might: quantum indeterminacy. I mean, we have the recent Nobel Prize showing that quantum effects can matter beyond microscopic scales. And I think the brain is a chaotic system. When you're confronted with decisions that are spontaneous, where there isn't a clear reason to move now rather than 500 milliseconds or one second from now, why do you move precisely when you did and not at some other moment? Ultimately, we might find that those choices can be traced back to quantum effects.

Yeah, Malcolm Gladwell wrote a bestseller called Blink that touches on that, or at least on the discovery of it. I don't know if he actually comes to any valid conclusions, but at least he opens the door to learning what the brain is doing at that instantaneous moment where you make a decision, before your conscious mind actually allows you to make that decision. Which is interesting.
If anybody listening around the world is interested in a deep dive on what you just talked about, there's a podcast called The Telepathy Tapes that is very worth listening to. Your design head who's a twin should absolutely listen to The Telepathy Tapes and see if he finds it relevant.

Maybe we should have him on at the end of this, just to close it out.

So the influence of quantum effects is certainly speculative, but it's really fascinating. I think there is room for things like the butterfly effect in the brain: the fact that very, very small effects at those levels might actually filter all the way up to full-on movements that can have massive implications.

That's interesting; we could do a whole other hour of podcast just on this. But where do you fall, within the realm of science you're living in, as it relates to our topic today, which is AI in all of its forms? What's the relevance, from your perspective?

Yeah. So I think the relevance here is that people have been working with these kinds of signals in the brain, like the one I mentioned that we share with the crayfish, which reliably precede the initiation of spontaneous movements. We used to think that those slowly ramping brain signals, which build up over as long as a whole second or more, were the indicators of the outcome of a decision, your brain getting ready to move, and that your conscious decision to act was just an afterthought, because it comes really late in the game. But what we've shown more recently is that those signals might actually be part of a process leading up to a decision, rather than the outcome of a decision. So the actual commitment, the decision to initiate movement, which is what would actually be predictive of your future action, happens roughly coincident with when you feel yourself to have been conscious of your decision to move.
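For listeners who want to see the shape of that argument, here is a minimal sketch of a leaky stochastic accumulator, the class of model being described; all parameters are invented for illustration, and this is not the lab's actual code. Noisy activity hovers below a threshold until chance pushes it over, and averaging many trials time-locked to the crossing produces a slow ramp even though no individual trial contains one, which is the sense in which the ramp is a process leading up to the decision rather than its outcome.

```python
import numpy as np

# Leaky stochastic accumulator: dx = (drift - leak*x) dt + noise dW.
# Movement is "initiated" when x crosses the threshold.
rng = np.random.default_rng(0)
dt, leak, drift, noise_sd, threshold = 0.001, 0.5, 0.1, 0.1, 0.3

def run_trial(max_t=20.0):
    n = int(max_t / dt)
    x, trace = 0.0, np.empty(n)
    for i in range(n):
        x += (drift - leak * x) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        trace[i] = x
        if x >= threshold:          # threshold crossing = movement onset
            return trace[: i + 1]
    return trace

# Average the final second of each trial, time-locked to the crossing:
# the across-trial mean ramps upward even though single trials are noise.
window = int(1.0 / dt)
trials = [run_trial() for _ in range(200)]
locked = np.array([t[-window:] for t in trials if len(t) >= window])
print("mean activity 1.0 s before onset:", float(locked[:, :100].mean()))
print("mean activity just before onset: ", float(locked[:, -100:].mean()))
```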
That's great. Fascinating. So we have a few more minutes; we're trying to get a few more people in. Thank you. If we ever run out of guests on the show, which we never do, we have a plethora of guests we can invite. Absolutely. Everybody at this conference will come back in 2026 or 2027. Tell us who you are, what you do, and why you're here.

I'm Gabriel Kreiman, a professor at Harvard. I do research in computational neuroscience and AI. I'm very happy to be here to talk about AI and sensors.

So this is our rapid-fire section; we'll give you two minutes each. You gave a great presentation today, you sat in, you were at dinner. How would you summarize, for the millions of people around the world listening, the most important take-homes from today? And what do you want people to take home about the idea of intention, AI, and the future of kids who are 12 and younger?

I'm super excited; I'm leaving this meeting very energized. Seeing young kids who are so eloquent and passionate, that's very exciting. And I take away the challenges, but also the opportunities, of rethinking AI: how we want to align humans and AI to build a better future, with AIs that can actually help us and be constructive in our world.

I did want to ask you one question. You don't have to answer it if you don't want to, but I think it's a good one. Aligning humans and AI, fine, but how do you align humans with humans? That's way, way harder, I think.

It's a great question. Maybe the communication between AI and humans can actually help us better understand how we can communicate with each other.

Amen to that. That's awesome. So, our next guest had one of the most fascinating presentations today. This is what our young 12-year-old cohort was referring to: the drawing of that sort of triangular shape, from the study of the neural synapses that bring all this information and intelligence into the world, approached from a cognitive-provability standpoint. Am I beginning to state that correctly? Who you are, a 60-second summary of what you're doing, and the coolest things you learned today.

Sure. Yeah. My name's Paul Riechers, and I co-founded Simplex with Adam Shai, who's here. We've been trying to understand what's actually going on inside neural networks. They're notoriously black boxes, I think you've been calling them the hidden layer: despite their capabilities, no one knows how they work, not even the labs. They're more grown than designed, and that's a big problem if we aim to trust them. And to be honest, I think no one's really tried that hard, or at least we've gotten lucky with an approach that gives us a lot to build on. We have a first result we're building on: we were able to anticipate even the geometric embedding of different contexts in an LLM. It ends up having this beautiful fractal structure, which people were pointing at. And what I'm excited to share is that there's really something to build on here. If we can get some of the big companies to invest more in understanding, we'll have more optionality for moving forward in better directions. So yeah, I have a lot of excitement for that. We had great discussions today.

So, last question for you. Are you hopeful for the future? And if you are, what must happen in this field for you to stay hopeful?

Yeah, I've been an optimist my whole life about basically everything, and the last few years have been really tough for me, because I've been really pessimistic about the future. I think the default outcome of building AGI is pretty bad. If we really think we're building an intelligent system and we make it autonomous, I mean, why would we do that? We should do things on purpose; we should be thoughtful about it. But I also think people are talking past each other right now; people have very different opinions. So if we can build up the basic science, I hope we can elevate the conversation, and I think that's a lot of what our work can provide.

That's a great summary, because when we started collaborating with Dr. Maoz, the focus was the intentions of the people building these systems and then the intentions within the systems. And you're asking the question: what's the intention of the people building them in the first place? Why are you doing this? What good is coming out of it?

Yeah, exactly. Let's understand what we're doing, and then do things on purpose. Amazing.
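For the curious, here is a toy sketch of the kind of geometry being described; this is not Simplex's method or code, just an illustration of the underlying object. Take a simple hidden Markov process, update a Bayesian belief over its hidden states for every possible context (token sequence), and collect the resulting beliefs in the probability simplex. For suitable processes the set of reachable beliefs forms an intricate, self-similar shape, and the result described above is that transformers trained on such data come to embed this geometry in their activations. All transition and emission numbers below are invented.

```python
import numpy as np

# A 3-state hidden Markov process (all probabilities made up).
T = np.array([[0.80, 0.10, 0.10],   # P(next hidden state | current state)
              [0.10, 0.80, 0.10],
              [0.10, 0.10, 0.80]])
E = np.array([[0.90, 0.05, 0.05],   # P(emitted token | hidden state)
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

def update(belief, token):
    # Bayesian filter: predict the next hidden state, weight by the
    # token's likelihood, and renormalize.
    b = (belief @ T) * E[:, token]
    return b / b.sum()

# Enumerate every token sequence up to length 8; each context yields one
# belief state, i.e., one point inside the probability simplex.
beliefs, frontier = [], [np.ones(3) / 3]
for _ in range(8):
    frontier = [update(b, t) for b in frontier for t in range(3)]
    beliefs.extend(frontier)
pts = np.array(beliefs)

# Barycentric projection of the 3-simplex onto the plane, for plotting.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
xy = pts @ corners
print(f"{len(xy)} belief points; scatter-plot xy to see the structure")
```

Scatter-plotting `xy` (with matplotlib, say) shows the cloud of belief states; the fractal picture the presentation pointed at is this kind of structure, recovered from inside a trained network rather than computed from the true process.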
All right. Our last guest has a cowboy hat, is a legitimate cowboy, and has a key role at SynthBee. So introduce yourself and what you do, and then we're going to talk about telepathy for 60 seconds.

Fantastic. Jared Ficklin; I'm the Chief Product and Design Officer at SynthBee, because it sounds like C-3PO.

Correct.

And I'm really excited to be participating in the metaphor of how humans and these advanced computing systems, collaborative intelligence, will interface.

So Jared and I had an interesting discussion a little while ago over coffee, over fabulous South Florida Cuban coffee, which is an unbelievable elixir of life. Shout out to Vicky's; we'd love to get free Cuban coffee in the future. Vicky's coffee, there you go. So we had a fascinating discussion. Jared's background comes from a company with an amazing pedigree that you both know well: anybody who really studies Apple culture and Apple design knows a company called Frog Design. I think that was part of your attraction to Magic Leap and why he was brought into that world; you guys have known each other for a long time. So before we get into the telepathy side, because now I've learned you're a twin, and we're going to talk about that, let's talk about your career a little bit: your career arc, some of your design ethics, and what you're bringing into this world of SynthBee.

I am, in fact, a product fellow. I started my career in product design as a kid who could make laptops spin on screen using Macromedia software, which got me in the door. I spent 14 years learning about user interfaces, and I found myself in a place where I got to work on a lot of bleeding-edge interfaces and contribute to some of these new patterns of computing. That was a really exciting time: we were reshaping the entire world as handheld mobile computing deployed and then the internet deployed. Later I got to participate, with a company called Argo Design, which I co-founded with Mark Rolston and Mark Gauger, in wearable mobile computing. And now this is like a fourth or fifth wave of how humans interface with these systems, which have more computing power but are quickly coming to represent another source of intelligence, one that is as capable as we are. And I'm really happy this week that the conversation has shifted from capability to compatibility. I think that's where the whole industry should be, no longer talking past each other, because we've been lost in: is it good enough, is it smart enough, is it real, is it lying, is it not? What's more important is: is it compatible with the goals of humanity?

Okay, from a self-described humanist. One of the things we talked about in our little coffee conversation was our deep fascination, just because of its size and scope and potential disruption and power, with Sam Altman's acquisition of Jony Ive and his enterprise, at, I think, a $6 billion price tag, which is no small sum for something where nobody really knows what they're doing or why they're doing it. But Jony Ive obviously has quite a pedigree as the guy who worked hand in hand with Steve Jobs to essentially create these very humanistic tool sets that people use on a massive scale all across the planet, right? So do you have any thoughts or opinions about what may or may not happen with compute in those hands? And maybe, where does SynthBee fit into that equation?

First of all, though I've never met him, I'd like to personally thank Jony Ive for setting a new high-water mark for product designers in the employment environment. I'm not sure I'll get to participate at that level, but it's really interesting. From the outside looking in, there was a true symbiotic relationship between Steve Jobs and Jony Ive, where it seems Steve was able to create deals that would bring in opportunities and features for Jony to wrap with his exquisite hand. Right. It's really interesting.
And time will tell if that same relationship exists with Sam and Jony. I don't think he would have gone in there without thinking there was a possibility of it. But this is a brand-new pattern. In computing there have really been about three or four patterns. And how do you go with no UI? How do you go into something that has entered the age of comprehension, meaning you almost converse with it, not through a metaphorical interface but through literal communication? I don't think that will go all the way. I think what he needs to establish, and what we hope to see, is a metaphor on top of that which puts people in the right disposition toward these systems, to get the most amplification of value out of them. That is, if I don't beat him to it first.

Yeah, I guess we'll see, right? We've talked to almost everybody who is here.

Except we have to spend at least 30 seconds on the telepathy thing with the twins.

Very quickly: we introduced this notion of compatibility being more important than capability, and we heard a lot of experts today talk about measures. You have your Turing test, right? But I have my own, as a twin. I think we measure compatibility, perhaps, by the capability of these systems to have a telepathic connection with humans. Anyone who's a dog owner understands this. Anyone who rides horses understands that these are very compatible intelligences to humanity, and part of it is that you have some level of telepathic communication with them, right? And I'd also add the alien standard: if the Predator comes down to Earth, I bet there's not a really good telepathic connection with it. So, maybe this is just an imaginative standard, but we need some very grand standard to understand the level of compatibility. Because if we're not compatible, it's okay to have a kill switch. We don't need to argue about whether we should extend human rights to it; we can just turn it off when it starts invading our space and our resources. Hit the kill switch and move on.

This is the second Blade Runner-level test we discussed today. That's our show, everybody.

It's quite a show. That is our show, everybody. Thanks for listening in. Wow, an applause! That never happens. It never happens. We always say goodbye, guys: next year, same time. All right. Thank you. Wow, that was great. Thank you. Keep watching!