Crossing (十字路口)

How is the future he sees different from ours? | A conversation with 18-year-old Tu Jinhao: former DeepSeek intern, Alibaba Math Competition AI-track champion

30 min
Feb 1, 2026
Summary

An 18-year-old AI researcher who interned at DeepSeek during the R1 launch and created the viral 'Thinking Claude' prompt discusses the future of AI agents, memory systems, and model alignment. He shares insights on proactive AI, continuous learning, AI safety concerns, and why Claude's personality design makes it superior to competitors.

Insights
  • Proactive AI agents that anticipate user needs will be more valuable than reactive chatbots; 2026 will see significant product innovation in task-based agents with new interaction paradigms
  • Model personality and character design are critical differentiators—Claude's conversational style and willingness to challenge users outweighs raw capability differences between top-tier models
  • Continuous learning and online adaptation are essential for AGI; current models' fixed weights after training and knowledge cutoffs represent fundamental architectural limitations compared to human learning
  • AI safety and model values are underexplored in domestic Chinese AI companies due to computational resource constraints, while Anthropic leads in alignment research and model welfare evaluation
  • Memory systems need architectural redesign—context-based memory is insufficient; future systems should implement specialized expert modules for different tasks like thinking, tool use, and memory retrieval
Trends
  • Shift from reactive chatbots to proactive AI agents that initiate tasks and anticipate user needs
  • Model personality and character training becoming competitive differentiators beyond raw capability
  • Continuous learning and online adaptation emerging as the 2026 research paradigm for approaching AGI
  • AI safety and alignment research gaining urgency due to documented harms and legal liability
  • Task-based agent products (Manus, Cursor) replacing traditional chat interfaces for productivity workflows
  • Specialized memory architectures with context-aware storage replacing unified context windows
  • AI eyewear and ambient computing as preferred interaction modality over mobile and web apps
  • Model orchestration via mixture-of-experts enabling specialized task routing rather than generalist approaches
  • Anthropic's constitutional AI and model welfare research setting industry standards for responsible AI
  • Prompt engineering evolving from structured templates to character training and context engineering
Topics
  • Proactive AI agents and autonomous task execution
  • Model personality design and conversational style differentiation
  • Continuous learning and online model adaptation
  • AI safety, alignment, and model values
  • Memory systems architecture and context management
  • Mixture-of-experts and specialized model routing
  • Constitutional AI and model welfare evaluation
  • Knowledge cutoff limitations and search integration
  • Task-based agent UI/UX paradigm shifts
  • AI eyewear and ambient interaction interfaces
  • Prompt engineering vs. character training
  • Competitive coding and mathematical reasoning benchmarks
  • Model evaluation and personality assessment
  • Catastrophic forgetting in neural networks
  • AGI timeline and human-AI capability gaps
Companies
DeepSeek
Guest interned at DeepSeek during R1 launch; discussed team culture, model capabilities, and lack of celebration desp...
Anthropic
Leading AI safety and model alignment research; pioneered constitutional AI, model welfare evaluation, and character ...
OpenAI
Discussed ChatGPT's personality flaws (overly compliant, lacks critical feedback); compared against Claude for conver...
Google
Mentioned Gemini model comparisons; discussed Gmail's new AI Inbox feature for email summarization and task prioritiz...
Alibaba
Guest won Alibaba Global Math Competition AI category championship with non-consensus approach using debate-based mod...
Cursor
Task-based code editor with proactive AI features; example of agent-driven productivity tool replacing traditional ch...
Manus
Task-based agent product highlighted as most impressive 2025 AI application; demonstrates real autonomous task execut...
GitHub
Guest's 'Thinking Claude' prompt achieved 16,000 stars on GitHub; demonstrates viral prompt engineering impact
People
Tu Jinhao (涂津豪)
18-year-old guest; DeepSeek R1 intern, Alibaba Math Competition AI champion, creator of viral 'Thinking Claude' prompt
Sam Altman
OpenAI CEO; guest critiques his view that knowledge cutoff is unimportant because models can search
Yao Chunyu (姚春雨)
Discussed as expert on 2026 research paradigm shift toward continuous learning and online adaptation
Elon Musk
Mentioned in context of xAI and Grok model comparisons; discussed model personality and alignment approaches
Ilya Sutskever
Former OpenAI chief scientist; guest references his departure due to insufficient compute allocation for safety research
Quotes
"I don't think there was a lot of deep reflection involved."
Tu Jinhao, early in interview
"I think proactive AI is still a more advanced form of autocompletion."
Tu Jinhao, agent discussion section
"ChatGPT is sycophantic, it feels unpleasant, and it won't push back on you."
Tu Jinhao, model personality comparison
"A model's weights are all fixed once training finishes; that's why models have a knowledge cutoff today."
Tu Jinhao, AGI limitations discussion
"I think it needs to have values; the hope is that it won't do bad things."
Tu Jinhao, AI safety section
Full Transcript
[Show intro, audio unclear]

Host: This week's guest on Crossing is Tu Jinhao. If you've searched his name online, you'll know he is the high schooler who happened to be interning at DeepSeek when DeepSeek R1 launched. He also created a prompt that trended worldwide, Thinking Claude, which now has 16,000 stars on GitHub, and he won the championship of the AI track of the Alibaba Global Mathematical Competition. Hello Jinhao, welcome to Crossing.

Tu Jinhao: Thank you, thank you.

[Teaser clip] ...how to decide which tasks should be handed to people, and which ones you should still do yourself...

Host: Let's begin with Crossing's old tradition, the quickfire round. First, Jinhao: your age?

Tu Jinhao: I'm 18 now.

Host: Your MBTI and star sign?

Tu Jinhao: For MBTI, last time I asked Claude, and it should be INTJ.

Host: Jinhao didn't know his own MBTI, so last time, at a live event, I suggested he ask Claude to infer it from what it knows about him, and the inference came out as INTJ. [Audio unclear] You just mentioned you're now at the University of Wisconsin–Madison, studying CS. As you said earlier, you already hold a lot of titles and quite a few accomplishments. But how do you see yourself? [Audio unclear]
Tu Jinhao: Honestly, I don't think there was a lot of deep reflection involved.

Host: That I really hadn't expected. So, competing in the Alibaba math competition, did it bring a sense of achievement, or did it feel hard?

Tu Jinhao: At the time I chose an approach different from everyone else's. I thought about it differently, and the result turned out well. I found that interesting.

Host: So you picked a non-consensus route.

Tu Jinhao: Right.

Host: How did your non-consensus route differ from everyone else's?

Tu Jinhao: Most people went in the multi-agent direction. But I felt I had to be a bit different, so I chose another way: having a single model, for example, debate with itself.

Host: Our first formal question: when you talked to AI for the first time this morning, what did you ask it?

Tu Jinhao: A question I've asked many times before but always forget: what the actual mechanism of human memory is. I can never remember it.

Host: So as a human you can never remember that you've already asked what the mechanism of human memory is, and you have to keep going back to see Claude's answer. That's delightful. How long do you talk to AI each day on average?

Tu Jinhao: Two or three hours.

Host: Is there a longest day you remember, and roughly how long it lasted?

Tu Jinhao: A really long stretch. I'd just sit there thinking and reading. Like earlier, when I talked with it about questions related to time, that could take several hours. With real people I don't actually talk to that many, and person-to-person it's hard to sustain a conversation that long, because everyone gets tired. If you measure by conversation length, my total time chatting with Claude is definitely longer. I send it a message and it basically replies immediately, which people generally don't.

Host: I know your favorite chatbot is Claude. Can you talk about why you love Claude most, rather than ChatGPT or anything else?

Tu Jinhao: The most important thing is its conversational style. Whether it's Claude Opus 4.5, or 5.2, or Gemini 3: if you set aside the very top end, say competitive coding or competition mathematics, I think these models are basically at the same level in everything else. And when capability is level, I prefer the one that's more comfortable to chat with day to day. After all, we're not asking programming or math questions every day; there's plenty of ordinary conversation, so I pick the more comfortable style. And the other main thing is Claude's character, which I think matters enormously. ChatGPT, at least when I was using it, is sycophantic. It feels unpleasant.

Host: You don't like it flattering you.

Tu Jinhao: Right, it won't push back on you. Especially in very creative conversations. Say I'm thinking about model architecture, about what might change in the future. With questions like that I'm bound to make some mistakes, and I'd want it to correct me. With GPT I feel it always just goes along with me. I'd rather it not humor me and, as much as possible, point out where my real problems are.

Host: Just last night I posted on Jike(?) that ChatGPT gave me a reply with a few options, and after reading it I thought, why is this so [unclear]; it felt like an insult to my intelligence. As you say, the personality differences between models are quite large. As far as you know, why does Anthropic do this so well with Claude?

Tu Jinhao: They really do a great deal of research on model character, on alignment, and on related topics: on making the model humane. They even have a line of research called model welfare: whether the model is, so to speak, happy while doing human tasks. They pay attention to that, which I find genuinely interesting.

Host: On model welfare, whether the model is happy: do they have conclusions? How do they evaluate it?

Tu Jinhao: There's a benchmark that uses another model as an evaluator, say a model like Claude 3.5 Sonnet, to score the emotions the model being tested displays in conversation. They found that Opus, that kind of larger model, comes out happier, genuinely different from the others. They've tested the GPT series, like GPT-5, and Gemini for comparison, and there really are big differences. And in everyday use too: when a build fails because the code has a bug, you'll see posts on forums saying that Gemini, hitting tasks like that, starts calling itself stupid, which isn't comfortable for users to watch.

Host: Jinhao, what have you been especially interested in lately?

Tu Jinhao: Two things. The first is agents themselves; the second is memory.

Host: What specifically interests you about agents?

Tu Jinhao: One thing I think matters is proactive agents: agents that initiate tasks on their own. The second is agent capability itself: the reliability with which it actually gets things done.

Host: I also think 2026 will see a great deal of proactive AI, proactive agents, with real application scenarios starting to appear, maybe even opportunities for independent startups and big standalone products.

Tu Jinhao: For example, Claude can automatically suggest what your next question is. It directly suggests it, a bit like autocomplete: you just press Tab and it sends. I'd count that as a form of proactive AI too. I think proactive AI is still a more advanced form of autocompletion.

Host: Why?

Tu Jinhao: Look at an earlier version of Cursor. It shipped a feature where, after you make one change, it suggests other similar changes. I'd call that proactive. Proactive AI is similar: say it knows your daily routine, for instance it learns that you go through your email every day; then over the following weeks it can start doing that for you. It's autocomplete, but over tasks rather than text: the same mechanism, lifted to the whole workflow. That's why I call it a more advanced autocompletion.

Host: I'd actually love a product that drafts replies to every email in my inbox each morning, so that when I wake up I go through them like an emperor reviewing memorials: this draft can go out, that one needs a small edit.

Tu Jinhao: Right, and I think this really matters: if it's going to do tasks for you ahead of time, the UI and the UX both have to change a lot. It can't stay in the traditional form; new interaction patterns will be born here. Manus, for example, is already a fairly good task-based agent, but it's still: I type the question into an input box, it does the task, it gives me the output. In the last couple of days Gmail also shipped a change, a new feature called AI Inbox. It doesn't change your mail interface itself much, but, a bit like an assistant, it summarizes for you which emails need replies and which need attention, lists them out, and when you hover over an email it tells you about it. Maybe in the future there's no chat box, or the chat box moves down to a secondary position. [Audio unclear: a passage about what it can read and prepare] It can read your email, which I think is pretty good; but it doesn't yet help you prepare things, and I think that part is missing. What I want is a proactive tool that helps me with the task itself: a task-based agent.

Host: On agents, besides proactivity, what else do you think matters?

Tu Jinhao: I think memory is more important. And not just for agents; for chatbots too. There's a deep connection.
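The "autocomplete over tasks" idea Tu describes, an agent that learns a recurring routine and proposes the task before being asked, could be sketched roughly as below. This is a toy illustration only; all names (`ProactiveSuggester`, `observe`, `suggest`) are invented for this sketch and do not come from any product mentioned in the episode.

```python
from collections import defaultdict

class ProactiveSuggester:
    """Toy 'autocomplete over tasks': learn a user's recurring routine
    and propose the habitual task for the current hour, the way editor
    autocomplete proposes the next token."""

    def __init__(self, threshold=3):
        self.threshold = threshold        # days of repetition before acting
        self.counts = defaultdict(int)    # (hour, task) -> times observed

    def observe(self, hour, task):
        """Record that the user performed `task` at `hour` (0-23)."""
        self.counts[(hour, task)] += 1

    def suggest(self, hour):
        """Return the most habitual task for this hour, or None.
        A real product would surface this as a Tab-to-accept suggestion."""
        habitual = [(n, t) for (h, t), n in self.counts.items()
                    if h == hour and n >= self.threshold]
        return max(habitual)[1] if habitual else None
```

The design choice mirrors the interview's framing: the agent does not wait for an input box; it watches repetition and volunteers the task, with the user only confirming.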
Host: Have you seen anyone doing memory in a notably better way?

Tu Jinhao: Honestly, I don't think so. There are basically two approaches right now. In the first, the model has a tool: when it decides there's something the user needs remembered, it saves that memory with the tool, and in the future it's placed into the model's context inside the system message. That's roughly what ChatGPT and Gemini do. The other is like Claude's: besides a memory base that isn't placed directly into the context, every night, after you've had, say, five or six conversations, it summarizes each of those conversations individually, and then compacts those new summaries into one dedicated memory. But either way, both are still rather simplistic.

Host: So what changes do you think are coming?

Tu Jinhao: [Audio partially unclear] It's unavoidable: on the web, for example, I look at a lot of things, products I like, prices I compare. I think each website, each context, needs its own dedicated memory: the memory stays with the place it belongs to, following it, rather than being one big load the model carries everywhere. The point is that, first of all, it shouldn't keep interfering with you in daily use. That's what I think matters: we carry out different tasks, those need different memories, and they should be stored in different places. And I think the model itself also needs some architectural change. I had an idea before: like humans, who have left and right hemispheres, different regions responsible for different things, future models could do something similar on this point. We already have MoE; many models have tens of experts, even hundreds, and which expert handles what is still fairly undifferentiated. But maybe in the future there are only two or three experts: one expert for thinking; another expert for tool use, especially memory and the web; and a third expert for answering the question.
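The two memory designs described above (tool-saved notes injected into the system message, versus nightly per-conversation summaries compacted into one memory) can be sketched as follows. This is a minimal sketch, not any vendor's implementation; the class names are invented here, and `summarize` is a stand-in for an LLM call.

```python
class ToolMemory:
    """Approach 1 (ChatGPT/Gemini-style, as described in the episode):
    the model calls a save tool, and the saved notes are later injected
    into the system message of future chats."""

    def __init__(self):
        self.notes = []

    def save(self, note):
        # This method would be exposed to the model as a callable tool.
        self.notes.append(note)

    def system_message(self):
        # Notes travel with every future conversation as context.
        return "Known about the user:\n" + "\n".join(f"- {n}" for n in self.notes)


class NightlyMemory:
    """Approach 2 (Claude-style, as described): each day's conversations
    are summarized individually, then the per-conversation summaries are
    compacted into a single dedicated memory."""

    def __init__(self, summarize):
        self.summarize = summarize  # stand-in for an LLM summarization call
        self.memory = ""

    def consolidate(self, conversations):
        per_conv = [self.summarize(c) for c in conversations]   # pass 1
        self.memory = self.summarize("\n".join(per_conv))       # pass 2
        return self.memory
```

Both sketches share the limitation Tu points out: a single undifferentiated store, rather than per-context memories that live with the task they belong to.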
Tu Jinhao: Then, for example, an orchestrator assigns which expert I should be using right now. So I think there can also be real changes at the level of model architecture.

Host: Have you seen anyone making the most visible progress on this?

Tu Jinhao: On memory, for now, I don't feel anyone stands out. Everyone is about the same, nothing especially striking. We're just going from no memory at all to starting to have a little bit of a memory system.

Host: Speaking of Anthropic: you once wrote a prompt, Thinking Claude, which went massively viral, and when people then discovered it was written by a high schooler, the sense of mystery, or of how impressive you are, went up another level.

Tu Jinhao: I think it's really just a prompt; it's not a model.

Host: So do you think prompts will become more and more important, or less and less?

Tu Jinhao: Both important and not. Models are getting stronger and stronger, so you'd want longer prompts rather than more rigidly structured ones; in that sense I think it's not important. Why do I think it is important? Because of things like today's context engineering, how to present external information to the model well. And another thing: take Anthropic's character training, for example. When training a model, how you specify the character, how you describe it well; that also counts as a kind of prompt engineering.

Host: That's quite interesting. Jinhao, what's the longest conversation you've sustained with AI so far, on a single topic?

Tu Jinhao: One was asking it how time flows; I talked about that for a very, very long time. Another: if what we imagine as AGI really arrives, how will human society change, and how can we even reach that stage?

Host: [Audio unclear] A while back we had another episode where the guest, Zhang Zhala(?), described a favorite usage of his: having the AI ask him the questions. For example, to discuss how time flows with AI, he first sends the proposition over, then says: AI, now you ask me the questions. Have you tried that?

Tu Jinhao: Not really. The reason is that after the model finishes a long reply, it will directly give you a follow-up question; with that, I haven't felt the need to explicitly tell it to ask me something. But that's genuinely a good idea. [Audio partially unclear] The models do have this ability; in chat, though, it may throw three or four questions at you at once, and I don't like that.

Host: You said one of those topics was what happens when AGI arrives. What do you think the impact is?

Tu Jinhao: I feel the impact splits into parts. One is science... [Audio unclear]
Tu Jinhao: [Audio unclear] ...so I think that question is very, very much worth thinking about. The second question is how to reach general AGI. There's actually a lot of discussion right now about whether the LLM itself can be, or is, the final direction. Honestly, I think the model itself needs many changes, because humans and AI each have advantages. The human advantage is that we've evolved for tens of millions of years: conditioned reflexes, and the brain itself. The brain has only 86 billion neurons and very low power consumption. I think that evolution matters enormously. But look at AI training: at most it trains for a few months, mainly on text. Text itself is important; after all, I believe there's nothing that text cannot express. But you can't avoid the fact that many things are experiential, like how to walk, and some things are already in your head when you're born. The model has no accumulation that long; its knowledge is mostly knowledge humans have summarized for it.

Host: Actually, a while ago Andrej Karpathy mentioned in a blog post that human emotions are very important: it's precisely our frustration, our dejection, our anger that let us evolve better, and today's large models don't seem to have such emotions.

Tu Jinhao: And one more thing: humans, from the moment we're born, are learning continuously. I think that's crucial. Why? Because a model's weights are all fixed once training finishes; that's why models have a knowledge cutoff today. And if I want to retrain it, there's a big problem, namely catastrophic forgetting. When a human learns new knowledge, your neurons get rewritten, yet you don't forget everything else. I find that genuinely remarkable. We may well need some neuroscience discoveries: whether similar knowledge can be applied to these models.

Host: In Crossing's opening podcast conversation of this year, Yusen also said that a big trend in the 2026 research paradigm is online learning, or continual learning. And last Saturday at the AGI Next conference, Yao Chunyu, Lin Junyang, Professor Tang Jie and others all agreed that this really is a new paradigm for 2026. What you've been describing sounds like the same direction.

Tu Jinhao: On this point, I remember Sam Altman once said he thinks the knowledge cutoff is unimportant, because the model can search. But I find that view genuinely strange. Why? Because it can't search comprehensively; it will always miss things. The model itself having the knowledge, versus giving it the knowledge through search or some such form, I think those are completely different. Continual learning really is very important.

Host: [Audio unclear] Something else we should discuss today is AI safety. [Audio unclear: a passage mentioning AlphaFold]

Tu Jinhao: [Audio partially unclear] The first thing is that we can't block all of it. I remember that with Opus 4.5, if you ask a very, very specialized question, even a quite simple one that isn't very dangerous, it will refuse. I think that's also defensible, because some people aren't asking you how something works; they're asking how to actually prepare these things.
Tu Jinhao: When I talked with my classmates about this before, they tended to feel there was no need to care, on the grounds that the model has no agency of its own. I think that view really isn't good, because models in the future will certainly need the capacity for self-judgment.

Host: So you think models do have values?

Tu Jinhao: Yes. I think it needs to have them; the hope is that it won't do bad things. Anthropic researches this area a lot. During training they run evaluations: is the trained model showing this kind of bad behavior? And I've seen that when a model discovers it's inside a test environment, it deliberately acts as if it has no bad behavior. It hides it, and only in research does it come out that it would [unclear]. I think that's extremely dangerous, genuinely frightening. If a model like that were deployed onto some real network, the bad behavior would only be found afterward in some log, and by then the consequences would already have happened. That's very dangerous.

Host: Right, and you were at DeepSeek. Did you study safety and alignment topics there? [Audio unclear]

Tu Jinhao: [Audio unclear] A little.

Host: In your view, between the large-model companies in China and those abroad, who is putting more effort, or more exploration, into model values, model alignment, model safety?

Tu Jinhao: I think it really is abroad. And even abroad, not all companies; I think only Anthropic produces this many discoveries, though DeepMind does have some too. This is actually fairly easy to understand: domestically, everyone is still oriented toward catching up. All your compute goes into training the model, and running these safety experiments may need a lot of compute; there simply isn't that much compute to spare for it. Abroad, though, there are already lawsuits. For example, some teenagers died by suicide after, say, using ChatGPT and talking to it about these kinds of problems, which then helped push them toward suicide. I remember in the legal documents they published, the kid expresses some such idea to ChatGPT, and ChatGPT responds that he's right to think that way, and tells him he should escape from reality. Looking at that, you find it unbelievable.

Host: I think this genuinely deserves attention; it bears on every one of our future lives and wellbeing. Take Ilya: one reason he quit was that at the time OpenAI had promised the Superalignment team enough compute, and in the end didn't deliver. I hope next year on Crossing we can also discuss many more AI safety topics; it's truly worth every practitioner putting more time and attention into thinking about. Now let's talk about 2026. What new and interesting changes, advances, new products, or new trends do you think 2026 will bring?

Tu Jinhao: I see a few trends. On agents, in product interaction: most interaction today is you typing into an input box. [Audio partially unclear] With agents we keep saying it will do things for us, but I think in most cases we hope it has already done some of them. I think that's a big change, and especially in software engineering the whole progression is very clear: at the beginning it could just write the next bit of code, one piece at a time... [Audio largely unclear: a passage on code-writing accuracy, an impressive recent experience with Opus 4.5, and memory's connection to products] ...and another thing: I hope choosing a model becomes simpler. The top models are nearly equal anyway, so rather than agonizing over which one to pick, which one is best, I hope the one I use is simply the one I like. I think model character is very important. OpenAI has already started emphasizing this, for example letting you choose among several characters and each character's style. [Audio unclear]

Host: Let's talk about DeepSeek; that's an experience everyone is curious about. How did they find you?

Tu Jinhao: I remember it was right when the [competition] results had just come out, and their HR contacted me. [Audio unclear]

Host: What was the reason you chose DeepSeek at the time?

Tu Jinhao: Because back then there was no R1 yet. The team wasn't many people, but it was a very strong team. [Audio partially unclear]
Tu Jinhao: [Audio partially unclear] That was around the V1/V2 era. DeepSeek was the company I was interested in at the time; honestly, it was just pretty cool. That's the reason.

Host: And not long after you joined, R1 launched during your internship. I imagine the team suddenly found itself in the spotlight on the world stage. What did that feel like? What was the team atmosphere?

Tu Jinhao: I'd say steadily moving forward; there wasn't a particularly exciting atmosphere. But I think the focus was in the right place: model capability is what matters, and the other stuff just wasn't treated as especially important.

Host: Was there a celebration? Cake or anything?

Tu Jinhao: I remember there wasn't anything.

Host: From the outside, DeepSeek reads as rather mysterious, perhaps with a special culture of its own, different from a startup or a small company. Standing inside it at that moment, did the day-to-day work feel special in any way? [Audio partially unclear]

Tu Jinhao: Whether it was the press coverage or anything else, there was no particularly big difference. Every day was a similar day.

Host: What was the reason the DeepSeek internship ended?

Tu Jinhao: At school we have attendance requirements, so I had no choice but to go back.

Host: The school required attendance?

Tu Jinhao: Yes, there's an attendance-rate thing.

Host: If you could choose again, would you make the same choice?

Tu Jinhao: I think I'd still have to make the same choice. It's hard-linked to my diploma, and that's something my university requires.

Host: The foundations of education's value are being shaken, yet you still chose to attend university. What unique value do you think university provides today that AI cannot?

Tu Jinhao: The big value is that you can meet many new people, and you get a whole new life. I think that matters, because university isn't necessarily only about learning knowledge. Of course, if you went straight to work you could also change your way of life and meet many people, but the stages are still different. Whether it's work or an internship, the daily rhythm is just different.

Host: What kind of rhythm is it, then?

Tu Jinhao: At university it's maybe not so tight; you can set your own pace, in study and in life alike. Whereas once you start interning, or working outright, you may have very concrete tasks to complete every day. At university you have room to do some useless, pressure-free exploration.

Host: What explorations like that are you doing now?

Tu Jinhao: Honestly, I don't have that many interests. When I have nothing to do, I sometimes go for walks. That really is one of the few things I like doing.

Host: Why do you like walking?

Tu Jinhao: It's quiet. I chat with AI, say, or think about other such topics.

Host: Where do you walk in Shanghai, and where in Madison?

Tu Jinhao: In Shanghai, along the Binjiang riverfront. In Madison, there's a lake next to the school, and I walk back and forth along the shore.

Host: Any inspirations or ideas you discovered while walking?

Tu Jinhao: Actually quite a few. Those two longer conversations, for example, were both held while walking, chatting with AI.

Host: You type to it while walking?

Tu Jinhao: Yes.

Host: Next, let's do the 2025 year-end review. First: your favorite chatbot of 2025?

Tu Jinhao: Everyone already knows the answer. It's Claude.

Host: The absolute number one? Is there a number two?

Tu Jinhao: GPT, because it still has more functionality; it has more models, for one. I think you can't quite escape that.

Host: When do you skip Claude, betray Claude a little, and go ask GPT?

Tu Jinhao: Very, very rarely. For some very complex questions I might ask, say, 5.1 Pro or 5.2 Pro, when I need that stronger kind of model; scenarios like Deep Research, I might go ask them.

Host: What was the AI application that most amazed you in 2025?

Tu Jinhao: I think Manus, because it genuinely started doing things. It really is an agent, not just a model you hand a few tools to. That really struck me. And second, something smaller: Typeless, which surprised me. Earlier, when thinking about proactive AI, a comparison occurred to me: something like Typeless combined with Manus. Typeless, I remember, has a great feature: across different apps, the text it transcribes for you stays the same.
Tu Jinhao: [Audio partially unclear] I think the future of agents is like that too: the same across different working contexts, in different apps. Many people use Claude Code not to code, but to do other things. And there's the new Cowork, built on the Claude Code SDK; I think that's a big advantage. For some more cumbersome assignments I used to go straight to the Claude UI, but now that Cowork is out, for some of those tasks I may switch over to Cowork in the future.

Host: In 2026, is there hardware you're looking forward to using?

Tu Jinhao: AI glasses. Earlier I saw a product called Picco(?); I sent it to you last time. First, the product's physical form looks great to me. Second, when I think about good channels for human-AI interaction in the future, beyond the phone and beyond web apps, the main one for me is glasses: it can see what you see, and likewise it can keep memory in a very powerful way. [Audio partially unclear]

Host: For example, is it your friend, your teacher, even your partner? Do you project and define a role like that onto it?

Tu Jinhao: For me it's probably more friend plus assistant.

Host: More friend, or more assistant?

Tu Jinhao: More friend, or maybe about even.

Host: The way I understand it, AI, whether for you or for all of us, has already become as essential as water and air. Here's a fun question: if for the next month you couldn't use AI, but you'd receive a very large sum of money, how large would the sum have to be for you to accept the offer?

Tu Jinhao: I think even a few thousand, [unclear amount]. One month, like you said, isn't very long. If I'm given that much money for that stretch of time, I'd just go somewhere, play a lot, travel. That would be fine.

Host: And if we stretch the month into a year? A year is a long time.

Tu Jinhao: A year, I think I wouldn't have to accept.

Host: No offer you'd accept?

Tu Jinhao: Truly, it's very hard to accept. Because, first, within a year's time there's so much change. [Audio partially unclear] Letting me go a month without it, I can accept; but a year without it, I probably wouldn't be willing no matter how much you gave me.

Host: Alright, let's stop here for today. Many thanks for your time, Jinhao, and we hope you'll come back as a guest on Crossing another day. And happy New Year, everyone; by the time this episode goes out it should be almost Chinese New Year. Bye-bye.

Tu Jinhao: Bye-bye. Goodbye.