00:01:26 Why might an AI's "inner monologue" be just for show?
00:06:38 How can an AI's "memory" grow long enough to hold a whole novel?
00:12:26 AI as sparring partner: how does sheer grind "awaken" a genius?
00:16:27 AI as builder: how do you raise a "skyscraper" that is both tall and cheap?
00:20:34 Small models, big smarts: the AI "apprentice" theory of evolution
The five papers featured in this episode:
[LG] Chain-of-Thought Is Not Explainability
[Oxford & WhiteBox]
https://arxiv.org/abs/2025.02
---
[LG] Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences
[Snowflake AI Research]
https://arxiv.org/abs/2506.13996
---
[CL] ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models
[NVIDIA]
https://arxiv.org/abs/2505.24864
---
[LG] Don't be lazy: CompleteP enables compute-efficient deep transformers
[Cerebras Systems & ETH Zurich]
https://arxiv.org/abs/2505.01618
---
[CL] Distilling LLM Agent into Small Models with Retrieval and Code Tools
[KAIST & KRAFTON]
https://arxiv.org/abs/2505.17612