00:01:37 Your AI: does it have a good memory, or can it truly learn?
00:06:19 Managing AI's "attention": the wisdom of divide and conquer
00:10:22 Building a "world" for AI: why watching videos alone isn't enough
00:16:51 The ultimate dilemma of managing an AI team: when every objective matters at once, what do you do?
00:21:45 Want to train AI well? Don't wait until the end to settle the score
The five papers covered in today's episode:
[LG] Memory Mosaics at scale
[New York University & FAIR, Meta]
https://arxiv.org/abs/2507.03285
---
[CL] RAT: Bridging RNN Efficiency and Attention Accuracy in Language Modeling
[CLAIRE, EPFL & Google DeepMind]
https://arxiv.org/abs/2507.04416
---
[LG] Critiques of World Models
[CMU]
https://arxiv.org/abs/2507.05169
---
[LG] Optimas: Optimizing Compound AI Systems with Globally Aligned Local Rewards
[Stanford University]
https://arxiv.org/abs/2507.03041
---
[LG] Discrete Diffusion Trajectory Alignment via Stepwise Decomposition
[Stanford University & Caltech]
https://arxiv.org/abs/2507.04832