00:00:38 Subtracting from AI's Brain: The Clever Art of "Laziness"
00:04:34 Your Data Is on a "Diet": New Survival Rules for the AI Era
00:09:23 AI's "Solitary Training": How Strong Can It Get Without the Web?
00:13:07 Fitting AI with a "Grammar" Navigator
00:16:49 AI's "Synesthesia": Why "I Wrote It" and "I Am the Author" Mean the Same Thing
Papers covered in this episode:
[LG] XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization
[UC Berkeley & FuriosaAI]
https://arxiv.org/abs/2508.10395
---
[LG] SoK: Data Minimization in Machine Learning
[ETH Zurich]
https://arxiv.org/abs/2508.10836
---
[CL] SSRL: Self-Search Reinforcement Learning
[Tsinghua University & Shanghai AI Laboratory]
https://arxiv.org/abs/2508.10874
---
[LG] Constrained Decoding of Diffusion LLMs with Context-Free Grammars
[ETH Zurich]
https://arxiv.org/abs/2508.10111
---
[CL] A Rose by Any Other Name Would Smell as Sweet: Categorical Homotopy Theory for Large Language Models
[Adobe Research]
https://arxiv.org/abs/2508.10018