00:01:54 Moving a Thousand Pounds with Four Ounces: The "Memory Magic" That Makes Small Models Smarter
00:07:11 An AI Tuning Guide: The Secret to Not Telling Falsehoods Is, Surprisingly, "Trusting Yourself"?
00:11:07 The "Map" and "Navigation" of AI Reasoning
00:15:16 The "Achilles' Heel" of Large AI Models?
00:18:49 Leveling Up AI Writing: A Better Method Beats More Hands
The five papers covered in this episode:
[CL] KV Cache Steering for Inducing Reasoning in Small Language Models
[University of Amsterdam & University of Technology Nuremberg]
https://arxiv.org/abs/2507.08799
---
[CL] The Curious Case of Factuality Finetuning: Models' Internal Beliefs Can Improve Factuality
[University of Washington]
https://arxiv.org/abs/2507.08371
---
[LG] CTRLS: Chain-of-Thought Reasoning via Latent State-Transition
[UC San Diego]
https://arxiv.org/abs/2507.081
---
[LG] One Token to Fool LLM-as-a-Judge
[Tencent AI Lab]
https://arxiv.org/abs/2507.08794
---
[LG] Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling
[Stanford University]
https://arxiv.org/abs/2507.08390