00:00:29 Unlocking AI Reasoning: The Secret of "Less Is More"
00:05:17 The "Growth Recipe" for AI Behemoths: How Can a Small Model's Wisdom Guide the Big One?
00:10:03 The "Finishing Touch" for AI Image Generation: How to Make a Fast Horse Run Both Fast and Well?
00:14:05 AI-Created 3D Models: From "Building a House" to "Sculpting Clay"
00:07:44 AI's "Private Tutoring": How to Raise a Top Expert?
Papers covered in this episode:
[LG] Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning
[Alibaba Group]
https://arxiv.org/abs/2508.08221
---
[LG] μ-Parametrization for Mixture of Experts
[University of Warsaw]
https://arxiv.org/abs/2508.09752
---
[LG] Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models
[Technical University of Munich & Inceptive]
https://arxiv.org/abs/2508.09968
---
[CV] VertexRegen: Mesh Generation with Continuous Level of Detail
[Meta Reality Labs Research & UC San Diego]
https://arxiv.org/abs/2508.09062
---
[LG] Beyond Scaling Law: A Data-Efficient Distillation Framework for Reasoning
[Zhongxing Telecom Equipment (ZTE)]
https://arxiv.org/abs/2508.09883
---
[CL] Speed Always Wins: A Survey on Efficient Architectures for Large Language Models
[Unknown / Survey paper]
https://arxiv.org/abs/2508.09834