00:01:50 The "division of labor" wisdom of the AI world: how do you make a genius even more brilliant?
00:06:40 AI predicts so accurately, but does it really "understand"?
00:11:52 AI's "strongest brain"? No, its "most frugal brain"
00:16:14 Training AI: small frequent meals, or one giant gulp?
00:20:39 From clumsy to masterful: how do robots pick up our skills by "watching"?
The five papers covered in this episode:
[LG] Towards Solving More Challenging IMO Problems via Decoupled Reasoning and Proving
[Tencent AI Lab]
arxiv.org
---
[LG] What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models
[Harvard University & MIT]
arxiv.org
---
[CL] Decoder-Hybrid-Decoder Architecture for Efficient Reasoning with Long Generation
[Microsoft]
arxiv.org
---
[LG] Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful
[New York University & Columbia University]
arxiv.org
---
[LG] Value from Observations: Towards Large-Scale Imitation Learning via Self-Improvement
[Google DeepMind]
arxiv.org