
A DeepSeek R1-Zero replication for $30? How did this student achieve a breakthrough in mathematical reasoning?
Reproducing the DeepSeek R1-Zero method
Jiayi Pan, a PhD student at Berkeley, reproduced the DeepSeek R1-Zero method for just $30, enabling a small 3B-parameter model to achieve a breakthrough on a math game in which the result must be worked backwards into an equation.
Source: Jiayi Pan's Twitter
The dataset consists of several input numbers and one target output number:
Source: Hugging Face dataset
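For concreteness, a single sample might look like the following; the specific numbers are invented for illustration, while the field names match those used in the preprocessing code later in this post.

# Hypothetical Countdown-style sample: combine the numbers with +, -, *, /
# (each number used exactly once) to reach the target, e.g. (2 + 3) * 5 = 25.
example = {"nums": [2, 3, 5], "target": 25}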
Each data point is converted into a task prompt that asks for a step-by-step calculation, with the reasoning process enclosed between <think> and </think> tags. The exact prompt template can be found here.
def make_prefix(dp, template_type):
    # Build the task prompt for one data point (dp holds the input numbers and the target number).
    target = dp['target']
    numbers = dp['nums']
    if template_type == 'base':
        """This works for any base model"""
        prefix = f"""A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer.
User: Using the numbers {numbers}, create an equation that equals {target}. You can use basic arithmetic operations (+, -, *, /) and each number can only be used once. Show your work in <think> </think> tags. And return the final answer in <answer> </answer> tags, for example <answer> (1 + 2) / 3 </answer>.
Assistant: Let me solve this step by step.
<think>"""
    elif template_type == 'qwen-instruct':
        """This works for Qwen Instruct Models"""
        prefix = f"""<|im_start|>system\nYou are a helpful assistant. You first thinks about the reasoning process in the mind and then provides the user with the answer.<|im_end|>\n<|im_start|>user\n Using the numbers {numbers}, create an equation that equals {target}. You can use basic arithmetic operations (+, -, *, /) and each number can only be used once. Show your work in <think> </think> tags. And return the final answer in <answer> </answer> tags, for example <answer> (1 + 2) / 3 </answer>.<|im_end|>\n<|im_start|>assistant\nLet me solve this step by step.\n<think>"""
    return prefix
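As a quick usage sketch (the sample values are invented), applying the template to one data point produces the conversation preamble that the model is asked to continue:

example = {"nums": [2, 3, 5], "target": 25}
prompt = make_prefix(example, template_type="base")
# The returned string ends with an open <think> tag, so the model's
# completion begins directly with its step-by-step reasoning.
print(prompt)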
The reinforcement learning (RL) algorithm is implemented with veRL, an open-source framework that removes the need to write RL algorithm and model-training code; only a reward function is required. In this case, the only reward rule used is a check of each problem's target output number as the "ground truth" (a sketch of such a reward appears after the snippet below), which is stored alongside each prompt:
question = make_prefix(example, template_type=args.template_type)
solution = {
    "target": example['target'],
    "numbers": example['nums']
}
data = {
    "data_source": data_source,
    "prompt": [{
        "role": "user",
        "content": question,
    }],
    "ability": "math",
    "reward_model": {
        "style": "rule",
        "ground_truth": solution
    },
    "extra_info": {
        'split': split,
        'index': idx,
    }
}
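To make the "rule" style concrete, here is a minimal sketch of what such a reward function could look like. The function name, the 0/1 scoring, and the parsing details are my own assumptions for illustration rather than the project's actual implementation; the grounded part is that the model's answer is checked against the stored target and numbers.

import re
from collections import Counter

def countdown_reward(response: str, ground_truth: dict) -> float:
    """Hypothetical rule-based reward: 1.0 if the equation inside
    <answer>...</answer> uses each given number exactly once and
    evaluates to the target, else 0.0 (scoring values are assumed)."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        return 0.0
    equation = match.group(1).strip()

    # Reject anything other than digits, arithmetic operators, and parentheses.
    if not re.fullmatch(r"[\d+\-*/(). ]+", equation):
        return 0.0

    # Each provided number must be used exactly once.
    used = [int(n) for n in re.findall(r"\d+", equation)]
    if Counter(used) != Counter(ground_truth["numbers"]):
        return 0.0

    try:
        # eval is acceptable for this illustrative sketch because the string
        # has already been restricted to plain arithmetic characters.
        value = eval(equation)
    except (SyntaxError, ZeroDivisionError):
        return 0.0

    return 1.0 if abs(value - ground_truth["target"]) < 1e-6 else 0.0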
As training progresses, the model "starts with dummy outputs but gradually develops tactics such as revision and search." This transition was observed even when starting from a model as small as Qwen-2.5 1.5B, and multiple RL algorithms were found to be effective.
In my view, this confirms the most important yet often misunderstood finding of DeepSeek R1: by giving a model the right objective, without explicitly instructing it how to achieve it, reinforcement learning can boost a pretrained language model's performance to a level comparable to much larger models trained with conventional methods, such as ChatGPT-4o.