Tianyi Wang, Yixia Li, Long Li, Yibiao Chen, Shaohan Huang, Yun Chen, Peng Li, Yang Liu, Guanhua Chen
Implement sequence-level PPO to stabilize LLM reinforcement learning on long reasoning tasks without the overhead of a heavy critic model, addressing a major bottleneck in fine-tuning open-source reasoning models.
Suggested repo: sppo-llm
"Reasoning models that don't lose their train of thought—now with stable RL."
Estimated effort: 80h
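A minimal sketch of the core idea, under assumptions not spelled out in the card: one importance ratio per whole sequence (summed token log-probs) instead of per token, and the group-mean reward as a baseline in place of a learned critic. The function name, arguments, and clipping constant are all illustrative, not the project's actual API.

```python
import math

def sequence_ppo_loss(logp_new, logp_old, rewards, clip_eps=0.2):
    """Critic-free, sequence-level clipped PPO objective (sketch).

    logp_new / logp_old: summed token log-probs of each sampled sequence
    under the current and behavior policies. rewards: one scalar reward
    per sequence. The group-mean reward replaces a critic's value estimate.
    """
    n = len(rewards)
    baseline = sum(rewards) / n                   # critic-free baseline
    losses = []
    for lp_new, lp_old, r in zip(logp_new, logp_old, rewards):
        adv = r - baseline                        # advantage vs. group mean
        ratio = math.exp(lp_new - lp_old)         # one ratio per sequence
        clipped = max(min(ratio, 1 + clip_eps), 1 - clip_eps)
        losses.append(-min(ratio * adv, clipped * adv))
    return sum(losses) / n
```

With an unchanged policy (ratios of 1) the group-baselined advantages cancel and the loss is zero; raising the probability of the above-baseline sequence drives it negative, which is the update direction PPO rewards.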