Xiaoyu Xu, Yulan Pan, Xiaosong Yuan, Zhihong Shen, Minghao Su, Yuanhao Su, Xiaofeng Zhang
Build a library that visualizes reasoning traces by mapping attention-gradient saliency scores directly onto chain-of-thought steps. This would help developers debug why specific logic chains fail or hallucinate.
Suggested repo: step-saliency
"See exactly which reasoning step led your model astray."
Estimated effort: 40h
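As a minimal sketch of the core idea, the snippet below attributes an attention-times-gradient saliency heuristic to chain-of-thought steps. Everything here is an assumption for illustration: the function name `step_saliency`, the `(start, end)` token-span representation of steps, and the use of `|attention * gradient|` as the saliency signal are hypothetical choices, not an API from the proposed `step-saliency` repo.

```python
import numpy as np

def step_saliency(attn, grad, step_spans):
    """Attribute attention-gradient saliency to reasoning steps.

    attn: (T, T) attention weights over T tokens.
    grad: (T, T) gradient of the loss w.r.t. those weights.
    step_spans: list of (start, end) token ranges, one per CoT step.
    Returns one score per step, normalized to sum to 1.
    """
    # |attention * gradient| is a common input-attribution heuristic
    sal = np.abs(attn * grad)
    # Sum the saliency mass flowing into each step's token span
    scores = np.array([sal[:, s:e].sum() for s, e in step_spans])
    total = scores.sum()
    return scores / total if total > 0 else scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 12
    attn = rng.random((T, T))          # stand-in attention matrix
    grad = rng.standard_normal((T, T)) # stand-in gradients
    spans = [(0, 4), (4, 8), (8, 12)]  # three hypothetical CoT steps
    scores = step_saliency(attn, grad, spans)
    print("most salient step:", int(np.argmax(scores)))
```

In a real implementation the attention and gradient tensors would come from hooks on a transformer's attention modules, and step spans from segmenting the decoded chain-of-thought text; this sketch only shows the attribution step.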