Yisheng Zhong, Sijia Liu, Zhuangdi Zhu
Create a modular unlearning library that implements bidirectional logit distillation. It should help researchers strip specific knowledge from models while retaining general reasoning ability.
Suggested repo: forget-logits
"Surgical LLM unlearning with minimal utility loss."
Estimated effort: 30h
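Since the card only names the technique, here is a minimal sketch of what a bidirectional logit-distillation unlearning loss could look like in PyTorch. Everything here is an assumption for illustration, not the authors' method: the function name `bidirectional_distill_loss`, the temperature `tau`, the weights `alpha`/`beta`, and the specific reading of "bidirectional" as forward KL distillation toward a frozen teacher on retain data plus a bounded, negated KL term that pushes the student away from the teacher on forget data.

```python
# Hypothetical sketch -- not the paper's implementation. One plausible
# reading of "bidirectional logit distillation" for unlearning: distill
# toward a frozen teacher on retain data, away from it on forget data.
import torch
import torch.nn.functional as F


def bidirectional_distill_loss(
    student_forget: torch.Tensor,   # student logits on a forget batch [B, V]
    teacher_forget: torch.Tensor,   # frozen-teacher logits, same batch
    student_retain: torch.Tensor,   # student logits on a retain batch [B, V]
    teacher_retain: torch.Tensor,   # frozen-teacher logits, same batch
    tau: float = 2.0,               # distillation temperature (assumed)
    alpha: float = 1.0,             # retain-term weight (assumed)
    beta: float = 1.0,              # forget-term weight (assumed)
) -> torch.Tensor:
    def kl_teacher_student(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
        # KL(teacher || student) at temperature tau; both arguments are raw logits.
        return F.kl_div(
            F.log_softmax(student / tau, dim=-1),
            F.log_softmax(teacher / tau, dim=-1),
            log_target=True,
            reduction="batchmean",
        ) * tau**2

    retain_term = kl_teacher_student(student_retain, teacher_retain)
    forget_term = kl_teacher_student(student_forget, teacher_forget)

    # Pull toward the teacher on retain data, push away on forget data.
    # The clamp bounds the repulsion so the total loss cannot diverge to -inf.
    return alpha * retain_term - beta * torch.clamp(forget_term, max=10.0)


if __name__ == "__main__":
    B, V = 4, 32000  # batch size and vocab size, illustrative only
    logits = [torch.randn(B, V) for _ in range(4)]
    print(bidirectional_distill_loss(*logits))
```

A training step under this sketch would draw one batch from each split, run both through the frozen teacher and the trainable student, and minimize this loss; the clamp on the forget term stops the repulsion from dominating once the student's distribution on forget data has drifted far enough from the teacher's.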