Brady Steele
Build a diagnostic dashboard for LoRA fine-tuning that detects "un-learning" on controversial training data, helping developers prune examples that degrade model performance.
Suggested repo: EntropyTune
"Find out which training examples are actually ruining your fine-tuning run."
Estimated effort: 25h
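One way such a dashboard could surface "un-learned" examples is to log per-example loss over the course of the fine-tuning run and flag examples whose loss rebounds well above its minimum: the model learned them, then drifted away. The sketch below is a hypothetical, dependency-free illustration of that scoring idea (the function names, threshold, and window size are assumptions, not part of the suggested repo).

```python
# Hypothetical sketch: flag "un-learned" training examples from per-example
# loss histories logged during a LoRA fine-tuning run. An example counts as
# un-learned when its loss trends back up after an initial drop.

def unlearning_score(loss_history):
    """Final-window mean loss minus the best (minimum) loss seen.

    A positive score means the example got worse after being learned.
    """
    if len(loss_history) < 2:
        return 0.0
    window = max(1, len(loss_history) // 4)  # average the last quarter of steps
    final_mean = sum(loss_history[-window:]) / window
    return final_mean - min(loss_history)

def flag_unlearned(histories, threshold=0.1):
    """Return ids above threshold (worst first), plus all raw scores."""
    scored = {eid: unlearning_score(h) for eid, h in histories.items()}
    flagged = [eid for eid, s in sorted(scored.items(), key=lambda kv: -kv[1])
               if s > threshold]
    return flagged, scored

# Toy loss curves for three training examples:
histories = {
    "ex_a": [2.0, 1.2, 0.8, 0.6, 0.5],   # learns normally
    "ex_b": [2.0, 1.0, 0.9, 1.6, 1.9],   # learned, then un-learned
    "ex_c": [1.5, 1.4, 1.35, 1.3, 1.3],  # slow but stable
}
flagged, scores = flag_unlearned(histories)
print(flagged)  # only ex_b's loss rebounds above its minimum
```

A real implementation would feed this from per-example losses captured during training (e.g. an evaluation hook every N steps) and render the flagged curves in the dashboard for manual review before pruning.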