Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger, William A. P. Smith, Yue Lu
Implement a proximal decoupling optimization framework for continual learning: a clean interface that lets researchers experiment with task-specific parameter protection without modifying the gradient updates themselves.
Suggested repo: prox-decouple
"Learn new tricks without forgetting your old ones."
Estimated effort: 35h
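A minimal sketch of the decoupling idea described above: the task-loss gradient step is applied unmodified, and parameter protection happens afterwards in a separate closed-form proximal step that pulls weights toward an anchor saved after the previous task, weighted by a per-parameter importance score. All names (`ProxDecouple`, `end_task`, `strength`) are illustrative assumptions, not the paper's API.

```python
import numpy as np

class ProxDecouple:
    """Sketch of proximal decoupling for continual learning.

    Step 1 is an ordinary gradient update on the current task's loss.
    Step 2 is a decoupled proximal update: the exact minimizer of
    (lam_i / 2) * (theta_i - anchor_i)^2 plus a quadratic penalty on
    moving away from the post-gradient point. Hypothetical interface,
    not taken from the paper.
    """

    def __init__(self, dim, lr=0.1, strength=1.0):
        self.theta = np.zeros(dim)       # current parameters
        self.anchor = np.zeros(dim)      # snapshot after the last task
        self.importance = np.zeros(dim)  # per-parameter protection weight
        self.lr = lr
        self.strength = strength

    def step(self, grad):
        # 1) plain gradient step on the new-task loss (left untouched)
        self.theta = self.theta - self.lr * grad
        # 2) decoupled proximal step with closed-form solution:
        #    argmin_t (1/2)||t - theta||^2 + (lam/2)||t - anchor||^2
        lam = self.lr * self.strength * self.importance
        self.theta = (self.theta + lam * self.anchor) / (1.0 + lam)

    def end_task(self, importance):
        # freeze an anchor and record how much each parameter mattered
        self.anchor = self.theta.copy()
        self.importance = np.asarray(importance, dtype=float)
```

Usage: train on task A with zero importance (the prox step is a no-op), call `end_task` with importance scores (e.g. squared gradients, as in EWC-style methods), then train on task B; protected parameters are drawn back toward their task-A values after every gradient step rather than through a modified gradient.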