Aaron Pache, Mark CW van Rossum
Implement an energy-efficient training layer that uses 'mistake gating' to skip backpropagation on correctly classified samples. This is vital for on-device incremental learning.
Suggested repo: gate-learn
"Reduce training energy consumption by 50% by teaching your model to ignore what it already knows."
Estimated effort: 45h
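The core idea can be sketched in a few lines: run the forward pass on every sample, but compute gradients only for samples the model gets wrong. The sketch below is a minimal NumPy illustration using a linear softmax classifier; the function name, model, and update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mistake_gated_epoch(W, X, y, lr=0.1):
    """One epoch of 'mistake gating' on a linear classifier (illustrative sketch).

    Gradient updates are computed only for misclassified samples; the
    backward pass is skipped entirely for samples the model already
    classifies correctly, which is where the energy saving comes from.
    """
    skipped = 0
    for xi, yi in zip(X, y):
        logits = W @ xi                       # forward pass (always needed)
        if int(np.argmax(logits)) == yi:      # gate: prediction already correct
            skipped += 1
            continue                          # skip gradient computation
        # Softmax cross-entropy gradient, computed only for the mistake
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p[yi] -= 1.0
        W -= lr * np.outer(p, xi)
    return W, skipped
```

As training progresses, more samples pass through the gate untouched, so the per-epoch gradient cost shrinks toward zero on data the model has mastered.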