Chuxu Song, Zhencan Peng, Jiuqi Wei, Chuanhui Yang
Build a drop-in replacement for KV-cache attention mechanisms that uses centroid scoring to sparsify attention at inference time. This would significantly reduce the memory footprint of long-context LLM inference. A rough sketch of the approach is shown below.
Suggested repo: fast-centroid-attn
"Accelerate your long-context LLMs without training."
Estimated effort: 100h
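
A minimal sketch of what centroid-scored sparse attention over a KV cache might look like, assuming k-means clustering of the cached keys and a top-cluster selection rule. The function and parameter names (`centroid_sparse_attention`, `num_centroids`, `top_clusters`) are illustrative assumptions, not part of the project description.

```python
import torch
import torch.nn.functional as F


def kmeans(keys: torch.Tensor, num_centroids: int, iters: int = 10):
    """Cluster cached keys [n, d] into centroids [c, d]; returns (centroids, assignments)."""
    n, _ = keys.shape
    centroids = keys[torch.randperm(n)[:num_centroids]].clone()
    for _ in range(iters):
        # Assign each cached key to its nearest centroid.
        assign = torch.cdist(keys, centroids).argmin(dim=-1)
        for c in range(num_centroids):
            mask = assign == c
            if mask.any():
                centroids[c] = keys[mask].mean(dim=0)
    return centroids, assign


def centroid_sparse_attention(
    q: torch.Tensor,       # [d] query for the current decode step
    keys: torch.Tensor,    # [n, d] cached keys
    values: torch.Tensor,  # [n, d] cached values
    num_centroids: int = 16,
    top_clusters: int = 4,
) -> torch.Tensor:
    """Attend only over keys whose cluster centroid scores highest against q."""
    centroids, assign = kmeans(keys, num_centroids)
    # Score the query against centroids instead of every cached key.
    centroid_scores = centroids @ q
    selected = centroid_scores.topk(top_clusters).indices
    # Keep only the keys/values that live in the selected clusters.
    keep = torch.isin(assign, selected)
    k_sel, v_sel = keys[keep], values[keep]
    # Standard scaled-dot-product attention over the reduced KV set.
    attn = F.softmax((k_sel @ q) / k_sel.shape[-1] ** 0.5, dim=-1)
    return attn @ v_sel


if __name__ == "__main__":
    torch.manual_seed(0)
    d, n = 64, 4096
    keys, values = torch.randn(n, d), torch.randn(n, d)
    q = torch.randn(d)
    out = centroid_sparse_attention(q, keys, values)
    print(out.shape)  # torch.Size([64])
```

In this sketch only the keys in the top-scoring clusters are read at each decode step, which is where the memory-traffic savings would come from; a real drop-in replacement would also need batching, multi-head support, and incremental centroid updates as the cache grows.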