AlphaWeaver
Build a routing middleware with prompt caching that selects a model based on expected task complexity: simple requests go to a small, cheap model, while complex reasoning triggers a larger one.
Suggested repo: model-router
"Optimize your LLM spend and latency with smart routing."
Estimated effort: 70h
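The idea above can be sketched as a minimal router: a cache keyed on the prompt hash, a crude complexity heuristic, and a model choice based on a threshold. All names here (`small_model`, `large_model`, the keyword heuristic, the threshold of 50) are illustrative assumptions, not part of the original spec; a real implementation would call actual LLM APIs and use a learned or calibrated complexity estimator.

```python
import hashlib

# Hypothetical stand-ins for real model backends (assumed names).
def small_model(prompt: str) -> str:
    return f"small:{prompt[:30]}"

def large_model(prompt: str) -> str:
    return f"large:{prompt[:30]}"

class ModelRouter:
    """Caches responses and routes prompts by a simple complexity heuristic."""

    def __init__(self, threshold: int = 50):
        self.cache: dict[str, str] = {}
        self.threshold = threshold  # complexity cutoff (assumed value)

    def complexity(self, prompt: str) -> int:
        # Naive heuristic: word count, plus a large bonus when the prompt
        # contains reasoning-heavy keywords. A real system might use a
        # classifier or the small model itself to score difficulty.
        score = len(prompt.split())
        if any(k in prompt.lower() for k in ("prove", "derive", "step by step")):
            score += 100
        return score

    def route(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:  # cache hit: no model call at all
            return self.cache[key]
        model = large_model if self.complexity(prompt) > self.threshold else small_model
        result = model(prompt)
        self.cache[key] = result
        return result
```

For example, `ModelRouter().route("What is 2+2?")` would hit the small model, while a prompt containing "prove ... step by step" would exceed the threshold and route to the large one; repeating either call returns the cached result without invoking a model.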