Build a simplified abstraction layer that makes Oumi-style training pipelines modular and portable via MCP (Model Context Protocol). Focus on a low-code UI for managing fine-tuning jobs on ephemeral GPU clusters.
Suggested repo: train-flow
"Fine-tune any open-source model through a standardized, MCP-enabled interface."
Estimated effort: 60h
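A minimal sketch of what the standardized interface might look like. All names here (`FineTuneJob`, `launch_fine_tune`, the field names) are hypothetical illustrations, not a real Oumi or MCP SDK API; a real implementation would register the handler as an MCP tool and forward the payload to a cluster scheduler.

```python
# Illustrative sketch only: every identifier below is an assumption,
# not part of Oumi or the MCP Python SDK.
from dataclasses import dataclass, field, asdict
import json
import uuid


@dataclass
class FineTuneJob:
    """Backend-agnostic job spec the MCP layer would pass to any provider."""
    base_model: str                 # e.g. an open-source model id
    dataset: str                    # path or dataset identifier
    method: str = "lora"            # fine-tuning method
    gpu_type: str = "a100"          # requested ephemeral-cluster hardware
    num_gpus: int = 1
    job_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])


def launch_fine_tune(job: FineTuneJob) -> dict:
    """Stand-in for the MCP tool handler: validate the spec and build
    the JSON payload a real backend scheduler would receive."""
    if job.num_gpus < 1:
        raise ValueError("num_gpus must be >= 1")
    return {"tool": "launch_fine_tune", "arguments": asdict(job)}


if __name__ == "__main__":
    job = FineTuneJob(base_model="meta-llama/Llama-3.1-8B",
                      dataset="data/train.jsonl")
    print(json.dumps(launch_fine_tune(job), indent=2))
```

Keeping the job spec as a plain dataclass is what makes the layer portable: the same payload can be serialized to any GPU provider, and the low-code UI only needs to render the spec's fields as a form.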