Create a high-quality, up-to-date, practical curriculum for LLM alignment and fine-tuning. Most existing tutorials are outdated; focus on LoRA/QLoRA, knowledge distillation, and modern inference stacks.
Suggested repo: llmMastery
"From zero to finetuning your own LLM in one weekend."
Estimated effort: 50h
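To ground the LoRA portion of the curriculum, here is a minimal NumPy sketch of the core idea (hypothetical shapes and variable names, not any library's API): a frozen pretrained weight `W` is adapted through a low-rank update `(alpha/r) * B @ A`, so only the small matrices `A` and `B` are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 4096, 4096, 8, 16  # illustrative dimensions

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero-init,
                                           # so training starts from the base model)

def lora_forward(x):
    # Base path plus the scaled low-rank correction.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

full_params = W.size            # 16,777,216
lora_params = A.size + B.size   # 65,536
print(y.shape, lora_params / full_params)  # trainable fraction ≈ 0.4%
```

Because `B` starts at zero, the adapted layer initially reproduces the base model exactly; the curriculum can build from this toy to real `peft`/QLoRA runs.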