Build a tool that visualizes how embeddings in LLMs shift after bias-mitigation fine-tuning. This would help developers audit exactly how fine-tuning alters the representation space around sensitive tokens.
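One way to sketch the core of such a tool: for each tracked token, compute a shift score (here, cosine distance) between its embedding row in the pre- and post-fine-tuning checkpoints, then project both sets of embeddings into 2D for plotting. The code below is a minimal sketch on synthetic matrices, not a definitive implementation; the function names (`embedding_shift`, `pca_2d`) and the random stand-ins for the two checkpoints' embedding tables are assumptions for illustration.

```python
import numpy as np

def embedding_shift(e_before, e_after):
    # Per-token cosine distance between two (vocab_size, dim) embedding matrices:
    # 0 means the direction is unchanged, larger values mean a bigger shift.
    num = (e_before * e_after).sum(axis=1)
    denom = np.linalg.norm(e_before, axis=1) * np.linalg.norm(e_after, axis=1)
    return 1.0 - num / denom

def pca_2d(x):
    # Project rows of x onto their top two principal components via SVD,
    # so before/after points can be drawn in the same 2D scatter plot.
    centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Synthetic stand-ins for the embedding rows of a set of sensitive tokens
# before and after bias-mitigation fine-tuning (assumed shapes for illustration).
rng = np.random.default_rng(0)
e_before = rng.normal(size=(100, 64))
e_after = e_before + 0.1 * rng.normal(size=(100, 64))

shifts = embedding_shift(e_before, e_after)          # one score per token
coords = pca_2d(np.vstack([e_before, e_after]))      # (200, 2): before rows, then after rows
print(shifts.shape, coords.shape)
```

In a real tool, `e_before` and `e_after` would come from the input-embedding matrices of the two checkpoints (restricted to a curated list of sensitive tokens), and the 2D coordinates would feed a scatter plot with arrows from each token's "before" point to its "after" point, sized by its shift score.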