Create an interpretability dashboard that highlights which parts of a prompt or latent space contributed to specific LM outputs. Help developers 'read' their models' behavior.
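A minimal sketch of the core loop such a dashboard would need: score each prompt token's contribution to an output, then render those scores as a color-coded HTML heatmap. The toy linear scorer and the helper names (`token_saliency`, `render_heatmap`) are hypothetical stand-ins; a real dashboard would compute per-token attributions (e.g. gradient-times-input) from the actual LM.

```python
import numpy as np

def token_saliency(embeddings: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Per-token attribution for a toy linear scorer.

    The score is sum_t (e_t . w); each token's contribution is its own
    dot product, so |e_t . w| (normalized to [0, 1]) serves as saliency.
    A real LM would replace this with gradients w.r.t. token embeddings.
    """
    contrib = embeddings @ weights            # (T,) per-token contribution
    sal = np.abs(contrib)
    return sal / (sal.max() + 1e-9)           # normalize for display

def render_heatmap(tokens: list[str], saliency: np.ndarray) -> str:
    """Emit an HTML snippet coloring each token by its saliency score."""
    spans = [
        f'<span style="background: rgba(255,0,0,{s:.2f})">{tok}</span>'
        for tok, s in zip(tokens, saliency)
    ]
    return " ".join(spans)

# Toy example: four tokens with random 8-dim embeddings.
tokens = ["The", "movie", "was", "terrible"]
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
w = rng.normal(size=8)

sal = token_saliency(emb, w)
html = render_heatmap(tokens, sal)
```

The HTML string can be dropped into any web frontend; the same pattern extends to latent-space views by attributing over hidden activations instead of input embeddings.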