Anne Trafton | MIT News
Develop an uncertainty-aware wrapper for medical LLMs that forces the model to output confidence intervals or source citations before answering. This is crucial for avoiding "confident hallucinations" in diagnostic tools.
Suggested repo: humbleAgent
"AI that knows when it doesn't know."
Estimated effort: 30h
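The wrapper idea above can be sketched in a few lines. This is a minimal illustration, not the humbleAgent implementation: it assumes a prompt convention in which the model is asked to emit `Confidence: <0-1>` and `Source: <citation>` lines, and it rejects any answer that carries neither a parsable confidence above a threshold nor a citation. The function names, regexes, and threshold are all hypothetical choices for this sketch.

```python
import re
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class VettedAnswer:
    text: str                    # the raw model output
    confidence: Optional[float]  # parsed 0-1 confidence, if present
    citation: Optional[str]      # parsed source citation, if present
    accepted: bool               # False when neither signal clears the bar


# Hypothetical output conventions the wrapper asks the model to follow.
CONFIDENCE_RE = re.compile(r"confidence:\s*([01](?:\.\d+)?)", re.IGNORECASE)
CITATION_RE = re.compile(r"source:\s*(\S.*)", re.IGNORECASE)


def uncertainty_aware(model: Callable[[str], str], question: str,
                      min_confidence: float = 0.7) -> VettedAnswer:
    """Wrap a text-generating model and flag 'confident hallucinations':
    answers with neither a confidence score above the threshold nor a
    source citation are marked as not accepted."""
    prompt = (
        "Answer the question. On separate lines, also state "
        "'Confidence: <0-1>' and 'Source: <citation>'.\n"
        f"Question: {question}"
    )
    raw = model(prompt)
    conf_m = CONFIDENCE_RE.search(raw)
    cite_m = CITATION_RE.search(raw)
    confidence = float(conf_m.group(1)) if conf_m else None
    citation = cite_m.group(1).strip() if cite_m else None
    accepted = (
        (confidence is not None and confidence >= min_confidence)
        or citation is not None
    )
    return VettedAnswer(raw, confidence, citation, accepted)


# Usage with a stub standing in for a real medical LLM:
def stub_model(prompt: str) -> str:
    return ("Likely viral pharyngitis.\n"
            "Confidence: 0.85\n"
            "Source: UpToDate, pharyngitis overview")


result = uncertainty_aware(stub_model, "Sore throat, no fever, 3 days?")
print(result.accepted)     # True: confidence clears the 0.7 threshold
```

A real wrapper would replace regex parsing with structured output (e.g. JSON mode) and calibrate the threshold against held-out diagnostic cases, but the gate-before-answer shape is the same.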