Xin Liu, Lu Wang
Build an inference wrapper that implements 'Reasoning Calibration' for LLMs, estimating a confidence score for each generated token. The goal is to keep models from confidently hallucinating in long-form tasks (see the sketch below).
Suggested repo: calibra-gen
"Force your LLM to admit when it's just guessing."
Estimated effort: 50h
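
A minimal sketch of one possible starting point, assuming a Hugging Face `transformers` causal LM: it treats the softmax probability the model assigns to each sampled token as a naive per-token confidence and flags tokens that fall below a threshold. The class name `ConfidenceWrapper`, the `flag_threshold` parameter, and the `gpt2` checkpoint are illustrative assumptions, and this proxy is not the authors' actual 'Reasoning Calibration' method.

```python
# Hypothetical sketch, not the authors' 'Reasoning Calibration' method.
# Confidence proxy: the probability mass the model puts on each token it
# actually emits. Low mass = the model is spreading probability around,
# i.e. plausibly "just guessing".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class ConfidenceWrapper:  # hypothetical name
    def __init__(self, model_name: str = "gpt2", flag_threshold: float = 0.3):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.model.eval()
        # Tokens whose probability falls below this are marked as guesses.
        self.flag_threshold = flag_threshold

    @torch.no_grad()
    def generate_with_confidence(self, prompt: str, max_new_tokens: int = 40):
        inputs = self.tokenizer(prompt, return_tensors="pt")
        out = self.model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,               # greedy, for a deterministic demo
            return_dict_in_generate=True,
            output_scores=True,            # one logits tensor per new token
        )
        # out.sequences includes the prompt; slice off only the new tokens.
        new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
        results = []
        for tok_id, step_logits in zip(new_tokens, out.scores):
            probs = torch.softmax(step_logits[0], dim=-1)
            conf = probs[tok_id].item()  # mass on the chosen token
            results.append({
                "token": self.tokenizer.decode(tok_id),
                "confidence": conf,
                "flagged": conf < self.flag_threshold,
            })
        return results


if __name__ == "__main__":
    wrapper = ConfidenceWrapper()
    for item in wrapper.generate_with_confidence("The capital of Australia is"):
        marker = " <-- guessing?" if item["flagged"] else ""
        print(f"{item['token']!r}\t{item['confidence']:.3f}{marker}")
```

Token probability is a crude proxy (models can be confidently wrong), so a real implementation would calibrate it, e.g. against factual-accuracy labels, before using the flags to make the model admit uncertainty.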