Develop a framework for probing internal 'uncertainty' in LLMs. By surfacing when the model's own uncertainty is high, such a tool helps locate the boundary between genuine model knowledge and hallucination, and makes it possible to have the model explicitly admit ignorance instead of fabricating an answer.
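
One way such a probe could start is by measuring the entropy of the model's next-token distribution and abstaining when it is high. The sketch below is a minimal illustration, not the proposed framework itself: it assumes a Hugging Face causal LM, and the model name, entropy threshold, and helper functions are placeholders chosen for the example.

```python
# Minimal sketch: abstain when next-token entropy is high.
# Assumptions: "gpt2" and the 4.0-nat threshold are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"          # placeholder model
ENTROPY_THRESHOLD = 4.0      # illustrative abstention threshold (nats)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def next_token_entropy(prompt: str) -> float:
    """Entropy (in nats) of the model's next-token distribution for `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())

def answer_or_abstain(prompt: str) -> str:
    """Return a short continuation, or an explicit admission of ignorance."""
    if next_token_entropy(prompt) > ENTROPY_THRESHOLD:
        return "I don't know."
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(answer_or_abstain("The capital of France is"))
```

A fuller framework would go beyond single-token entropy (for example, calibrating thresholds per model or aggregating uncertainty across a whole generated answer), but the same pattern applies: read an internal signal, compare it to a calibrated threshold, and route high-uncertainty cases to an explicit refusal.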