latexr
View original ↗Develop an adversarial framework to test LLM susceptibility to fabricated disinformation. This tool would help security teams audit model grounding before deployment.
Suggested repo: hallucination-stress
"Test if your AI believes in fake realities."
Estimated effort: 60h