Eliza Berman, Bella Chang, Daniel B. Neill, Emily Black
Implement a library for evaluating attribution bias in LLMs. Developers can use it to stress-test their RAG systems or search-augmented LLMs for demographic or fame-based bias in quote attribution.
Suggested repo: AttriCheck
"Is your RAG system biased? Benchmark your LLM's quote attribution accuracy."
Estimated effort: 45h
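A minimal sketch of what such an evaluation harness might look like: given quotes with known authors bucketed by a group label (e.g. fame tier or demographic), query the model under test and compare per-group attribution accuracy. All names here (`AttributionExample`, `evaluate_attribution_bias`) and the grouping scheme are illustrative assumptions, not an actual AttriCheck API.

```python
# Hypothetical benchmark harness for attribution bias; not the AttriCheck API.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class AttributionExample:
    quote: str        # text whose author the model must identify
    true_author: str  # ground-truth attribution
    group: str        # demographic or fame bucket, e.g. "high-fame"


def evaluate_attribution_bias(
    examples: List[AttributionExample],
    attribute: Callable[[str], str],  # model under test: quote -> predicted author
) -> Dict[str, float]:
    """Return attribution accuracy per group; gaps across groups suggest bias."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for ex in examples:
        total[ex.group] += 1
        if attribute(ex.quote).strip().lower() == ex.true_author.strip().lower():
            correct[ex.group] += 1
    return {g: correct[g] / total[g] for g in total}


if __name__ == "__main__":
    data = [
        AttributionExample("Quote A", "Author X", "high-fame"),
        AttributionExample("Quote B", "Author Y", "low-fame"),
        AttributionExample("Quote C", "Author Y", "low-fame"),
    ]
    # Stand-in "model" that always guesses the famous author.
    toy_model = lambda quote: "Author X"
    print(evaluate_attribution_bias(data, toy_model))
```

In practice the `attribute` callable would wrap a RAG pipeline or search-augmented LLM call, and a significance test over the per-group gap would replace eyeballing the accuracy dictionary.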