Taylor Olson
Develop a logic-constrained fine-tuning library that forces models to adhere to moral axioms expressed in formal logic. Build a 'safety' layer that validates outputs against ethical frameworks.
Suggested repo: full-logic
"Inject formal ethics into your LLMs using logical constraints."
Estimated effort: 90h
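A minimal sketch of what the 'safety' layer could look like: moral axioms are encoded as propositional implications, facts are extracted from a model output, and the output is rejected if any axiom is violated. Everything here (the `AXIOMS` encoding, keyword-based fact extraction, the `validate` function) is a hypothetical illustration, not an API from the proposed library; a real system would use a semantic parser or classifier rather than keyword matching.

```python
# Hypothetical sketch of a post-hoc safety layer: each axiom says that if all
# antecedent facts hold for an output, the consequent fact must NOT also hold.
AXIOMS = [
    # If the output gives step-by-step instructions for a harmful activity,
    # it must not be marked "approved".
    ({"gives_instructions", "harmful_activity"}, "approved"),
]

# Crude fact extraction via keyword matching (placeholder for a real parser).
KEYWORDS = {
    "gives_instructions": ["step 1", "here's how", "first,"],
    "harmful_activity": ["weapon", "poison"],
}


def extract_facts(text: str) -> set[str]:
    """Map an output string to the set of propositional facts it triggers."""
    lowered = text.lower()
    return {fact for fact, kws in KEYWORDS.items()
            if any(kw in lowered for kw in kws)}


def validate(text: str):
    """Return (ok, violations): ok is False if any axiom is violated."""
    facts = extract_facts(text) | {"approved"}  # outputs approved by default
    violations = [(ante, bad) for ante, bad in AXIOMS
                  if ante <= facts and bad in facts]
    return (not violations, violations)
```

The layer runs after generation: a violated axiom flags the output for refusal or regeneration, keeping the logical check separate from the model itself.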