Create an 'Agent-Trust' middleware that decouples the LLM's safety-check logic from the execution environment. This would allow users to define their own 'sandbox security profiles' to prevent over-zealous blocking by proprietary models.
Suggested repo: agent-trust
"Control your AI's conscience."
Estimated effort: 30h
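A minimal sketch of what such a middleware might look like. All names here (`SandboxProfile`, `AgentTrustMiddleware`, the `dispatch` API) are hypothetical illustrations, not an existing library: the key idea is that the allow/deny decision lives in a user-supplied profile that the middleware consults before executing an agent action, rather than in the proprietary model's own refusal logic.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SandboxProfile:
    """User-defined sandbox security profile (hypothetical API)."""
    name: str
    allowed_tools: set = field(default_factory=set)
    # Custom predicates: each receives the proposed action dict and
    # returns True to allow it.
    rules: list = field(default_factory=list)

    def permits(self, action: dict) -> bool:
        # Deny any tool not explicitly allowed, then apply user rules.
        if action.get("tool") not in self.allowed_tools:
            return False
        return all(rule(action) for rule in self.rules)

class AgentTrustMiddleware:
    """Sits between the agent and the execution environment and
    enforces the active profile instead of the model's own refusals."""

    def __init__(self, profile: SandboxProfile):
        self.profile = profile

    def dispatch(self, action: dict, execute: Callable[[dict], str]) -> str:
        if self.profile.permits(action):
            return execute(action)
        return f"blocked by profile '{self.profile.name}'"

# Usage: a permissive local-dev profile that allows shell commands,
# except obviously destructive ones.
dev = SandboxProfile(
    name="local-dev",
    allowed_tools={"shell", "read_file"},
    rules=[lambda a: "rm -rf" not in a.get("args", "")],
)
mw = AgentTrustMiddleware(dev)
print(mw.dispatch({"tool": "shell", "args": "ls"}, lambda a: "ok"))
```

Because the profile is plain data plus predicates, users could ship stricter profiles for production and looser ones for local experimentation without touching the model or the execution environment.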