There is currently no open-source tooling that automatically audits and enforces the specific safety heuristics outlined in the Child Safety Blueprint. Developers could build a middleware layer for LLM applications that wraps existing models with these pedagogical and safety guardrails.
Suggested repo: childguard-ai
"Turn OpenAI's safety blueprint into a drop-in middleware for your LLM apps."
Estimated effort: 40h
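A minimal sketch of what such middleware could look like: a wrapper around any LLM callable that runs pre- and post-generation checks and returns a refusal when a check fails. The class name `SafetyMiddleware`, the `BLOCKED_TOPICS` set, and the substring-based heuristic are all hypothetical placeholders; a real implementation would encode the blueprint's actual rules and likely use classifier models rather than keyword matching.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical topic list standing in for the blueprint's real heuristics.
BLOCKED_TOPICS = {"self-harm", "graphic violence"}


@dataclass
class GuardrailResult:
    allowed: bool
    reasons: List[str] = field(default_factory=list)


def check_text(text: str) -> GuardrailResult:
    """Naive substring check; a real audit would be far more sophisticated."""
    hits = [topic for topic in BLOCKED_TOPICS if topic in text.lower()]
    return GuardrailResult(allowed=not hits, reasons=hits)


class SafetyMiddleware:
    """Drop-in wrapper: screens the prompt before the model call and the
    response after it, returning a fixed refusal if either check fails."""

    def __init__(self, llm: Callable[[str], str],
                 refusal: str = "I can't help with that."):
        self.llm = llm
        self.refusal = refusal

    def __call__(self, prompt: str) -> str:
        if not check_text(prompt).allowed:
            return self.refusal
        response = self.llm(prompt)
        if not check_text(response).allowed:
            return self.refusal
        return response


# Usage with a stub model in place of a real LLM client:
stub_model = lambda p: f"echo: {p}"
guarded = SafetyMiddleware(stub_model)
print(guarded("explain photosynthesis"))   # passes both checks
print(guarded("describe graphic violence"))  # blocked at the prompt check
```

The same pattern extends naturally: guardrails become a list of pluggable check functions, and the refusal message can be tailored per failed check.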