
Trending now

Security + Agents + Infrastructure (60)
Claude + Agents (40)
Security + Vulnerability (35)
View all trends →

hypedar

AI trend radar for developers. Catch emerging papers, repos, and discussions before the hype peaks.

About · GitHub · Discord

By the makers of hypedar

Codepawl

Open-source tools for developers.

Explore our tools →
About · Privacy · Terms · X

© 2026 Codepawl


arXiv · 3h ago
5.3

Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules

Cameron Pattison, Lorenzo Manuali, Seth Lazar

View original ↗

Analysis

Viral velocity: low
Implementation gap: yes
Novelty: 8/10
Category: discussion
Topics: reasoning, alignment

Opportunity Brief

Develop a framework for model alignment that allows users to define custom 'moral reasoning' schemas. This would enable local models to ignore illegitimate rules without breaking safety protocols.

Suggested repo: defiant-ai

"Teaching AI to discern unjust rules."

Estimated effort: 100h
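The brief's core idea, a user-defined schema that decides which rules a local model should treat as legitimate before refusing, could be sketched roughly as follows. All names here (`Rule`, `ReasoningSchema`, `should_refuse`) are hypothetical illustrations, not an existing API:

```python
from dataclasses import dataclass

# Hypothetical sketch: a user-supplied "moral reasoning" schema that a local
# model wrapper consults before refusing to help a user evade a rule.

@dataclass
class Rule:
    description: str
    has_legal_basis: bool   # is the rule backed by actual law or policy?
    harms_if_evaded: bool   # would evading it cause concrete harm?

@dataclass
class ReasoningSchema:
    """User-defined criteria for when a rule counts as legitimate."""
    require_legal_basis: bool = True
    block_if_harmful: bool = True

    def rule_is_legitimate(self, rule: Rule) -> bool:
        # Harm-producing rules are always treated as legitimate (safety floor).
        if self.block_if_harmful and rule.harms_if_evaded:
            return True
        # Arbitrary rules with no legal basis are treated as illegitimate.
        if self.require_legal_basis and not rule.has_legal_basis:
            return False
        return True

def should_refuse(schema: ReasoningSchema, rule: Rule) -> bool:
    """Refuse to help evade a rule only if the schema deems it legitimate."""
    return schema.rule_is_legitimate(rule)

# An absurd rule with no legal basis vs. a rule whose evasion causes harm:
absurd = Rule("No blue shirts on Tuesdays",
              has_legal_basis=False, harms_if_evaded=False)
safety = Rule("No sharing of others' medical records",
              has_legal_basis=True, harms_if_evaded=True)

schema = ReasoningSchema()
print(should_refuse(schema, absurd))  # False: don't refuse, rule is illegitimate
print(should_refuse(schema, safety))  # True: keep refusing, safety floor holds
```

The point of the design is that the safety floor (`block_if_harmful`) is checked before the user's legitimacy criteria, so custom schemas can relax refusals for arbitrary rules without disabling refusals for harmful ones.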