hypedar
Feed · Trends · Discover · Showcase · Archive

Trending now

Agents + Code Generation (44)
Linux + Performance (42)
Audio + Copyright + Ethics (39)

hypedar

AI trend radar for developers. Catch emerging papers, repos, and discussions before the hype peaks.

About · GitHub · Discord

By the makers of hypedar

Codepawl

Open-source tools for developers.

Explore our tools →
About · Privacy · Terms · X

© 2026 Codepawl



Feed

YHN · 1h ago
6.2

Every GPU That Mattered

high
hardware · data-visualization · gpu · history · compute
YHN · 7h ago
6.4

Why the majority of vibe coded projects fail

high
agents · llm-ops · software-engineering
MIT AI · 6h ago
4.8

Helping data centers deliver higher performance with less hardware

Researchers developed a system that intelligently balances workloads to improve the efficiency of flash storage hardware in a data center.

low
infrastructure · storage · efficiency · datacenter · optimization
arXiv · 6h ago
5.1

Unveiling Language Routing Isolation in Multilingual MoE Models for Interpretable Subnetwork Adaptation

arXiv:2604.03592v1 Announce Type: new Abstract: Mixture-of-Experts (MoE) models exhibit striking performance disparities across languages, yet the internal mechanisms driving these gaps remain poorly understood. In this work, we conduct a systematic analysis of expert routing patterns in MoE models,

low
llm · moe · analysis
arXiv · 6h ago
5.0

MultiPress: A Multi-Agent Framework for Interpretable Multimodal News Classification

arXiv:2604.03586v1 Announce Type: new Abstract: With the growing prevalence of multimodal news content, effective news topic classification demands models capable of jointly understanding and reasoning over heterogeneous data such as text and images. Existing methods often process modalities indepen

low
agents · multimodal · classification
arXiv · 6h ago
3.9

Text Summarization With Graph Attention Networks

arXiv:2604.03583v1 Announce Type: new Abstract: This study aimed to leverage graph information, particularly Rhetorical Structure Theory (RST) and Co-reference (Coref) graphs, to enhance the performance of our baseline summarization models. Specifically, we experimented with a Graph Attention Networ

low
nlp · summarization
arXiv · 6h ago
5.3

Rethinking Token Prediction: Tree-Structured Diffusion Language Model

arXiv:2604.03537v1 Announce Type: new Abstract: Discrete diffusion language models have emerged as a competitive alternative to auto-regressive language models, but training them efficiently under limited parameter and memory budgets remains challenging. Modern architectures are predominantly based

low
llm · diffusion · inference
arXiv · 6h ago
5.1

LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

arXiv:2604.03532v1 Announce Type: new Abstract: Large language models (LLMs) show strong multilingual capabilities, yet reliably controlling the language of their outputs remains difficult. Representation-level steering addresses this by adding language-specific vectors to model activations at infer

low
llm · inference · steering
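The LangFIR abstract describes representation-level steering: adding a language-specific vector to activations at inference time. A minimal NumPy sketch of that general idea, assuming a hypothetical unit direction `v_lang` and scale `alpha` (the paper's actual vectors are discovered from monolingual data; nothing here is its implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Hypothetical language-direction vector; in practice it would be
# estimated from monolingual data rather than sampled at random.
v_lang = rng.standard_normal(d)
v_lang /= np.linalg.norm(v_lang)

def steer(hidden, direction, alpha=4.0):
    # Representation-level steering: shift every position's
    # activation along the language-specific direction.
    return hidden + alpha * direction

h = rng.standard_normal((10, d))   # activations for 10 token positions
h_steered = steer(h, v_lang)

# Each position moves by exactly alpha along the unit direction.
shift = (h_steered - h) @ v_lang
assert np.allclose(shift, 4.0)
```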
arXiv · 6h ago
4.3

Cultural Authenticity: Comparing LLM Cultural Representations to Native Human Expectations

arXiv:2604.03493v1 Announce Type: new Abstract: Cultural representation in Large Language Model (LLM) outputs has primarily been evaluated through the proxies of cultural diversity and factual accuracy. However, a crucial gap remains in assessing cultural alignment: the degree to which generated con

low
llm · alignment · evaluation
arXiv · 6h ago
5.3

Evolutionary Search for Automated Design of Uncertainty Quantification Methods

arXiv:2604.03473v1 Announce Type: new Abstract: Uncertainty quantification (UQ) methods for large language models are predominantly designed by hand based on domain knowledge and heuristics, limiting their scalability and generality. We apply LLM-powered evolutionary search to automatically discover

low
llm · uncertainty · evolutionary-search
arXiv · 6h ago
5.1

Vocabulary Dropout for Curriculum Diversity in LLM Co-Evolution

arXiv:2604.03472v1 Announce Type: new Abstract: Co-evolutionary self-play, where one language model generates problems and another solves them, promises autonomous curriculum learning without human supervision. In practice, the proposer quickly converges to a narrow distribution of problems that sat

low
llm · training · self-play
arXiv · 6h ago
4.8

The Tool Illusion: Rethinking Tool Use in Web Agents

arXiv:2604.03465v1 Announce Type: new Abstract: As web agents rapidly evolve, an increasing body of work has moved beyond conventional atomic browser interactions and explored tool use as a higher-level action paradigm. Although prior studies have shown the promise of tools, their conclusions are of

low
agents · web · tool-use
arXiv · 6h ago
3.9

Towards a theory of morphology-driven marking in the lexicon: The case of the state

arXiv:2604.03422v1 Announce Type: new Abstract: All languages have a noun category, but its realisation varies considerably. Depending on the language, semantic and/or morphosyntactic differences may be more or less pronounced. This paper explores these variations, using Riffian as a reference point

low
linguistics · nlp
arXiv · 6h ago
4.3

Are Arabic Benchmarks Reliable? QIMMA's Quality-First Approach to LLM Evaluation

arXiv:2604.03395v1 Announce Type: new Abstract: We present QIMMA, a quality-assured Arabic LLM leaderboard that places systematic benchmark validation at its core. Rather than aggregating existing resources as-is, QIMMA applies a multi-model assessment pipeline combining automated LLM judgment with

low
evaluation · inference
arXiv · 6h ago
4.8

Noise Steering for Controlled Text Generation: Improving Diversity and Reading-Level Fidelity in Arabic Educational Story Generation

arXiv:2604.03380v1 Announce Type: new Abstract: Generating diverse, pedagogically valid stories for Arabic early-grade reading assessments requires balancing tight constraints on vocabulary, reading level, and narrative structure against the need to avoid repetitive plots that undermine assessment v

low
inference · diffusion
arXiv · 6h ago
4.8

CresOWLve: Benchmarking Creative Problem-Solving Over Real-World Knowledge

arXiv:2604.03374v1 Announce Type: new Abstract: Creative problem-solving requires combining multiple cognitive abilities, including logical reasoning, lateral thinking, analogy-making, and commonsense knowledge, to discover insights that connect seemingly unrelated pieces of information. However, mo

low
evaluation · reasoning
arXiv · 6h ago
5.3

Knowledge Packs: Zero-Token Knowledge Delivery via KV Cache Injection

arXiv:2604.03270v1 Announce Type: new Abstract: RAG wastes tokens. We propose Knowledge Packs: pre-computed KV caches that deliver the same knowledge at zero token cost. For causal transformers, the KV cache from a forward pass on text F is identical to what a joint pass on F+q would produce - this

low
rag · inference
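The prefix property the Knowledge Packs abstract relies on — that a causal transformer's KV cache for a prefix F is identical whether or not a query follows it — can be checked with a toy single-head causal-attention pass. An illustrative NumPy sketch, not the paper's implementation; all names and shapes here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def causal_attention(x):
    # x: (seq, d). Single-head self-attention with a causal mask.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

F = rng.standard_normal((5, d))      # stand-in for the knowledge text F
q_tok = rng.standard_normal((1, d))  # stand-in for the appended query

out_prefix = causal_attention(F)
out_joint = causal_attention(np.vstack([F, q_tok]))

# Causal masking means positions in F never attend to the query,
# so F's outputs (and its K/V cache) match the prefix-only pass.
assert np.allclose(out_prefix, out_joint[:5])
```

This is why a pre-computed cache for F can be injected and reused across queries at zero prompt-token cost.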
arXiv · 6h ago
5.3

LPC-SM: Local Predictive Coding and Sparse Memory for Long-Context Language Modeling

arXiv:2604.03263v1 Announce Type: new Abstract: Most current long-context language models still rely on attention to handle both local interaction and long-range state, which leaves relatively little room to test alternative decompositions of sequence modeling. We propose LPC-SM, a hybrid autoregres

low
inference · reasoning
arXiv · 6h ago
5.0

VIGIL: An Extensible System for Real-Time Detection and Mitigation of Cognitive Bias Triggers

arXiv:2604.03261v1 Announce Type: new Abstract: The rise of generative AI is posing increasing risks to online information integrity and civic discourse. Most concretely, such risks can materialise in the form of mis- and disinformation. As a mitigation, media-literacy and transparency tools have be

low
agents · rag · multimodal
arXiv · 6h ago
5.1

Why Attend to Everything? Focus is the Key

arXiv:2604.03260v1 Announce Type: new Abstract: We introduce Focus, a method that learns which token pairs matter rather than approximating all of them. Learnable centroids assign tokens to groups; distant attention is restricted to same-group pairs while local attention operates at full resolution.

low
inference · attention
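The Focus abstract's masking scheme — full-resolution attention within a local window, distant attention only between same-group tokens — can be sketched as a boolean mask. A hypothetical NumPy sketch under stated assumptions (fixed `window`, precomputed `group_ids` from nearest-centroid assignment; not the paper's actual method):

```python
import numpy as np

def focus_mask(group_ids, window=2):
    # Allow a causal pair (i, j), j <= i, either when the tokens fall
    # within the local window (full-resolution local attention) or when
    # they share a centroid group (restricted distant attention).
    n = len(group_ids)
    i, j = np.indices((n, n))
    causal = j <= i
    local = (i - j) < window
    same_group = group_ids[:, None] == group_ids[None, :]
    return causal & (local | same_group)

mask = focus_mask(np.array([0, 1, 0, 1, 0]), window=2)
# Token 4 (group 0) may attend to distant token 0 (same group)
# but not to distant token 1 (group 1, outside the local window).
assert mask[4, 0] and not mask[4, 1]
```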