MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head • arXiv:2601.07832 • Published Jan 12
Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers • arXiv:2601.17367 • Published 27 days ago
Why Attention Patterns Exist: A Unifying Temporal Perspective Analysis • arXiv:2601.21709 • Published 22 days ago
LLaDA2.1: Speeding Up Text Diffusion via Token Editing • arXiv:2602.08676 • Published 11 days ago
MOVA: Towards Scalable and Synchronized Video-Audio Generation • arXiv:2602.08794 • Published 11 days ago
OPUS: Towards Efficient and Principled Data Selection in Large Language Model Pre-training in Every Iteration • arXiv:2602.05400 • Published 15 days ago
When to Memorize and When to Stop: Gated Recurrent Memory for Long-Context Reasoning • arXiv:2602.10560 • Published 9 days ago
MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models • arXiv:2602.10934 • Published 9 days ago
BitDance: Scaling Autoregressive Generative Models with Binary Tokens • arXiv:2602.14041 • Published 5 days ago
STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens • arXiv:2602.15620 • Published 3 days ago
SLA2: Sparse-Linear Attention with Learnable Routing and QAT • arXiv:2602.12675 • Published 7 days ago