Score: 5.7 · Source: cs.CL updates on arXiv.org · Published: 2026-04-08
Scoring rationale: an AI research paper of some reference value
arXiv:2604.05250v1 Announce Type: cross Abstract: Masked Diffusion Models (MDMs) offer a promising alternative to autoregressive language models by enabling parallel token generation and bidirectional context modeling. However, their inference speed is significantly limited by the inability to cache key-value pairs due to bidirectional attention, requiring $O(N^2)$ computations at each generation step. While recent methods like FastDLLM and DkvCache improve inference speed through attention approximations and caching strategies, they achieve speedups at the cost of generation quality. We propo
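The caching limitation described above can be illustrated with a small NumPy sketch (not from the paper; the `attention` function and toy shapes are illustrative): with a causal mask, appending a token leaves every earlier token's output unchanged, so its keys/values can be cached, whereas with bidirectional attention every earlier output depends on the new token, forcing full recomputation.

```python
import numpy as np

def attention(q, k, v, causal):
    """Scaled dot-product self-attention over one sequence (toy sketch)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    if causal:
        n = scores.shape[0]
        # mask out positions j > i so token i only sees tokens 0..i
        mask = np.triu(np.ones((n, n), dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
x5 = rng.normal(size=(5, 8))                    # hidden states for 5 tokens
x6 = np.vstack([x5, rng.normal(size=(1, 8))])   # same 5 tokens + 1 new one

# Causal: outputs for the first 5 tokens are identical after appending,
# so their K/V entries can be cached and reused.
causal5 = attention(x5, x5, x5, causal=True)
causal6 = attention(x6, x6, x6, causal=True)
print(np.allclose(causal5, causal6[:5]))   # True

# Bidirectional: every earlier output shifts when a token is appended,
# so cached K/V state would be stale -> O(N^2) work per step.
bi5 = attention(x5, x5, x5, causal=False)
bi6 = attention(x6, x6, x6, causal=False)
print(np.allclose(bi5, bi6[:5]))           # False
```

This is exactly the asymmetry the abstract points to: autoregressive (causal) models amortize attention via a KV cache, while the bidirectional attention in MDMs invalidates that cache on every generation step.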