
Decoding by Perturbation: Mitigating MLLM Hallucinations via Dynamic Textual Perturbation

Academic Frontier · score 3.0 — Moderate AI relevance +novelty(1) +practical(1)
Original source: cs.CL updates on arXiv.org

Published 2026-04-15


arXiv:2604.12424v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) frequently hallucinate during inference, partly because language priors dominate visual evidence. Existing training-free mitigation methods either perturb the visual representation, deviating from the natural image distribution, or apply intrusive manipulations that compromise the model's inherent generative fluency. We introduce a novel perspective: multimodal hallucination manifests as…
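The general pattern the abstract alludes to, perturbing an input and contrasting the resulting output distributions at decode time, is often realized as contrastive decoding. A minimal sketch follows; the function name, the mixing weight `alpha`, and the linear combination rule are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def contrastive_adjust(logits_orig, logits_pert, alpha=0.5):
    """Hypothetical contrastive-decoding step (assumed form).

    Amplifies the difference between logits conditioned on the
    original input and logits conditioned on a perturbed input,
    down-weighting tokens the model prefers regardless of the input
    (i.e., those driven by language priors rather than evidence).
    """
    return (1 + alpha) * logits_orig - alpha * logits_pert

# Toy 4-token vocabulary: compare original vs. perturbed-input logits.
orig = np.array([2.0, 1.0, 0.5, 0.1])  # logits on the original input
pert = np.array([2.0, 1.5, 0.2, 0.1])  # logits on the perturbed input
adj = contrastive_adjust(orig, pert, alpha=0.5)
```

Here the token whose logit is inflated by the perturbation (index 1) is suppressed in `adj` relative to `orig`, while tokens grounded in the unperturbed input keep their ranking.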