Tag: 幻觉 (Hallucination)
All the articles with the tag "幻觉" (hallucination).
- 6.6
Hackers or Hallucinators? A Comprehensive Analysis of LLM-Based Automated Penetration Testing
arXiv:2604.05719v1 Announce Type: cross Abstract: The rapid advancement of Large Language Models (LLMs) has created new opportunities for Automated...
- 6.4
From Hallucination to Structure Snowballing: The Alignment Tax of Constrained Decoding in LLM Reflection
arXiv:2604.06066v1 Announce Type: new Abstract: Intrinsic self-correction in Large Language Models (LLMs) frequently fails in open-ended reasoning ...