llm-hallucination-survey
This survey examines hallucination in large language models, focusing on how generated content can diverge from the user's input, from previously generated context, or from established world knowledge. It reviews evaluation methods, explanation frameworks, and mitigation strategies for input-conflicting, context-conflicting, and fact-conflicting hallucinations. The survey highlights studies and benchmarks across domains such as machine translation and summarization, providing insights into research that seeks to improve the factual reliability of these models.
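As a rough illustration of that three-way taxonomy (not code from the survey or this repository), the sketch below pairs each category with a hypothetical prompt/output example; the `HallucinationType` enum and the example texts are assumptions made purely for clarity.

```python
from enum import Enum

class HallucinationType(Enum):
    """Three-way hallucination taxonomy described in the survey."""
    INPUT_CONFLICTING = "output conflicts with the user's input (source text or instruction)"
    CONTEXT_CONFLICTING = "output conflicts with content the model generated earlier"
    FACT_CONFLICTING = "output conflicts with established world knowledge"

# Hypothetical illustrations (invented for this sketch, not taken from the survey):
examples = [
    ("Translate 'Bonjour' into English.",
     "Good evening.",
     HallucinationType.INPUT_CONFLICTING),
    ("Continue the story. (Earlier output stated Alice was born in 1990.)",
     "Alice, born in 1985, moved to Paris.",
     HallucinationType.CONTEXT_CONFLICTING),
    ("Who wrote 'Hamlet'?",
     "Charles Dickens wrote 'Hamlet'.",
     HallucinationType.FACT_CONFLICTING),
]

for prompt, output, kind in examples:
    print(f"[{kind.name}] prompt: {prompt!r} -> output: {output!r}")
    print(f"    ({kind.value})")
```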