# awesome-hallucination-detection
A curated list of methods for detecting and mitigating hallucinations in large language models (LLMs). It covers detection approaches such as uncertainty estimation, graph-based evaluation, and multimodal hallucination detection, alongside benchmark datasets and evaluation metrics for assessing model reliability. It also collects mitigation techniques, including context-aware decoding and interactive alignment, that target factual inconsistencies, making it useful for developers working to improve the factual accuracy and trustworthiness of LLMs.
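
To make one of the techniques above concrete, here is a minimal sketch of uncertainty estimation as a hallucination signal: it scores a piece of text by the average token-level entropy of a causal LM's predictive distribution, where higher entropy suggests the model was less certain. The model name (`gpt2`), the helper `mean_token_entropy`, the entropy threshold idea, and the example sentences are illustrative assumptions, not the method of any particular paper in this list.

```python
# Illustrative sketch: average token-level predictive entropy as a
# simple uncertainty-based hallucination signal (assumed setup, not
# tied to a specific paper in this list).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_token_entropy(text: str) -> float:
    """Average predictive entropy (in nats) over the tokens of `text`.

    Higher values mean the model assigned flatter next-token
    distributions while scoring the text, i.e. it was less certain.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab)
    # The distribution at position t predicts token t+1, so drop the
    # last position and compute entropy of each predictive distribution.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return entropy.mean().item()

if __name__ == "__main__":
    for answer in [
        "Paris is the capital of France.",
        "The moon is made of green cheese and blue wires.",
    ]:
        print(f"{mean_token_entropy(answer):.2f}  {answer}")
```

Raw entropy is a coarse signal on its own; published methods typically refine it with sampling-based self-consistency checks, calibration, or external evidence, as the entries below illustrate.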