hallucination-leaderboard
Learn about the Hughes Hallucination Evaluation Model (HHEM-2.1) and how it is used to measure hallucination rates in large language models. This regularly updated leaderboard ranks LLMs by the factual consistency of their summaries and documents the methodology behind the rankings. It is a useful resource for anyone tracking progress in reducing hallucinations and improving factual summarization, and the underlying datasets and prior research are linked for further exploration of LLM evaluation.
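
As a quick illustration of how this kind of evaluation works, the sketch below scores source/summary pairs for factual consistency and derives a simple hallucination rate. It assumes the open HHEM checkpoint is published on Hugging Face as `vectara/hallucination_evaluation_model` and exposes the `predict()` helper described on its model card; the model name, interface, and the 0.5 threshold used here are assumptions to verify against the model card and the leaderboard's stated methodology.

```python
# Minimal sketch: scoring summaries for factual consistency with HHEM.
# Assumes the open HHEM checkpoint is available on Hugging Face as
# 'vectara/hallucination_evaluation_model' and ships a custom predict()
# helper (loaded via trust_remote_code) -- verify against the model card.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each pair is (source document, candidate summary).
pairs = [
    ("The capital of France is Paris.", "Paris is the capital of France."),
    ("The capital of France is Paris.", "Berlin is the capital of France."),
]

# predict() returns one consistency score per pair in [0, 1];
# higher means the summary is better supported by the source.
scores = model.predict(pairs)

# A summary counts as a hallucination when its score falls below a chosen
# threshold (0.5 here, as an illustrative value); the hallucination rate is
# the fraction of summaries below that threshold.
threshold = 0.5
hallucination_rate = sum(float(s) < threshold for s in scores) / len(scores)
print(f"Scores: {scores}")
print(f"Hallucination rate: {hallucination_rate:.2f}")
```

In the leaderboard setting, the same scoring step is applied to each model's summaries of a fixed document set, and models are then ranked by the resulting consistency and hallucination statistics.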