Despite their impressive generative capabilities, LLMs are hindered by fact-conflicting hallucinations in real-world applications. Accurately identifying hallucinations in LLM-generated text, especially in complex inferential scenarios, remains a relatively unexplored problem. To address this gap, we present FactCHD, a benchmark dedicated to detecting fact-conflicting hallucinations from LLMs. FactCHD features a diverse dataset spanning several factuality patterns: vanilla, multi-hop, comparison, and set operation. A distinctive element of FactCHD is its integration of fact-based evidence chains, which significantly deepens the evaluation of detectors' explanations. Experiments on different LLMs expose the shortcomings of current approaches in accurately detecting factual errors. Furthermore, we introduce Truth-Triangulator, which synthesizes reflective considerations from a tool-enhanced ChatGPT and a LoRA-tuned Llama2, aiming to yield more credible detection by combining predictive results and evidence.
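To make the task concrete, below is a minimal sketch of what a single benchmark instance and a detector's expected output might look like. The field names, the query, and the evidence triples are illustrative assumptions for exposition, not the released FactCHD schema.

# Hedged sketch of one instance; field names and values are illustrative
# assumptions, not the released FactCHD schema.
example = {
    "pattern": "comparison",  # one of: vanilla, multi-hop, comparison, set operation
    "query": "Which is taller, the Eiffel Tower or the Empire State Building?",
    "response": "The Eiffel Tower is taller than the Empire State Building.",
    "label": "NON-FACTUAL",   # gold verdict: the response conflicts with known facts
    "evidence_chain": [       # fact-based chain grounding the verdict
        "(Eiffel Tower, height, ~330 m)",
        "(Empire State Building, height, ~443 m)",
    ],
}

# A detector is scored both on its verdict and on how well its
# explanation matches the gold evidence chain.
prediction = {
    "label": "NON-FACTUAL",
    "explanation": "330 m < 443 m, so the claim is false.",
}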
Figure 1: Illustration of a fact-conflicting hallucination detection example from FactCHD, where the green part marks the factual explanation core (the body) of the evidence chain.
Figure 2: Overview of the construction process of FactCHD.
Figure 3: Overview of Truth-Triangulator. In our experiments, the "Truth Guardian" is based on Llama2-7B-chat with LoRA, while the "Truth Seeker" is based on tool-enhanced GPT-3.5-turbo. The "Fact Verdict Manager" collects evidence from these different viewpoints to enhance the reliability and accuracy of the final conclusion.
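The triangulation flow of Figure 3 can be summarized in a short sketch. Everything below is an assumption for illustration: the function names, the stubbed judges, and the conflict-resolution rule (deferring to the tool-grounded Truth Seeker) are simplifications of the paper's Fact Verdict Manager, not the released implementation.

from dataclasses import dataclass

@dataclass
class Verdict:
    label: str      # "FACTUAL" or "NON-FACTUAL"
    evidence: str   # explanation / evidence chain behind the label

def truth_guardian(query: str, response: str) -> Verdict:
    # Stub for the LoRA-tuned Llama2-7B-chat judge; a real system would
    # prompt the model with (query, response) and parse its judgment.
    return Verdict("NON-FACTUAL", "parametric-knowledge check (stub)")

def truth_seeker(query: str, response: str) -> Verdict:
    # Stub for tool-enhanced GPT-3.5-turbo; its evidence would come from
    # retrieved documents (e.g. web search) rather than model parameters.
    return Verdict("NON-FACTUAL", "retrieved-evidence check (stub)")

def fact_verdict_manager(g: Verdict, s: Verdict) -> Verdict:
    # Agreement: merge evidence from both viewpoints into one verdict.
    if g.label == s.label:
        return Verdict(g.label, f"{g.evidence} | {s.evidence}")
    # Conflict: here we simply defer to the tool-grounded Truth Seeker
    # (an assumed tie-breaking rule, simpler than the paper's manager).
    return s

query = "Which is taller, the Eiffel Tower or the Empire State Building?"
response = "The Eiffel Tower is taller than the Empire State Building."
print(fact_verdict_manager(truth_guardian(query, response),
                           truth_seeker(query, response)))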
Table 1: Results on FactCLS and ExpMatch (abbreviated as Cls. and Exp.), along with the overall FactCHD score estimated by each method.
Figure 4: Case analysis of out-of-distribution examples from ChatGPT using Truth-Triangulator.
@article{chen2024factchd,
  title={FactCHD: Benchmarking Fact-Conflicting Hallucination Detection},
  author={Xiang Chen and Duanzheng Song and Honghao Gui and Chenxi Wang and Ningyu Zhang and Yong Jiang and Fei Huang and Chengfei Lv and Dan Zhang and Huajun Chen},
  year={2024},
  eprint={2310.12086},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.