Code for the paper "Contrastive Learning Reduces Hallucination in Conversations" (AAAI 2023).
We propose MixCL, a contrastive learning framework to reduce the hallucination of LM-based knowledge-grounded dialogue systems.
The code for extracting spans is available in `mixup.py`, where we use Stanza and spaCy to identify entities and constituents in text.
The code for model training and testing is available in `run.py`.
The dataset (i.e., Wizard-of-Wikipedia) is placed in `/dataset`, and `/utils` provides the code for IO and evaluation.
- Code for dataset pre-processing: https://github.com/sunnweiwei/GenKS/blob/main/process_wizard.py
- Pre-processed datasets are shared at https://drive.google.com/file/d/1ccPi-f8x_yqvVkGVN8rnNkkevrVFyY3D/view?usp=drive_link
We provide example model outputs on the WoW (seen) test set in `outputs_on_seen.txt`.
@inproceedings{Sun2023ContrastiveLR,
  title={Contrastive Learning Reduces Hallucination in Conversations},
  author={Weiwei Sun and Zhengliang Shi and Shen Gao and Pengjie Ren and Maarten de Rijke and Zhaochun Ren},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2023},
  pages={13618--13626}
}