Contrastive Learning Reduces Hallucination in Conversations

Code for the paper "Contrastive Learning Reduces Hallucination in Conversations" (AAAI 2023).

We propose MixCL, a contrastive learning framework that reduces hallucination in LM-based knowledge-grounded dialogue systems.
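As an illustration of the general idea only (not the exact objective used in the paper), the sketch below contrasts a knowledge-grounded response with a hallucinated one under a seq2seq LM and penalizes the model when the hallucinated response scores higher. The BART checkpoint, helper names, and margin formulation are assumptions made for this example.

```python
# A minimal, generic sketch of a response-level contrastive objective
# (illustrative only; not the paper's exact loss). The model is pushed to
# assign higher likelihood to the knowledge-grounded (positive) response
# than to the hallucinated (negative) one. Checkpoint name is an assumption.
import torch.nn.functional as F
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def sequence_log_likelihood(context: str, response: str):
    """Sum of token log-probabilities of `response` given `context`."""
    enc = tokenizer(context, return_tensors="pt")
    labels = tokenizer(response, return_tensors="pt").input_ids
    out = model(**enc, labels=labels)
    logp = F.log_softmax(out.logits, dim=-1)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_logp.sum()

def contrastive_loss(context, positive, negative, margin=1.0):
    """Margin loss: the grounded response should outscore the hallucinated one."""
    pos = sequence_log_likelihood(context, positive)
    neg = sequence_log_likelihood(context, negative)
    return F.relu(margin - (pos - neg))
```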


Models

The code for extracting spans is available at mixup.py, where we use stanza and spacy to identify entities and constituencies in text.
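As a rough illustration of that pipeline (the model names and the helper below are assumptions for the example, not the exact code in mixup.py), one could combine spaCy's named-entity recognizer with Stanza's constituency parser as follows:

```python
# A minimal sketch of span extraction with spaCy (entities) and Stanza
# (constituents). Model names and the NP/VP filter are assumptions.
import spacy
import stanza

nlp_spacy = spacy.load("en_core_web_sm")  # assumed spaCy model
# stanza.download("en") may be required on first use
nlp_stanza = stanza.Pipeline(lang="en", processors="tokenize,pos,constituency")

def extract_spans(text):
    """Return entity spans and NP/VP constituent spans found in `text`."""
    spans = []

    # Named entities via spaCy
    for ent in nlp_spacy(text).ents:
        spans.append((ent.text, ent.label_))

    # Constituents via Stanza's constituency parser
    for sentence in nlp_stanza(text).sentences:
        stack = [sentence.constituency]
        while stack:
            node = stack.pop()
            if node.children and node.label in {"NP", "VP"}:
                spans.append((" ".join(node.leaf_labels()), node.label))
            stack.extend(node.children)

    return spans

print(extract_spans("The Eiffel Tower is located in Paris."))
```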

The code for model training and testing is available at run.py.

Datasets

The dataset (Wizard-of-Wikipedia) is located in /dataset, and /utils provides the code for I/O and evaluation.

Evaluation

We provide example model outputs on the WoW Seen test set in outputs_on_seen.txt.
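For reference, unigram F1 is one of the metrics commonly reported on WoW; the standalone sketch below only illustrates that metric and is not the implementation in /utils.

```python
# A minimal sketch of unigram F1 between a generated response and a
# reference string (illustrative only; tokenization here is plain split).
from collections import Counter

def unigram_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the eiffel tower is in paris",
                 "the eiffel tower is located in paris"))
```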

Cite

@inproceedings{Sun2023ContrastiveLR,
  title={Contrastive Learning Reduces Hallucination in Conversations},
  author={Weiwei Sun and Zhengliang Shi and Shen Gao and Pengjie Ren and M. de Rijke and Zhaochun Ren},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2023},
  pages={13618--13626}
}
