Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models | EMNLP Findings 2024
Hallucinate: information on GPT, LLMs, AI chat, OpenAI, and Sam Altman
The full pipeline for creating the UHGEval hallucination dataset
Antibodies for LLM hallucinations (grouping LLM-as-a-judge, NLI, and reward-model detectors); a minimal NLI sketch follows the list below
The purpose of this application is to test LLM-generated interpretations of medical observations. The explanations are generated fully automatically by a large language model. This application is intended for experimental purposes only; it does not support real-world cases and does not replace advice from medical professionals.
LangChain tutorial using Gemini
Code for "The Curious Case of Hallucinations in Neural Machine Translation".
[TruthGPT](https://github.com/SingularityLabs-ai/TruthGPT-mini) for Google
This repo aims to remove or minimize hallucinations introduced by large language models during knowledge graph (KG) construction
This is the repository for the paper "DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models" (EMNLP 2024 Findings)
[ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding"
Code for PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation (Rebuffel, Soulier, Scoutheeten, Gallinari; INLG 2020)
Hierarchical Gaussian Filter (HGF) model of the conditioned hallucinations task (Powers et al., 2017)
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
A PyTorch implementation of the paper Thinking Hallucination for Video Captioning.
mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation
Code for Controlling Hallucinations at Word Level in Data-to-Text Generation (C. Rebuffel, M. Roberti, L. Soulier, G. Scoutheeten, R. Cancelliere, P. Gallinari)
An Easy-to-use Hallucination Detection Framework for LLMs.
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
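The "Antibodies" entry above groups three common detector families (LLM-as-a-judge, NLI, and reward models). As an illustration of the NLI route only, here is a minimal sketch that flags an output as a possible hallucination when an off-the-shelf NLI model does not find it entailed by the source text. The model checkpoint, label names, and threshold are assumptions chosen for illustration, not code from any repository listed here.

```python
# Minimal sketch of NLI-based hallucination flagging (illustrative only).
# Assumption: the "roberta-large-mnli" checkpoint, its label set, and the
# 0.5 threshold are placeholders, not the configuration of any repo above.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def looks_hallucinated(source: str, generated: str, threshold: float = 0.5) -> bool:
    """Return True when `generated` is not clearly entailed by `source`."""
    # The pipeline accepts premise/hypothesis pairs as {"text", "text_pair"} dicts.
    result = nli([{"text": source, "text_pair": generated}])[0]
    # roberta-large-mnli predicts CONTRADICTION / NEUTRAL / ENTAILMENT.
    return not (result["label"] == "ENTAILMENT" and result["score"] >= threshold)

if __name__ == "__main__":
    src = "The meeting was moved from Monday to Wednesday."
    out = "The meeting will take place on Friday."
    print(looks_hallucinated(src, out))  # expected: True (claim not supported by source)
```

In practice such an entailment check is usually run sentence by sentence and combined with the other detector families (LLM-as-a-judge scores, reward models) rather than used alone.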