Hallucinate - GPT - LLM - AI Chat - OpenAI - Sam Altman info
Antibodies for LLM hallucinations (combining LLM-as-a-judge, NLI, and reward models)
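Several of the detection approaches listed on this page lean on natural language inference (NLI): a generated claim that is not entailed by its source text is flagged as a possible hallucination. The snippet below is only a minimal illustration of that idea using an off-the-shelf MNLI checkpoint; the model name and the entailment threshold are illustrative assumptions, not taken from any repository listed here.

```python
# Minimal sketch: flag a generated claim as a possible hallucination when an
# NLI model does not find it entailed by the source text.
# Assumptions: the "microsoft/deberta-large-mnli" checkpoint and the 0.5
# threshold are illustrative choices, not part of any repo on this page.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_probability(source: str, claim: str) -> float:
    """Probability that `claim` is entailed by `source` under the NLI model."""
    inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # Read the label mapping from the model config instead of hard-coding it.
    labels = {model.config.id2label[i].lower(): p.item() for i, p in enumerate(probs)}
    return labels["entailment"]

source = "The Eiffel Tower was completed in 1889 and stands in Paris."
claim = "The Eiffel Tower was completed in 1925."
if entailment_probability(source, claim) < 0.5:  # illustrative threshold
    print("possible hallucination:", claim)
```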
[TruthGPT](https://github.com/SingularityLabs-ai/TruthGPT-mini) for Google
This repo aims to remove or minimize hallucinations introduced by large language models in the development of knowledge graphs (KGs)
Hierarchical Gaussian Filter (HGF) model of the conditioned hallucinations task (Powers et al., 2017)
The purpose of this application is to test LLM-generated interpretations of medical observations. The explanations are generated fully automatically by a large language model. This application should be used for experimental purposes only; it does not provide support for real-world cases and does not replace advice from medical professionals.
Code related to the paper "On hallucinations in tomographic imaging"
Code for "The Curious Case of Hallucinations in Neural Machine Translation".
This is the repository for the paper "DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models"
The full pipeline for creating the UHGEval hallucination dataset
Code for PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation (Rebuffel, Soulier, Scoutheeten, Gallinari; INLG 2020)
A PyTorch implementation of the paper Thinking Hallucination for Video Captioning.
Code for Controlling Hallucinations at Word Level in Data-to-Text Generation (C. Rebuffel, M. Roberti, L. Soulier, G. Scoutheeten, R. Cancelliere, P. Gallinari)
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
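SAC3 and several other entries above share the same intuition: a hallucinating model tends to give inconsistent answers when a question is re-asked or rephrased. The sketch below shows only that generic sampling-and-cross-check idea, not the SAC3 algorithm itself; `ask_model` is a hypothetical callable you would supply (e.g., a wrapper around an LLM API).

```python
# Generic sketch of consistency-based hallucination detection: sample several
# answers and measure how often they agree. `ask_model` is hypothetical; plug
# in your own LLM call. This is NOT the official SAC3 algorithm, only the
# underlying sampling-and-cross-check intuition.
from collections import Counter
from typing import Callable, List


def normalize(answer: str) -> str:
    """Crude normalization so trivially different answers still match."""
    return " ".join(answer.lower().strip().rstrip(".").split())


def consistency_score(ask_model: Callable[[str], str], question: str, n_samples: int = 5) -> float:
    """Fraction of sampled answers that agree with the most common answer."""
    answers: List[str] = [normalize(ask_model(question)) for _ in range(n_samples)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n_samples


# Usage: treat a low-agreement answer as a likely hallucination.
# score = consistency_score(my_llm_call, "Who wrote 'The Selfish Gene'?")
# if score < 0.6:  # illustrative threshold
#     print("low self-consistency; treat the answer with caution")
```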
An Easy-to-use Hallucination Detection Framework for LLMs.
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate.
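A leaderboard of this kind ultimately reduces to measuring a hallucination rate per model on a shared benchmark and sorting by it. The sketch below shows only that bookkeeping; the model names and counts are made-up placeholders, not results from the initiative above.

```python
# Minimal bookkeeping for a hallucination leaderboard: count flagged outputs
# per model and rank by hallucination rate. All names and numbers below are
# made-up placeholders, not results from any project listed on this page.
from dataclasses import dataclass


@dataclass
class ModelResult:
    name: str
    n_outputs: int          # outputs evaluated on the shared benchmark
    n_hallucinated: int     # outputs flagged by the chosen detector

    @property
    def hallucination_rate(self) -> float:
        return self.n_hallucinated / self.n_outputs


results = [
    ModelResult("model-a", 1000, 38),   # placeholder numbers
    ModelResult("model-b", 1000, 71),
    ModelResult("model-c", 1000, 25),
]

for rank, r in enumerate(sorted(results, key=lambda r: r.hallucination_rate), start=1):
    print(f"{rank}. {r.name}: {r.hallucination_rate:.1%} hallucination rate")
```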
[ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding"