Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation and hallucination alleviation.
Xun Liang1*, Shichao Song1*, Zifan Zheng2*, Hanyu Wang1, Qingchen Yu2, Xunkai Li3, Rong-Hua Li3, Yi Wang4, Zhonghao Wang4, Feiyu Xiong2, Zhiyu Li2†
1RUC,
2IAAR,
3BIT,
4Xinhua
*Equal contribution,
†Corresponding author (lizy@iaar.ac.cn)
Important
- Consider giving our repository a star so that you receive the latest news (paper list updates, new comments, etc.);
- If you want to cite our work, here is our bibtex entry: CITATION.bib.
- 2024/10/26 We have created a WeChat group for discussing reasoning and hallucination in LLMs.
- 2024/09/18 Paper v3.0 released, along with a relevant Twitter thread.
- 2024/08/24 Updated the paper list for a better reading experience. Link. Ongoing updates.
- 2024/07/22 Our paper ranked first on Hugging Face Daily Papers! Link.
- 2024/07/21 Our paper is now available on arXiv. Link.
Welcome to the GitHub repository for our survey paper titled "Internal Consistency and Self-Feedback in Large Language Models: A Survey." The survey's goal is to provide a unified perspective on the self-evaluation and self-updating mechanisms in LLMs, encapsulated within the frameworks of Internal Consistency and Self-Feedback.
This repository includes three key resources:
- expt-consistency-types: Code and results for measuring consistency at different levels.
- expt-gpt4o-responses: Results from five different GPT-4o responses to the same query (a minimal sampling sketch follows this list).
- Paper List: A comprehensive list of references related to our survey.
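As a toy illustration of the expt-gpt4o-responses setup, the sketch below samples several GPT-4o responses to a single query and reports a crude agreement score. This is a minimal sketch, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the query, the five-sample setting, and the exact-match agreement metric are illustrative choices, not the repository's actual code.

```python
# Minimal sketch (not the repository's code): sample five GPT-4o responses to the
# same query and compute a crude exact-match agreement score.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from collections import Counter

from openai import OpenAI

client = OpenAI()
QUERY = "In which year was the Eiffel Tower completed?"  # hypothetical example query

# Request five independent samples of the same prompt (temperature > 0 so they can differ).
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": QUERY}],
    n=5,
    temperature=1.0,
)
responses = [choice.message.content.strip() for choice in completion.choices]
for i, response in enumerate(responses, start=1):
    print(f"Response {i}: {response}")

# A crude response-level consistency signal: how many samples match the most common answer.
modal_response, count = Counter(responses).most_common(1)[0]
print(f"Agreement: {count}/{len(responses)} samples match the modal response.")
```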
Here we list the most important references cited in our survey, as well as the papers we consider worth noting. This list will be updated regularly.
These are some of the most relevant surveys related to our paper.
- A Survey on the Honesty of Large Language Models
CUHK, arXiv, 2024 [Paper] [Code]
- Awesome LLM Reasoning
NTU, GitHub, 2024 [Code]
- Awesome LLM Strawberry
NVIDIA, GitHub, 2024 [Code]
- Extrinsic Hallucinations in LLMs
OpenAI, Blog, 2024 [Paper]
- When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs
PSU, arXiv, 2024 [Paper]
- A Survey on Self-Evolution of Large Language Models
PKU, arXiv, 2024 [Paper] [Code]
- Demystifying Chains, Trees, and Graphs of Thoughts
ETH, arXiv, 2024 [Paper]
- Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
UCSB, TACL, 2024 [Paper] [Code]
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications
Nankai, arXiv, 2023 [Paper]
From the various forms in which an LLM expresses itself, we can derive various consistency signals, which in turn help update those expressions.
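Before the paper list, here is one concrete example of such a signal. This is a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (both illustrative choices, not the method of any specific paper below): the mean log-probability a model assigns to its own greedy continuation is one of the simplest internal-state-style confidence signals.

```python
# Minimal sketch: derive a simple confidence/consistency signal from the token
# probabilities of a small causal LM. Assumes Hugging Face transformers and the
# public gpt2 checkpoint; the prompt is a hypothetical example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
    )

# Average log-probability of the generated tokens: higher means the model was
# more confident in its own continuation (one simple "internal" signal).
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
log_probs = []
for step_scores, token_id in zip(out.scores, gen_tokens):
    step_log_probs = torch.log_softmax(step_scores[0], dim=-1)
    log_probs.append(step_log_probs[token_id].item())

print("Generated:", tokenizer.decode(gen_tokens, skip_special_tokens=True))
print("Mean token log-prob:", sum(log_probs) / len(log_probs))
```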
- Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs
NUS, ICLR, 2024 [Paper] [Code]
- Linguistic Calibration of Long-Form Generations
Stanford, ICML, 2024 [Paper] [Code]
- InternalInspector I2: Robust Confidence Estimation in LLMs through Internal States
VT, arXiv, 2024 [Paper]
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations
UCLA, arXiv, 2024 [Paper]
- TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness
UoEdin, arXiv, 2024 [Paper] [Code]
- Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
Oxford, ICLR, 2023 [Paper] [Code]
- Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness
UMD, arXiv, 2023 [Paper]
- Teaching models to express their uncertainty in words
Oxford, TMLR, 2022 [Paper] [Code]
- Language Models (Mostly) Know What They Know
Anthropic, arXiv, 2022 [Paper]
- Investigating Factuality in Long-Form Text Generation: The Roles of Self-Known and Self-Unknown
Salesforce, arXiv, 2024 [Paper]
- Prompt-Guided Internal States for Hallucination Detection of Large Language Models
Nankai, arXiv, 2024 [Paper] [Code]
- Detecting hallucinations in large language models using semantic entropy
Oxford, Nature, 2024 [Paper]
- INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection
Alibaba, ICLR, 2024 [Paper]
- LLM Internal States Reveal Hallucination Risk Faced With a Query
HKUST, arXiv, 2024 [Paper]
- Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals
Fudan, arXiv, 2024 [Paper]
- Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method
SDU, NAACL, 2024 [Paper] [Code]
- LM vs LM: Detecting Factual Errors via Cross Examination
TAU, EMNLP, 2023 [Paper]
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Cambridge, EMNLP, 2023 [Paper] [Code]
- Enhancing Trust in Large Language Models with Uncertainty-Aware Fine-Tuning
Intel, arXiv, 2024 [Paper]
- Semantic Density: Uncertainty Quantification for Large Language Models through Confidence Measurement in Semantic Space
Cognizant, NeurIPS, 2024 [Paper] [Code]
- Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models
UIUC, TMLR, 2024 [Paper] [Code]
- Uncertainty Estimation of Large Language Models in Medical Question Answering
HKU, arXiv, 2024 [Paper]
- To Believe or Not to Believe Your LLM
Google, arXiv, 2024 [Paper]
- Shifting Attention to Relevance: Towards the Uncertainty Estimation of Large Language Models
DU, ACL, 2024 [Paper] [Code]
- Active Prompting with Chain-of-Thought for Large Language Models
HUST, arXiv, 2023 [Paper] [Code]
- Uncertainty Estimation in Autoregressive Structured Prediction
Yandex, ICLR, 2021 [Paper]
- On Hallucination and Predictive Uncertainty in Conditional Language Generation
UCSB, EACL, 2021 [Paper]
- LLM Critics Help Catch LLM Bugs
OpenAI, arXiv, 2024 [Paper]
- Reasons to Reject? Aligning Language Models with Judgments
Tencent, ACL, 2024 [Paper] [Code]
- Self-critiquing models for assisting human evaluators
OpenAI, arXiv, 2022 [Paper]
- Are self-explanations from Large Language Models faithful?
Mila, ACL, 2024 [Paper]
- On Measuring Faithfulness or Self-consistency of Natural Language Explanations
UAH, ACL, 2024 [Paper] [Code]
- Semantic Consistency for Assuring Reliability of Large Language Models
DTU, arXiv, 2023 [Paper]
Enhancing reasoning ability by improving LLM performance on QA tasks through Self-Feedback strategies.
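To make the recurring Self-Consistency idea in this list concrete, the sketch below shows only the aggregation step: extract a final answer from each sampled reasoning path and keep the majority answer. The answer-extraction heuristic and the example paths are hypothetical simplifications; how paths are sampled and answers parsed varies across the papers below.

```python
# Minimal sketch of the Self-Consistency aggregation step: sample several reasoning
# paths for the same question and keep the most frequent final answer.
# The example paths are hypothetical; any LLM sampler could produce them.
import re
from collections import Counter


def extract_final_answer(reasoning: str) -> str:
    """Pull the last number out of a chain-of-thought string (toy heuristic)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", reasoning)
    return numbers[-1] if numbers else ""


def self_consistency_vote(reasoning_paths: list[str]) -> str:
    """Majority-vote over the final answers of independently sampled reasoning paths."""
    answers = [a for a in (extract_final_answer(p) for p in reasoning_paths) if a]
    return Counter(answers).most_common(1)[0][0]


# Hypothetical sampled chains of thought for "What is 17 * 3?"
paths = [
    "17 * 3 = 17 + 17 + 17 = 51. The answer is 51.",
    "3 * 17: 3 * 10 = 30, 3 * 7 = 21, 30 + 21 = 51.",
    "17 * 3 is 17 * 2 + 17 = 34 + 17 = 41.",  # an (intentionally) inconsistent path
]
print(self_consistency_vote(paths))  # prints "51": the inconsistent path is outvoted
```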
- SRA-MCTS: Self-driven Reasoning Augmentation with Monte Carlo Tree Search for Code Generation
BIT, arXiv, 2024 [Paper] [Code]
- Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions
Alibaba, arXiv, 2024 [Paper] [Code]
- Dynamic Self-Consistency: Leveraging Reasoning Paths for Efficient LLM Sampling
Virginia, arXiv, 2024 [Paper]
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains
CUHK, arXiv, 2024 [Paper]
- DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines
Stanford, ICLR, 2024 [Paper] [Code]
- Graph of Thoughts: Solving Elaborate Problems with Large Language Models
ETH, AAAI, 2024 [Paper] [Code]
- Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation
BIT, ACL, 2024 [Paper] [Code]
- Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
PKU, arXiv, 2024 [Paper] [Code]
- RATT: A Thought Structure for Coherent and Correct LLM Reasoning
PSU, arXiv, 2024 [Paper] [Code]
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Stanford, arXiv, 2024 [Paper] [Code]
- Chain-of-Thought Reasoning Without Prompting
Google, arXiv, 2024 [Paper]
- Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives
ZJU, ACL, 2024 [Paper]
- Training Language Models to Self-Correct via Reinforcement Learning
Google, arXiv, 2024 [Paper]
- LLMs cannot find reasoning errors, but can correct them given the error location
Cambridge, ACL, 2024 [Paper]
- Forward-Backward Reasoning in Large Language Models for Mathematical Verification
SUSTech, ACL, 2024 [Paper] [Code]
- LeanReasoner: Boosting Complex Logical Reasoning with Lean
JHU, NAACL, 2024 [Paper] [Code]
- Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios
Kuaishou, ACL, 2024 [Paper]
- Soft Self-Consistency Improves Language Model Agents
UNC-CH, ACL, 2024 [Paper] [Code]
- Self-Evaluation Guided Beam Search for Reasoning
NUS, NeurIPS, 2023 [Paper] [Code]
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Princeton, NeurIPS, 2023 [Paper] [Code]
- Self-Consistency Improves Chain of Thought Reasoning in Language Models
Google, ICLR, 2023 [Paper]
- DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines
Stanford, arXiv, 2023 [Paper] [Code]
- Universal Self-Consistency for Large Language Model Generation
Google, arXiv, 2023 [Paper]
- Enhancing Large Language Models in Coding Through Multi-Perspective Self-Consistency
PKU, ACL, 2023 [Paper] [Code]
- Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Google, arXiv, 2023 [Paper]
- Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP
Stanford, arXiv, 2023 [Paper] [Code]
- Making Language Models Better Reasoners with Step-Aware Verifier
PKU, ACL, 2023 [Paper] [Code]
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Google, NeurIPS, 2022 [Paper]
- Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Washington, EMNLP, 2022 [Paper] [Code]
- Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision
Fudan, arXiv, 2024 [Paper] [Code]
- SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales
Purdue, EMNLP, 2024 [Paper] [Code]
- Small Language Models Need Strong Verifiers to Self-Correct Reasoning
UMich, ACL, 2024 [Paper] [Code]
- Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
TUDa, arXiv, 2024 [Paper] [Code]
- Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B
Fudan, arXiv, 2024 [Paper] [Code]
- Teaching Language Models to Self-Improve by Learning from Language Feedback
NEU, ACL, 2024 [Paper]
- Large Language Models Can Self-Improve At Web Agent Tasks
UPenn, arXiv, 2024 [Paper]
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
Tencent, arXiv, 2024 [Paper]
- Can LLMs Learn from Previous Mistakes? Investigating LLMs' Errors to Boost for Reasoning
UCSD, ACL, 2024 [Paper] [Code]
- Fine-Grained Self-Endorsement Improves Factuality and Reasoning
XMU, ACL, 2024 [Paper]
- Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning
KCL, ACL, 2024 [Paper] [Code]
- Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation
CUHK, ACL, 2024 [Paper] [Code]
- Self-Rewarding Language Models
Meta, arXiv, 2024 [Paper]
- Learning From Mistakes Makes LLM Better Reasoner
Microsoft, arXiv, 2024 [Paper] [Code]
- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
CMU, NeurIPS, 2023 [Paper] [Code]
- Large Language Models Can Self-Improve
Illinois, EMNLP, 2023 [Paper]
- Improving Logical Consistency in Pre-Trained Language Models using Natural Language Inference
Stanford, Stanford CS224N Custom Project, 2022 [Paper]
- Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference
Stanford, EMNLP, 2022 [Paper] [Code]
- The Consensus Game: Language Model Generation via Equilibrium Search
MIT, ICLR, 2024 [Paper]
- Improving Factuality and Reasoning in Language Models through Multiagent Debate
MIT, ICML, 2024 [Paper] [Code]
- Scaling Large-Language-Model-based Multi-Agent Collaboration
THU, arXiv, 2024 [Paper] [Code]
- AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning
ZJU, ACL, 2024 [Paper] [Code]
- ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs
UNC, ACL, 2024 [Paper] [Code]
- REFINER: Reasoning Feedback on Intermediate Representations
EPFL, EACL, 2024 [Paper] [Code]
- Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate
HIT, EMNLP, 2023 [Paper] [Code]
- Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs
SYSU, arXiv, 2023 [Paper]
Improving factual accuracy in open-ended generation and reducing hallucinations through Self-Feedback strategies.
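As a schematic example of the generate-critique-revise pattern that many entries below instantiate, here is a minimal sketch in the spirit of Self-Refine (cited further down). It assumes the OpenAI Python SDK and an OPENAI_API_KEY; the prompts and the stopping rule are simplified illustrations, not the pipeline of any cited paper.

```python
# Minimal sketch of a generate-critique-revise loop for factual revision.
# Assumes the OpenAI Python SDK; the prompts are simplified illustrations.
from openai import OpenAI

client = OpenAI()


def chat(prompt: str) -> str:
    """Single-turn helper around the chat completions endpoint."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content


def self_refine(question: str, rounds: int = 2) -> str:
    """Draft an answer, ask the model to critique it, then rewrite using the critique."""
    answer = chat(f"Answer the question concisely:\n{question}")
    for _ in range(rounds):
        critique = chat(
            "Point out any factual errors or unsupported claims in this answer, "
            f"or reply 'No issues found.'\nQuestion: {question}\nAnswer: {answer}"
        )
        if "no issues found" in critique.lower():
            break  # the model endorses its own answer; stop refining
        answer = chat(
            "Rewrite the answer so that it fixes the issues listed in the feedback.\n"
            f"Question: {question}\nAnswer: {answer}\nFeedback: {critique}"
        )
    return answer


print(self_refine("Who wrote the novel 'Frankenstein', and when was it first published?"))
```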
- Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
ETH, ICLR, 2024 [Paper] [Code]
- Mitigating Entity-Level Hallucination in Large Language Models
THU, arXiv, 2024 [Paper] [Code]
- Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning
HKUST, arXiv, 2024 [Paper] [Code]
- Fine-grained Hallucination Detection and Editing for Language Models
UoW, arXiv, 2024 [Paper] [Code]
- EVER: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
UNC, arXiv, 2023 [Paper]
- Chain-of-Verification Reduces Hallucination in Large Language Models
Meta, arXiv, 2023 [Paper]
- PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
UCI, arXiv, 2023 [Paper]
- RARR: Researching and Revising What Language Models Say, Using Language Models
CMU, ACL, 2023 [Paper] [Code]
- An Evolutionary Large Language Model for Hallucination Mitigation
Salah Boubnider University, arXiv, 2024 [Paper]
- From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging
SJTU, arXiv, 2024 [Paper] [Code]
- Teaching Large Language Models to Self-Debug
Google, ICLR, 2024 [Paper]
- LLMs can learn self-restraint through iterative self-reflection
ServiceNow, arXiv, 2024 [Paper]
- Reflexion: Language Agents with Verbal Reinforcement Learning
Northeastern, NeurIPS, 2023 [Paper] [Code]
- Generating Sequences by Learning to Self-Correct
AI2, ICLR, 2023 [Paper]
- MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models
UCSB, EMNLP, 2023 [Paper] [Code]
- Self-Refine: Iterative Refinement with Self-Feedback
CMU, NeurIPS, 2023 [Paper] [Code]
- PEER: A Collaborative Language Model
Meta, ICLR, 2023 [Paper]
- Re3: Generating Longer Stories With Recursive Reprompting and Revision
Berkeley, EMNLP, 2023 [Paper] [Code]
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning
BUAA, AAAI, 2024 [Paper] [Code]
- Look Within, Why LLMs Hallucinate: A Causal Perspective
NUDT, arXiv, 2024 [Paper]
- Retrieval Head Mechanistically Explains Long-Context Factuality
PKU, arXiv, 2024 [Paper] [Code]
- TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
ICT, ACL, 2024 [Paper] [Code]
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Harvard, NeurIPS, 2023 [Paper] [Code]
- Fine-tuning Language Models for Factuality
Stanford, arXiv, 2023 [Paper]
- Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability
THU, arXiv, 2024 [Paper]
- Diver: Large Language Model Decoding with Span-Level Mutual Information Verification
IA, arXiv, 2024 [Paper]
- SED: Self-Evaluation Decoding Enhances Large Language Models for Better Generation
FDU, arXiv, 2024 [Paper]
- Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
Edin, arXiv, 2024 [Paper] [Code]
- DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
MIT, ICLR, 2024 [Paper] [Code]
- Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
UoW, arXiv, 2023 [Paper]
- Contrastive Decoding: Open-ended Text Generation as Optimization
Stanford, ACL, 2023 [Paper] [Code]
In addition to tasks aimed at improving consistency (enhancing reasoning and alleviating hallucinations), there are other tasks that also utilize Self-Feedback strategies.
- Language Imbalance Driven Rewarding for Multilingual Self-improving
UCAS, arXiv, 2024 [Paper] [Code]
- Aligning Large Language Models via Self-Steering Optimization
ISCAS, arXiv, 2024 [Paper] [Code]
- Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge
Meta FAIR, arXiv, 2024 [Paper]
- Aligning Large Language Models from Self-Reference AI Feedback with one General Principle
FDU, arXiv, 2024 [Paper] [Code]
- Aligning Large Language Models with Self-generated Preference Data
KAIST, arXiv, 2024 [Paper]
- Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation
SJTU, PMLR, 2024 [Paper] [Code]
- Self-Improving Robust Preference Optimization
Cohere, arXiv, 2024 [Paper]
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
UCLA, PMLR, 2024 [Paper] [Code]
- Self-Play Preference Optimization for Language Model Alignment
UCLA, arXiv, 2024 [Paper] [Code]
- ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline
Zhipu, arXiv, 2024 [Paper] [Code]
- SALMON: Self-Alignment with Instructable Reward Models
IBM, ICLR, 2024 [Paper] [Code]
- Self-Specialization: Uncovering Latent Expertise within Large Language Models
GT, ACL, 2024 [Paper]
- BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset
PKU, NeurIPS, 2023 [Paper] [Code]
- Safe RLHF: Safe Reinforcement Learning from Human Feedback
PKU, arXiv, 2023 [Paper] [Code]
- Aligning Large Language Models through Synthetic Feedback
NAVER, arXiv, 2023 [Paper] [Code]
- OpenAssistant Conversations -- Democratizing Large Language Model Alignment
Unaffiliated, arXiv, 2023 [Paper] [Code]
- The Capacity for Moral Self-Correction in Large Language Models
Anthropic, arXiv, 2023 [Paper]
- Constitutional AI: Harmlessness from AI Feedback
Anthropic, arXiv, 2022 [Paper] [Code]
- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Anthropic, arXiv, 2022 [Paper] [Code]
- Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
RUC, ICLR, 2024 [Paper] [Code]
- On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
Google, ICLR, 2024 [Paper]
- Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
Idiap, arXiv, 2024 [Paper]
- Personalized Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation
NTU, EMNLP, 2023 [Paper] [Code]
- SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
KAIST, Blog, 2023 [Paper]
- Reinforced Self-Training (ReST) for Language Modeling
Google, arXiv, 2023 [Paper]
- Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
Washington, arXiv, 2023 [Paper]
- Self-Knowledge Distillation with Progressive Refinement of Targets
LG, ICCV, 2021 [Paper] [Code]
- Revisiting Knowledge Distillation via Label Smoothing Regularization
NUS, CVPR, 2020 [Paper]
- Self-Knowledge Distillation in Natural Language Processing
Handong, RANLP, 2019 [Paper]
- Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching
CUHK, arXiv, 2024 [Paper]
- Self-Evolving GPT: A Lifelong Autonomous Experiential Learner
HIT, ACL, 2024 [Paper] [Code]
- Self-Taught Evaluators
Meta, arXiv, 2024 [Paper]
- Self-Instruct: Aligning Language Models with Self-Generated Instructions
Washington, ACL, 2023 [Paper] [Code]
- Self-training Improves Pre-training for Natural Language Understanding
Facebook, arXiv, 2020 [Paper]
- Improving the Robustness of Large Language Models via Consistency Alignment
SDU, LREC-COLING, 2024 [Paper]
- Can Large Language Models Play Games? A Case Study of A Self-Play Approach
Northwestern, arXiv, 2024 [Paper]
- ULTRA: Unleash LLMs' Potential for Event Argument Extraction through Hierarchical Modeling and Pair-wise Refinement
UMich, ACL, 2024 [Paper]
- Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
ZJU, ACL, 2024 [Paper] [Code]
- TasTe: Teaching Large Language Models to Translate through Self-Reflection
HIT, ACL, 2024 [Paper] [Code]
- Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback
Edin, arXiv, 2023 [Paper] [Code]
- Improving Retrieval Augmented Language Model with Self-Reasoning
Baidu, arXiv, 2024 [Paper]
- Text Classification Using Label Names Only: A Language Model Self-Training Approach
Illinois, EMNLP, 2020 [Paper] [Code]
- Explorations of Self-Repair in Language Models
UTexas, PMLR, 2024 [Paper]
Some common evaluation benchmarks.
- Evaluating Consistencies in LLM responses through a Semantic Clustering of Question Answering
Dongguk, IJCAI, 2024 [Paper]
- Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?
PKU, arXiv, 2024 [Paper] [Code]
- Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models
RUG, EMNLP, 2023 [Paper] [Code]
- Predicting Question-Answering Performance of Large Language Models through Semantic Consistency
IBM, GEM, 2023 [Paper] [Code]
- BECEL: Benchmark for Consistency Evaluation of Language Models
Oxford, COLING, 2022 [Paper] [Code]
- Measuring and Improving Consistency in Pretrained Language Models
BIU, TACL, 2021 [Paper] [Code]
- Can I understand what I create? Self-Knowledge Evaluation of Large Language Models
THU, arXiv, 2024 [Paper]
- Can AI Assistants Know What They Don't Know?
Fudan, arXiv, 2024 [Paper] [Code]
- Do Large Language Models Know What They Don't Know?
Fudan, ACL, 2023 [Paper] [Code]
- UBENCH: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions
Nankai, arXiv, 2024 [Paper] [Code]
- Benchmarking LLMs via Uncertainty Quantification
Tencent, arXiv, 2024 [Paper] [Code]
Some theoretical research on Internal Consistency and Self-Feedback strategies.
- Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Reasoning
Tohoku, arXiv, 2024 [Paper]
- AI models collapse when trained on recursively generated data
Oxford, Nature, 2024 [Paper]
- A Theoretical Understanding of Self-Correction through In-context Alignment
MIT, ICML, 2024 [Paper]
- Large Language Models Cannot Self-Correct Reasoning Yet
Google, ICLR, 2024 [Paper]
- LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations
Technion, arXiv, 2024 [Paper] [Code]
- When Can Transformers Count to n?
NYU, arXiv, 2024 [Paper]
- Large Language Models as Reliable Knowledge Bases?
UoE, arXiv, 2024 [Paper]
- States Hidden in Hidden States: LLMs Emerge Discrete State Representations Implicitly
THU, arXiv, 2024 [Paper]
- Large Language Models have Intrinsic Self-Correction Ability
UB, arXiv, 2024 [Paper]
- What Did I Do Wrong? Quantifying LLMs' Sensitivity and Consistency to Prompt Engineering
NECLab, arXiv, 2024 [Paper] [Code]
- Large Language Models Must Be Taught to Know What They Don't Know
NYU, arXiv, 2024 [Paper] [Code]
- Are LLMs classical or nonmonotonic reasoners? Lessons from generics
UvA, ACL, 2024 [Paper] [Code]
- On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and Latent Concept
MSU, arXiv, 2024 [Paper]
- Calibrating Reasoning in Language Models with Internal Consistency
SJTU, arXiv, 2024 [Paper]
- Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?
TAU, arXiv, 2024 [Paper]
- Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
OSU, arXiv, 2024 [Paper] [Code]
- SELF-[IN]CORRECT: LLMs Struggle with Refining Self-Generated Responses
JHU, arXiv, 2024 [Paper]
- Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models
RUC, ACL, 2024 [Paper] [Code]
- Do Large Language Models Latently Perform Multi-Hop Reasoning?
TAU, arXiv, 2024 [Paper]
- Pride and Prejudice: LLM Amplifies Self-Bias in Self-Refinement
UCSB, ACL, 2024 [Paper] [Code]
- The Impact of Reasoning Step Length on Large Language Models
Rutgers, ACL, 2024 [Paper] [Code]
- Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
ASU, NeurIPS, 2023 [Paper]
- GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems
ASU, NeurIPS, 2023 [Paper]
- Lost in the Middle: How Language Models Use Long Contexts
Stanford, TACL, 2023 [Paper]
- How Language Model Hallucinations Can Snowball
NYU, arXiv, 2023 [Paper] [Code]
- On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
UCB, FITEE, 2022 [Paper]
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Washington, FAccT, 2021 [Paper]
- How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
CMU, TACL, 2021 [Paper] [Code]
- Language Models as Knowledge Bases?
Facebook, EMNLP, 2019 [Paper] [Code]
@article{liang2024internal,
title={Internal consistency and self-feedback in large language models: A survey},
author={Liang, Xun and Song, Shichao and Zheng, Zifan and Wang, Hanyu and Yu, Qingchen and Li, Xunkai and Li, Rong-Hua and Wang, Yi and Wang, Zhonghao and Xiong, Feiyu and Li, Zhiyu},
journal={arXiv preprint arXiv:2407.14507},
year={2024}
}