
# Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models

*Knights and knaves illustration, generated by DALL·E 3.*


Knights and knaves problems represent a classic genre of logical puzzles where characters either tell the truth or lie. The objective is to logically deduce each character's identity based on their statements. The challenge arises from the truth-telling or lying behavior, which influences the logical implications of each statement. Solving these puzzles requires not only direct deductions from individual statements, but also the ability to assess the truthfulness of statements by reasoning through various hypothetical scenarios. As such, knights and knaves puzzles serve as compelling examples of suppositional reasoning. In this paper, we introduce *TruthQuest*, a benchmark for suppositional reasoning based on the principles of knights and knaves puzzles. Our benchmark presents problems of varying complexity, considering both the number of characters and the types of logical statements involved. Evaluations on *TruthQuest* show that large language models like Llama 3 and Mixtral-8x7B exhibit significant difficulties solving these tasks. A detailed error analysis of the models' output reveals that lower-performing models exhibit a diverse range of reasoning errors, frequently failing to grasp the concept of truth and lies. In comparison, more proficient models primarily struggle with accurately inferring the logical implications of potentially false statements.

## Table of Contents

- [Setup](#setup)
- [Generate Puzzles](#generate-puzzles)
- [Run Models](#run-models)
- [Evaluate Performance](#evaluate-performance)
- [LLM-Based and Human Annotations](#llm-based-and-human-annotations)
- [Human-Annotated CoT Prompts](#human-annotated-cot-prompts)
- [License](#license)
- [Citation](#citation)

## Setup

All code was developed and tested on Ubuntu 22.04 with Python 3.11.

To run the current code, we recommend using Poetry:

```sh
poetry install                          # Install dependencies
poetry shell                            # Activate virtual environment
# Work for a while
deactivate
```

Please make sure to configure your HuggingFace credentials so that the respective models can be downloaded.
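
If you have not authenticated before, one option is to log in programmatically via `huggingface_hub`, as in the sketch below (illustrative only, not part of this repository); running `huggingface-cli login` or setting the `HF_TOKEN` environment variable works as well:

```python
# Illustrative sketch: authenticate with the Hugging Face Hub so that gated
# models (e.g., Llama 3) can be downloaded. The token value is a placeholder.
from huggingface_hub import login

login(token="hf_...")  # replace with your personal access token
```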

## Generate Puzzles

To generate puzzles, run the following command:

```sh
python gen_data.py --from-yaml
```

This will generate the corresponding data. Alternatively, you can fetch the data from here.
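
For orientation, the following minimal sketch illustrates the kind of problem the benchmark is built on. It is not the repository's generator, and the character names and statements are purely illustrative: it simply brute-forces all knight/knave assignments and keeps those consistent with every statement.

```python
# Minimal knights-and-knaves sketch: a knight's statement must be true,
# a knave's statement must be false. We enumerate all assignments and
# keep the consistent ones.
from itertools import product

def solve(statements):
    """statements: dict mapping a character to a function that, given a full
    assignment (dict name -> bool, True = knight), returns whether that
    character's claim holds under the assignment."""
    names = list(statements)
    solutions = []
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(statements[n](assignment) == assignment[n] for n in names):
            solutions.append(assignment)
    return solutions

# Example: A says "B is a knave." B says "A and I are of the same kind."
puzzle = {
    "A": lambda a: not a["B"],
    "B": lambda a: a["A"] == a["B"],
}
print(solve(puzzle))  # [{'A': True, 'B': False}] -> A is a knight, B is a knave
```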

## Run Models

To run a model, use the following command:

```sh
python run.py --model <hf-model-name>
```

In this project, we evaluated large language models such as Llama 3 and Mixtral-8x7B; please refer to the paper for the full list of models.

Note that in order to use a new model, you need to add a configuration file in this folder.
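
As a rough illustration of what `<hf-model-name>` refers to, the snippet below loads a model from the Hugging Face Hub with `transformers` and queries it on a toy puzzle. This is a hedged sketch, not the logic of `run.py`; the model name and generation settings are assumptions.

```python
# Illustrative sketch: load a causal LM from the Hugging Face Hub and
# generate an answer to a toy knights-and-knaves prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumption: any causal LM on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "A says: 'B is a knave.' B says: 'A and I are of the same kind.' Who is a knight and who is a knave?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```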

## Evaluate Performance

To evaluate the performance of the models, run the following command:

```sh
python evaluate_conclusion.py
```

For specific command-line arguments, please refer to the code.
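
Conceptually, the evaluation boils down to comparing each predicted knight/knave assignment against the gold solution. The sketch below is illustrative only and does not mirror the actual implementation in `evaluate_conclusion.py`:

```python
# Illustrative sketch: puzzle-level accuracy over predicted identity assignments.
def accuracy(predictions, gold):
    """predictions, gold: lists of dicts mapping character -> 'knight'/'knave'."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

gold = [{"A": "knight", "B": "knave"}]
predictions = [{"A": "knight", "B": "knave"}]
print(accuracy(predictions, gold))  # 1.0
```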

## LLM-Based and Human Annotations

We publish all LLM-based and human annotations in our HuggingFace data repository. The TruthQuest dataset can be found here.

## Human-Annotated CoT Prompts

We provide up to eight human-annotated chain-of-thought (CoT) examples for each dataset configuration. Please see this folder for further information.

## License

MIT license

This work is licensed under a CC BY-SA 4.0 License.

## Citation

If you find our work helpful, please cite our paper as follows:

```bibtex
@inproceedings{mondorf-plank-2024-liar,
    title = "Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models",
    author = "Mondorf, Philipp  and Plank, Barbara",
    editor = "Al-Onaizan, Yaser  and Bansal, Mohit  and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.404",
    pages = "7114--7137",
}
```