factgenie


Annotate LLM outputs with a lightweight, self-hosted web application 🌈


📢 News

  • 25/10/2024 — We are preparing the first official release. Stay tuned!
  • 08/10/2024 — We added step-by-step walkthroughs on using factgenie for generating and annotating outputs for a dataset of basketball reports 🏀
  • 07/10/2024 — We removed the example datasets from the repository. Instead, you can find them in the External Resources section in the Manage data interface.
  • 24/09/2024 — We introduced a brand new factgenie logo!
  • 19/09/2024 — On the Analytics page, you can now see detailed statistics about annotations and compute inter-annotator agreement 📈
  • 16/09/2024 — You can now collect extra inputs from the annotators for each example using sliders and select boxes.
  • 16/09/2024 — We added an option to generate outputs for the inputs with LLMs directly within factgenie! 🦾

👉️ How can factgenie help you?

Outputs from large language models (LLMs) may contain errors: semantic, factual, and lexical.

With factgenie, you can have the error spans annotated:

  • From LLMs through an API.
  • From humans through a crowdsourcing service.

Factgenie can provide you:

  1. A user-friendly website for collecting annotations from human crowdworkers.
  2. API calls for collecting equivalent annotations from LLM-based evaluators.
  3. An interface for visualizing the data and inspecting the annotated outputs.
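To make the idea of span-based annotation concrete, here is a minimal sketch of how an annotated error span can be represented. The field names (`start`, `end`, `type`, `text`) are hypothetical and chosen for illustration only; factgenie's actual annotation format is documented in its wiki.

```python
# Hypothetical representation of one annotated error span in a model output.
# Field names are illustrative, not factgenie's actual schema.

def annotate_span(output: str, start: int, end: int, error_type: str) -> dict:
    """Record one annotated error span, keeping the annotated substring."""
    return {
        "start": start,
        "end": end,
        "type": error_type,          # e.g. "semantic", "factual", "lexical"
        "text": output[start:end],   # the substring the span covers
    }

output = "The Lakers won 102-99, scoring 55 points in the first quarter."
span = annotate_span(output, start=31, end=40, error_type="factual")
# span["text"] is "55 points"
```

Both LLM-based and human annotators ultimately produce the same kind of span records, which is what makes their annotations directly comparable in the visualization and analytics interfaces.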

What factgenie does not help with is collecting the input data (we assume you already have them), starting the crowdsourcing campaign (for that, you need a service such as Prolific.com), or running the LLM evaluators (for that, you need a local framework such as Ollama or a proprietary API).
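For a rough idea of what an LLM evaluator backend involves, here is a sketch of a request to a local Ollama server via its default REST endpoint (`/api/generate`). The model name and prompt are purely illustrative; factgenie manages such calls for you when you configure an LLM evaluation campaign.

```python
import json

# Sketch of a request payload for a local Ollama server
# (default endpoint: http://localhost:11434/api/generate).
# Model name and prompt are illustrative only.
payload = {
    "model": "llama3.1",
    "prompt": "Mark factual errors in the following text: ...",
    "stream": False,  # request a single response instead of a token stream
}
body = json.dumps(payload)

# To actually send it (requires a running Ollama instance):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# response = urllib.request.urlopen(req)
```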

🏃 Quickstart

Make sure you have Python 3 installed (the project is tested with Python 3.10).

After cloning the repository, the following commands install the package and start the web server:

pip install -e .[dev,deploy]
factgenie run --host=127.0.0.1 --port 5000

💡 Usage guide

See the following wiki pages that will guide you through various use-cases of factgenie:

Topic Description
🔧 Setup How to install factgenie.
🗂️ Data Management How to manage datasets and model outputs.
🤖 LLM Annotations How to annotate outputs using LLMs.
👥 Crowdsourcing Annotations How to annotate outputs using human crowdworkers.
✍️ Generating Outputs How to generate outputs using LLMs.
📊 Analyzing Annotations How to obtain statistics on collected annotations.
🌱 Contributing How to contribute to factgenie.

🔥 Tutorials

We also provide step-by-step walkthroughs showing how to employ factgenie on the dataset from the Shared Task in Evaluating Semantic Accuracy:

Tutorial Description
🏀 #1: Importing a custom dataset Loading the basketball statistics and model-generated basketball reports into the web interface.
💬 #2: Generating outputs Using Llama 3.1 with Ollama for generating basketball reports.
📊 #3: Customizing data visualization Manually creating a custom dataset class for better data visualization.
🤖 #4: Annotating outputs with an LLM Using GPT-4o for annotating errors in the basketball reports.
👨‍💼 #5: Annotating outputs with human annotators Using human annotators for annotating errors in the basketball reports.

💬 Cite us

Our paper was published at INLG 2024 System Demonstrations!

You can also find the paper on arXiv.

For citing us, please use the following BibTeX entry:

@inproceedings{kasner2024factgenie,
    title = "factgenie: A Framework for Span-based Evaluation of Generated Texts",
    author = "Kasner, Zden{\v{e}}k  and
      Platek, Ondrej  and
      Schmidtova, Patricia  and
      Balloccu, Simone  and
      Dusek, Ondrej",
    editor = "Mahamood, Saad  and
      Minh, Nguyen Le  and
      Ippolito, Daphne",
    booktitle = "Proceedings of the 17th International Natural Language Generation Conference: System Demonstrations",
    year = "2024",
    address = "Tokyo, Japan",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.inlg-demos.5",
    pages = "13--15",
}

Acknowledgements

This work was co-funded by the European Union (ERC, NG-NLG, 101039303).
