AIDA CoNLL-YAGO Wikidata

A program for translating the AIDA CoNLL-YAGO dataset to use Wikidata QIDs instead of Wikipedia titles for entity identifiers.

License

Usage

You can find the pregenerated dataset on Hugging Face (March 1, 2023).

If you want to regenerate the dataset with fresh Wikipedia/Wikidata mappings, you can build aida-conll-yago-wikidata from source by running the following command:

cargo build --release

aida-conll-yago-wikidata uses the mappings between Wikipedia titles and Wikidata QIDs generated by wiki2qid. Follow that project's instructions to generate the Apache Avro file containing the mappings first.

For convenience, the original AIDA CoNLL-YAGO dataset is provided at data/AIDA-YAGO2-dataset.tsv.

Once you have the necessary mappings, you can generate the dataset with the following command:

cargo run --release -- \
        --input-conll data/AIDA-YAGO2-dataset.tsv \
        --input-wiki2qid "${MAPPINGS_FILE}" \
        --output-dir "${OUTPUT_DIR}"

This will create three files, train.parquet, validation.parquet, and test.parquet, in the directory specified by ${OUTPUT_DIR}.

The outputs are written as zstd-compressed Apache Parquet files. The full schema is documented on Hugging Face.
