# ChronoKGE

ChronoKGE is a knowledge graph embedding framework to ease time-focused research in representation learning.



## Requirements

- [Python 3.6+](https://www.python.org/downloads/)
- [PyTorch 1.8+](https://pytorch.org/get-started/locally/)
- [Optuna 2.9+](https://optuna.readthedocs.io/en/latest/installation.html)
- [Selenium 3.14+](https://selenium-python.readthedocs.io/installation.html)


## Installation

Install dependencies:

```bash
pip3 install -r requirements.txt
```

## Usage

### Run

`run` executes the experiment with the given parameters.

Via command line:

```bash
python3 -m chrono_kge [<model-args>] run [<run-args>]
```

For more details, see model arguments and run arguments.

Via YAML config:

```bash
python3 -m chrono_kge -mc <main_config> run -rc <run_config>
```

### Tune

`tune` performs parameter tuning with the given number of trials. A sketch of how such a trial-based search works is given after the commands below.

Via command line:

```bash
python3 -m chrono_kge [<model-args>] tune [<tune-args>]
```

For more details, see model arguments and tune arguments.

Via YAML config:

```bash
python3 -m chrono_kge -mc <main_config> tune -tc <tune_config>
```
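Tuning is backed by Optuna (a listed requirement). As a minimal sketch of a trial-based search, assuming hypothetical parameter names (`lr`, `dim`) and a placeholder `train_and_evaluate` helper rather than ChronoKGE's actual internals:

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    # Hypothetical search space; the real tunable parameters
    # come from ChronoKGE's tune config.
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    dim = trial.suggest_categorical("dim", [100, 200, 300])
    # Placeholder helper (assumption): trains the model with these
    # parameters and returns validation MRR.
    return train_and_evaluate(lr=lr, dim=dim)

study = optuna.create_study(direction="maximize")  # maximize MRR
study.optimize(objective, n_trials=100)            # number of trials
print(study.best_params)
```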

## Examples

### Example 1

Run with the default parameters (model=lowfer-tnt, kg=icews14, dim=300, lr=0.01) via YAML config:

```bash
python3 -m chrono_kge -mc "config/main/default.yaml" run -rc "config/run/default.yaml"
```

### Example 2

Run with the default parameters (model=lowfer-tnt, kg=icews14, dim=300, lr=0.01) via command line:

```bash
python3 -m chrono_kge -m "t-lowfer" -d "icews14" -am 1 -mm 1 run -lr 0.01 -ed 300
```

## Optional arguments

### 1. Model arguments

- `-m MODEL, --model MODEL`
  Learning model.
  Supported models: `lowfer`, `tlowfer`.
  Default: `tlowfer`.

- `-d DATASET, --dataset DATASET`
  Which dataset to use.
  Supported datasets: see knowledge graphs below.
  Default: `icews14`.

- `-e EPOCHS, --epochs EPOCHS`
  Number of total epochs.
  Default: `1000`.

- `-am AUGMENT_MODE, --aug_mode AUGMENT_MODE`
  Augmentation mode.
  Supported modes: see augmentation modes below.
  Default: `0`.

- `-rm REG_MODE, --reg_mode REG_MODE`
  Regularization mode.
  Supported modes: see regularization modes below.
  Default: `0`.

- `-mm MODULATION_MODE, --mod_mode MODULATION_MODE`
  Modulation mode.
  Supported modes: see modulation modes below.
  Default: `0`.

- `-em ENC_MODE, --enc_mode ENC_MODE`
  Encoding mode.
  Supported modes: see encoding modes below.
  Default: `0`.

- `-c, --cuda`
  Whether to use CUDA (GPU) or not (CPU).
  Default: CPU.

- `--save`
  Whether to save results.
  Default: `False`.


### Augmentation modes

- `0`: None
- `1`: Reverse triples
- `2`: Back translation (pre)
- `3`: Back translation (ad-hoc)
- `4`: Reverse triples + Back translation (pre)

Mode `2` augments using precomputed translations. Mode `3` performs ad-hoc back translation via the free Google Translate service (high confidence, max. 2 translations, language order zh-cn, es, de, en); supported KBs: ICEWS14, ICEWS05-15, ICEWS18. A sketch of the reverse-triples augmentation (mode `1`) follows below.
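Mode `1` is the simplest to illustrate: every quadruple (s, r, o, t) is mirrored as (o, r⁻¹, s, t), doubling the training set. A minimal sketch, assuming a plain tuple layout and the common convention of indexing inverse relations by offsetting with the relation count (not necessarily ChronoKGE's internal format):

```python
from typing import List, Tuple

Quad = Tuple[int, int, int, int]  # (subject, relation, object, time)

def add_reverse_quads(quads: List[Quad], n_relations: int) -> List[Quad]:
    # Inverse relation r^-1 is represented as r + n_relations,
    # a common convention (assumption, not ChronoKGE's actual code).
    reversed_quads = [(o, r + n_relations, s, t) for (s, r, o, t) in quads]
    return quads + reversed_quads

# Example: one quadruple, two relations in total.
print(add_reverse_quads([(0, 1, 2, 5)], n_relations=2))
# -> [(0, 1, 2, 5), (2, 3, 0, 5)]
```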


### Regularization modes

- `0`: None
- `1`: Omega (embedding regularization)
- `2`: Lambda (time regularization)
- `3`: Omega + Lambda

Tensor norms: Omega uses p=3, Lambda uses p=4. Illustrative forms of both penalties are sketched below.
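For orientation, such penalties commonly follow the formulation popularized by Lacroix et al. (2020): Omega penalizes the p-norm of the embeddings of each training fact, while Lambda enforces smoothness between adjacent timestamp embeddings. With the p-values above, an illustrative (assumed, not taken from this codebase) form is:

```latex
% Omega: embedding regularization over each fact (s, r, o, t), p = 3.
\Omega = \frac{\lambda}{|\mathcal{T}|} \sum_{(s,r,o,t) \in \mathcal{T}}
         \left( \lVert e_s \rVert_3^3 + \lVert w_{r,t} \rVert_3^3
              + \lVert e_o \rVert_3^3 \right)

% Lambda: temporal smoothness between adjacent timestamps, p = 4.
\Lambda = \frac{\lambda_t}{T - 1} \sum_{t=1}^{T-1}
          \lVert \tau_{t+1} - \tau_t \rVert_4^4
```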


### Modulation modes

1. Time Modulation (T), `mode=0`: extends LowFER with dynamic relations.

2. Time-no-Time Modulation (TNT), `mode=1`: extends LowFER with dynamic and static relations.

A sketch of both schemes is given below.
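Conceptually, both schemes build a time-aware relation embedding that is then scored by LowFER. A minimal sketch, assuming elementwise time modulation as in the paper's T/TNT formulation (names and shapes here are illustrative, not ChronoKGE's actual modules):

```python
from typing import Optional
import torch

def modulate_relation(r_dyn: torch.Tensor,
                      tau: torch.Tensor,
                      r_stat: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Time-aware relation embedding.

    T   (mode=0): r(t) = r_dyn * tau           (dynamic part only)
    TNT (mode=1): r(t) = r_dyn * tau + r_stat  (adds a static part)
    """
    r_t = r_dyn * tau              # elementwise time modulation
    if r_stat is not None:         # TNT adds a time-invariant component
        r_t = r_t + r_stat
    return r_t

d = 300                            # embedding dimension (cf. -ed 300)
r_dyn, r_stat, tau = (torch.randn(d) for _ in range(3))
r_T = modulate_relation(r_dyn, tau)            # T modulation
r_TNT = modulate_relation(r_dyn, tau, r_stat)  # TNT modulation
```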


## Benchmark results

Results for semantic matching models on ICEWS14 and ICEWS05-15.
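For reference, MRR and H@k are the standard link-prediction metrics: over the set of test queries Q with (typically filtered) ranks,

```latex
\mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i},
\qquad
\mathrm{H@}k = \frac{1}{|Q|} \sum_{i=1}^{|Q|}
               \mathbb{1}\left[\mathrm{rank}_i \le k\right]
```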

### ICEWS14

| Method | MRR | H@10 | H@3 | H@1 |
|--------|-----|------|-----|-----|
| DE-DistMult | 0.501 | 0.708 | 0.569 | 0.392 |
| DE-SimplE | 0.526 | 0.725 | 0.592 | 0.418 |
| TComplEx | 0.560 | 0.730 | 0.610 | 0.470 |
| TNTComplEx | 0.560 | 0.740 | 0.610 | 0.460 |
| TuckERT | 0.594 | 0.731 | 0.640 | 0.518 |
| TuckERTNT | 0.604 | 0.753 | 0.655 | 0.521 |
| LowFER-T | 0.584 | 0.734 | 0.630 | 0.505 |
| LowFER-TNT | 0.586 | 0.735 | 0.632 | 0.507 |
| LowFER-CFB | 0.623 | 0.757 | 0.671 | 0.549 |
| LowFER-FTP | 0.617 | 0.765 | 0.665 | 0.537 |

### ICEWS05-15

| Method | MRR | H@10 | H@3 | H@1 |
|--------|-----|------|-----|-----|
| DE-DistMult | 0.484 | 0.718 | 0.546 | 0.366 |
| DE-SimplE | 0.513 | 0.748 | 0.578 | 0.392 |
| TComplEx | 0.580 | 0.760 | 0.640 | 0.490 |
| TNTComplEx | 0.600 | 0.780 | 0.650 | 0.500 |
| TuckERT | 0.627 | 0.769 | 0.674 | 0.550 |
| TuckERTNT | 0.638 | 0.783 | 0.686 | 0.559 |
| LowFER-T | 0.559 | 0.714 | 0.605 | 0.476 |
| LowFER-TNT | 0.562 | 0.717 | 0.608 | 0.480 |
| LowFER-CFB | 0.638 | 0.791 | 0.690 | 0.555 |
| LowFER-FTP | 0.625 | 0.792 | 0.681 | 0.534 |

## Additional benchmarks

For an exhaustive summary of related benchmark results, visit TKGC Benchmark Results.

## Citation

If you find our work useful, please consider citing:

```bibtex
@inproceedings{dikeoulias-etal-2022-temporal,
    title = "Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations",
    author = {Dikeoulias, Ioannis  and
      Amin, Saadullah  and
      Neumann, G{\"u}nter},
    booktitle = "Proceedings of the 7th Workshop on Representation Learning for NLP",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.repl4nlp-1.12",
    doi = "10.18653/v1/2022.repl4nlp-1.12",
    pages = "111--120",
}
```
