Learning to Walk with Dual Agents for Knowledge Graph Reasoning

PyTorch implementation for the AAAI 2022 paper: Learning to Walk with Dual Agents for Knowledge Graph Reasoning

Framework Overview

This paper proposes a dual-agent reinforcement learning approach to the KG reasoning problem. Existing RL-based "learning to walk" methods rely solely on a single entity-level agent to explore large KGs, which works well for finding short reasoning paths but usually fails on longer ones. Hence, we propose to first divide large KGs into semantic clusters, then use a cluster-level agent (named Giant) to assist the entity-level agent (named Dwarf) so the two co-explore the KG. To this end, we design a Collaborative Policy Network and a Mutual Reinforcement Reward mechanism to train the two agents synchronously.
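For intuition only, the snippet below sketches how two such agents could condition on each other's state when scoring candidate actions. It is a minimal PyTorch illustration under assumed module names and dimensions, not the repository's Collaborative Policy Network or reward design.

```python
# Minimal sketch of a dual-agent policy setup (NOT the authors' implementation);
# module names, dimensions, and the state-sharing scheme are assumptions.
import torch
import torch.nn as nn


class AgentPolicy(nn.Module):
    """One agent: encodes its walk history and scores candidate actions."""

    def __init__(self, action_dim, hidden_dim):
        super().__init__()
        self.history = nn.GRU(action_dim, hidden_dim, batch_first=True)
        # Scoring conditions on this agent's state plus the peer agent's state,
        # which stands in for the "collaborative" coupling between the agents.
        self.score = nn.Linear(hidden_dim * 2 + action_dim, 1)

    def forward(self, history_emb, peer_state, candidate_embs):
        # history_emb: (batch, steps, action_dim); candidate_embs: (batch, n, action_dim)
        _, h = self.history(history_emb)           # (1, batch, hidden_dim)
        state = h.squeeze(0)                       # (batch, hidden_dim)
        ctx = torch.cat([state, peer_state], -1)   # fuse own and peer state
        ctx = ctx.unsqueeze(1).expand(-1, candidate_embs.size(1), -1)
        logits = self.score(torch.cat([ctx, candidate_embs], -1)).squeeze(-1)
        return state, torch.distributions.Categorical(logits=logits)


if __name__ == "__main__":
    batch, steps, n_actions, dim, hid = 2, 3, 5, 16, 32
    dwarf = AgentPolicy(dim, hid)   # entity-level agent
    giant = AgentPolicy(dim, hid)   # cluster-level agent
    d_hist, g_hist = torch.randn(batch, steps, dim), torch.randn(batch, steps, dim)
    d_cands, g_cands = torch.randn(batch, n_actions, dim), torch.randn(batch, n_actions, dim)
    zero = torch.zeros(batch, hid)
    g_state, g_dist = giant(g_hist, zero, g_cands)     # cluster-level step first ...
    d_state, d_dist = dwarf(d_hist, g_state, d_cands)  # ... then it guides the entity-level agent
    print(g_dist.sample(), d_dist.sample())
```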

Requirements

pip install -r requirements.txt

- python 3.8.5
- scipy 1.6.2
- tqdm 4.62.3
- torch 1.7.1
- numpy 1.19.2

Training & Testing

The hyperparameter configs for each experiment are included in the configs directory. To start a particular experiment, just run

sh run.sh configs/${dataset}.sh

where ${dataset}.sh is the name of the config file. For example,

sh run.sh configs/nell.sh

Output

The code outputs the evaluation of CURL on the provided datasets. The metrics used for evaluation are Hits@{1,3,5,10,20}, MRR, and MAP. The code also writes the answers CURL reached to a file.
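For reference, here is a small, generic sketch of how Hits@K and MRR are computed from the 1-based ranks of the gold answers; the ranks below are hypothetical and this is not the repository's evaluation code.

```python
# Generic ranking metrics (illustrative only; ranks are 1-based positions of gold answers).
def hits_at_k(ranks, k):
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 12, 2, 7]  # hypothetical ranks for five queries
print({f"Hits@{k}": hits_at_k(ranks, k) for k in (1, 3, 5, 10, 20)})
print("MRR:", mrr(ranks))
```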

Citation

If you find our paper useful or use our code, please cite the paper.

@article{zhang2021learning,
  title={Learning to Walk with Dual Agents for Knowledge Graph Reasoning},
  author={Zhang, Denghui and Yuan, Zixuan and Liu, Hao and Lin, Xiaodong and Xiong, Hui},
  journal={arXiv preprint arXiv:2112.12876},
  year={2021}
}

Acknowledgement