KGE Editing

Language Model-based Knowledge Graph Embedding Editing

What is KGE Editing?

The purpose of the KGE Editing task is to modify erroneous knowledge in the KGE model and to inject new knowledge into it. In line with these objectives, we design two sub-tasks (EDIT & ADD). For the EDIT sub-task, we edit wrong fact knowledge that is stored in the KG embeddings. For the ADD sub-task, we add brand-new knowledge to the model without re-training the whole model.
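
To make the two sub-tasks concrete, here is a minimal sketch of what an editing request might look like; the triples and field names below are illustrative inventions, not records from the released datasets:

# Hypothetical examples of the two sub-task inputs (illustrative only).
# EDIT: the trained embeddings store a wrong tail for (head, relation);
# the editor should overwrite it with the correct one.
edit_request = {
    "head": "Q76",
    "relation": "place_of_birth",
    "wrong_tail": "Chicago",      # what the pre-trained model currently predicts
    "correct_tail": "Honolulu",   # what the edited model should predict
}

# ADD: a brand-new fact never seen in pre-training; after the update the
# model should answer it correctly without full re-training.
add_request = {
    "head": "Q42",
    "relation": "educated_at",
    "tail": "St_John's_College",
}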


KGEditor

Knowledge Graph Embeddings Editor (KGEditor) is a strong baseline for editing the knowledge in a KGE model. KGEditor utilizes additional parameters via hypernetworks: we construct an additional layer with the same architecture as the FFN and leverage its parameters for knowledge editing.
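
As a rough sketch of this design, assuming a BERT-style KGE encoder with hidden size d_model and FFN width d_ff; the class name, the low-rank factorization, and all variable names here are our own illustrative choices, not the released implementation:

import torch
import torch.nn as nn

class HyperFFNEditor(nn.Module):
    """Sketch: an extra FFN-shaped layer whose weights are emitted by a
    hypernetwork conditioned on the fact being edited. The base KGE model
    stays frozen; only the generated parameters carry the edit."""

    def __init__(self, d_model=768, d_ff=3072, d_cond=768, rank=8):
        super().__init__()
        self.d_model, self.d_ff, self.rank = d_model, d_ff, rank
        # Hypernetwork heads: emit low-rank factors of the two FFN matrices.
        self.u1 = nn.Linear(d_cond, d_ff * rank)
        self.v1 = nn.Linear(d_cond, rank * d_model)
        self.u2 = nn.Linear(d_cond, d_model * rank)
        self.v2 = nn.Linear(d_cond, rank * d_ff)

    def forward(self, hidden, cond):
        # hidden: (batch, d_model) activations from the frozen KGE model.
        # cond:   (d_cond,) encoding of the edit request (e.g. the triple).
        w1 = self.u1(cond).view(self.d_ff, self.rank) @ self.v1(cond).view(self.rank, self.d_model)
        w2 = self.u2(cond).view(self.d_model, self.rank) @ self.v2(cond).view(self.rank, self.d_ff)
        # FFN-shaped residual update on top of the frozen layer's output.
        return hidden + torch.relu(hidden @ w1.T) @ w2.T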

More About KGEditor

Dataset

We build four datasets for the EDIT and ADD sub-tasks based on two benchmark datasets, FB15k-237 and WN18RR. First, we train KG embedding models with language models. For the EDIT sub-task, we sample hard triples as candidates (a selection sketch follows the paper link below). For the ADD sub-task, we leverage the original training sets of FB15k-237 and WN18RR to build the pre-train dataset (original pre-train data) and use data from the standard inductive setting, since those triples are unseen during pre-training. For more details about the tasks and the datasets, please refer to our paper:

KGEditor paper (Siyuan Cheng et al. '23)
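
For illustration, one plausible way to pick such hard EDIT candidates is to keep triples the pre-trained model confidently gets wrong; in this sketch, model.score_tails and the tuple layout are hypothetical, not the released API:

import torch

def sample_hard_triples(model, triples, entity_ids):
    """Keep triples whose top-1 tail prediction is wrong, i.e. facts the
    pre-trained KGE model gets wrong and that therefore make good EDIT
    candidates. model.score_tails is a hypothetical API that scores every
    candidate tail entity."""
    hard = []
    for head, relation, gold_tail in triples:
        scores = model.score_tails(head, relation, entity_ids)  # (num_entities,)
        predicted = entity_ids[int(torch.argmax(scores))]
        if predicted != gold_tail:
            # Record the model's wrong belief alongside the correct tail.
            hard.append((head, relation, predicted, gold_tail))
    return hard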

Getting Started

We release all the datasets and the pre-trained KGE model for the community.

Take E-FB15k237 as an example: to evaluate your model, you should load the pre-trained KGE model that contains the erroneous knowledge. When testing on the edit test set, you also need to verify the stable dataset after the model parameters have been changed, i.e., check that knowledge outside the edit scope is retained.
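
A minimal sketch of that evaluation flow, assuming hypothetical load_pretrained_kge / load_triples / apply_edits helpers and a Succ@1-style metric; substitute the released checkpoint and file names:

def succ_at_1(model, dataset):
    """Fraction of (h, r, t) triples whose top-1 tail prediction equals t."""
    hits = sum(model.predict_tail(h, r) == t for h, r, t in dataset)
    return hits / len(dataset)

# Hypothetical loaders; substitute the released checkpoint and data files.
model = load_pretrained_kge("E-FB15k237")    # contains the erroneous facts
edit_set = load_triples("edit_test.json")    # facts that must be corrected
stable_set = load_triples("stable.json")     # facts that must be retained

model = apply_edits(model, edit_set)         # your editing method goes here

print("Succ@1 on edit set:", succ_at_1(model, edit_set))
print("Retention on stable set:", succ_at_1(model, stable_set))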

Have Questions?

If you have any questions about the KGE Editing task or the datasets, or if you want to submit your results to update the leaderboard, feel free to email us:



If you use or extend our work, please cite the paper as follows:


@article{DBLP:journals/corr/abs-2301-10405,
    author    = {Siyuan Cheng and Ningyu Zhang and Bozhong Tian and Zelin Dai and Feiyu Xiong and Wei Guo and Huajun Chen},
    title     = {Editing Language Model-based Knowledge Graph Embeddings},
    journal   = {CoRR},
    volume    = {abs/2301.10405},
    year      = {2023},
    eprinttype = {arXiv},
    eprint    = {2301.10405},
}
Leaderboard

Rank  Model               Venue     Date          Succ@1  Succ@3  ER_roc  RK@3   RK_roc
-     Finetune            -         -             0.472   0.746   0.998   0.543  0.977
1     KGEditor            arXiv'23  Jan 19, 2022  0.866   0.986   0.999   0.874  0.635
2     Knowledge Editor    EMNLP'21  Jan 19, 2022  0.702   0.969   0.999   0.912  0.685
3     MEND                ICLR'22   Jan 19, 2022  0.828   0.950   0.954   0.750  0.993
4     CALINET             EMNLP'22  Jan 19, 2022  0.328   0.348   0.937   0.353  0.997
5     K-Adapter           ACL'21    Jan 19, 2022  0.329   0.348   0.926   0.001  0.999
-     Zero-Shot Learning  -         -             0.000   0.000   -       1.000  0.000