
Document-level Relation Extraction With Entity and Context Information

Huang He-Yan, Yuan Chang-Sen, Feng Chong

Citation: Huang He-Yan, Yuan Chang-Sen, Feng Chong. Document-level relation extraction with entity and context information. Acta Automatica Sinica, xxxx, xx(x): x−xx. doi: 10.16383/j.aas.c220966


doi: 10.16383/j.aas.c220966


More Information
    Author Bio:

    Huang He-Yan  Professor at the School of Computer Science and Technology, Beijing Institute of Technology. Her research interests cover intelligent processing of language information, social networks, text big-data analysis, and cloud computing. E-mail: hhy63@bit.edu.cn

    Yuan Chang-Sen  Ph.D. candidate at the School of Computer Science and Technology, Beijing Institute of Technology. His research interests cover knowledge graphs and information extraction. Corresponding author of this paper. E-mail: yuanchangsen@bit.edu.cn

    Feng Chong  Professor at the School of Computer Science and Technology, Beijing Institute of Technology. His research interests cover machine translation, information extraction, and information retrieval. E-mail: fengchong@bit.edu.cn

Abstract: Document-level relation extraction identifies the relations between entity pairs in a document. Compared with traditional sentence-level relation extraction, the document-level task is closer to real-world applications, but it poses new challenges such as cross-sentence reasoning about entity pairs and awareness of context information. This paper proposes a document-level relation extraction method that fuses entity and context information (FECI). It contains two modules: an entity information extraction module and a context information extraction module. The entity information extraction module automatically extracts, from the two entities, features that can represent the relation of the entity pair. The context information extraction module extracts different contextual relation features from the document according to the mention positions of the entity pair. Experiments on three document-level relation extraction datasets show significant improvements.
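
This page does not include the implementation, so the sketch below only illustrates the two-module design the abstract describes, assuming a BERT-style encoder has already produced per-token hidden states. The class name FECISketch, the pooling choices (logsumexp over an entity's mention embeddings, attention-weighted pooling of the context tokens both entities attend to), and all dimensions are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a two-module entity + context design (illustrative only).
import torch
import torch.nn as nn


class FECISketch(nn.Module):
    def __init__(self, hidden: int = 768, num_relations: int = 97):
        super().__init__()
        self.head_proj = nn.Linear(hidden, hidden)
        self.tail_proj = nn.Linear(hidden, hidden)
        # Fused pair representation: head feature + tail feature + context feature.
        self.classifier = nn.Linear(3 * hidden, num_relations)

    @staticmethod
    def pool_entity(token_states, mention_index):
        # Entity information: logsumexp-pool the embeddings of all mentions
        # of one entity (a smooth alternative to max pooling).
        return torch.logsumexp(token_states[mention_index], dim=0)

    @staticmethod
    def pool_context(token_states, head_index, tail_index):
        # Context information: attend from each entity's mentions to every
        # token, then keep the context tokens that both entities look at.
        att_h = torch.softmax(token_states[head_index] @ token_states.T, dim=-1).mean(0)
        att_t = torch.softmax(token_states[tail_index] @ token_states.T, dim=-1).mean(0)
        weight = att_h * att_t
        weight = weight / (weight.sum() + 1e-8)
        return weight @ token_states

    def forward(self, token_states, head_index, tail_index):
        head = torch.tanh(self.head_proj(self.pool_entity(token_states, head_index)))
        tail = torch.tanh(self.tail_proj(self.pool_entity(token_states, tail_index)))
        context = self.pool_context(token_states, head_index, tail_index)
        return self.classifier(torch.cat([head, tail, context], dim=-1))


# Toy usage: 50 encoded tokens; the head entity is mentioned at tokens 3 and
# 20, the tail entity at token 35.
model = FECISketch()
states = torch.randn(50, 768)
logits = model(states, head_index=[3, 20], tail_index=[35])
print(logits.shape)  # torch.Size([97])
```

In this sketch the fused pair representation is a simple concatenation of head, tail, and shared-context features; the fusion actually used by FECI may differ.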
    Fig. 1  An example of document-level relation extraction from the DocRED dataset

    Fig. 2  Architecture of the proposed model, which contains two parts: the entity information extraction module and the context information extraction module

    Fig. 3  An example from the DocRED development set used for the case study

    Table 1  Statistics of the datasets in experiments

    Statistic                 DocRED   CDR    GDA
    Training set              3053     500    23353
    Development set           1000     500    5839
    Test set                  1000     500    1000
    Relation types            97       2      2
    Relations per document    19.5     7.6    5.4

    Table 2  Hyper-parameters of the model

    Hyper-parameter               DocRED              CDR                 GDA
    Batch size                    4                   4                   4
    Epochs                        30                  30                  10
    Learning rate (encoder)       $5\times 10^{-5}$   $5\times 10^{-5}$   $5\times 10^{-5}$
    Learning rate (classifier)    $1\times 10^{-4}$   $1\times 10^{-4}$   $1\times 10^{-4}$
    Group size                    64                  64                  64
    Dropout                       0.1                 0.1                 0.1
    Gradient clipping             1.0                 1.0                 1.0
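
As a usage illustration of Table 2, the snippet below wires those settings into one training step, assuming an AdamW-style optimizer with separate parameter groups for the encoder and the classifier. The `encoder`/`classifier` stand-ins and the optimizer choice are assumptions for the sketch, not the paper's released training configuration.

```python
# Hedged sketch: Table 2 settings applied to a toy encoder/classifier pair.
import torch
from torch import nn, optim

encoder = nn.Linear(768, 768)      # stand-in for the BERT-style encoder
classifier = nn.Linear(768, 97)    # stand-in for the relation classifier

optimizer = optim.AdamW([
    {"params": encoder.parameters(), "lr": 5e-5},     # learning rate (encoder)
    {"params": classifier.parameters(), "lr": 1e-4},  # learning rate (classifier)
])

# One DocRED-style step: batch size 4, dropout 0.1, gradient clipping at 1.0.
dropout = nn.Dropout(p=0.1)
loss = classifier(dropout(encoder(torch.randn(4, 768)))).sum()
loss.backward()
torch.nn.utils.clip_grad_norm_(
    list(encoder.parameters()) + list(classifier.parameters()), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```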

    Table 3  Main results on the development and test sets of DocRED

    Model                    Dev Ign F1 (%)   Dev F1 (%)   Test Ign F1 (%)   Test F1 (%)
    CNN                      41.58            43.45        40.33             42.26
    LSTM                     48.44            50.68        47.71             50.07
    Bi-LSTM                  48.87            50.94        48.78             51.06
    Context-Aware            48.94            51.09        48.40             50.70
    HIN-GloVe                51.06            52.95        51.15             53.30
    GAT-GloVe                45.17            51.44        47.36             49.51
    GCNN-GloVe               46.22            51.52        49.59             51.62
    EoG-GloVe                45.94            52.15        49.48             51.82
    AGGCN-GloVe              46.29            52.47        48.89             51.45
    LSR-GloVe                48.82            55.17        52.15             54.18
    BERT-RE_BASE             –                54.16        –                 53.20
    RoBERTa_BASE             53.85            56.05        53.52             55.77
    BERT-Two-Step_BASE       –                54.42        –                 53.92
    HIN-BERT_BASE            54.29            56.31        53.70             55.60
    CorefBERT_BASE           55.32            57.51        54.54             56.96
    LSR-BERT_BASE            52.43            59.00        56.97             59.05
    BERT-E_BASE              56.51            58.52        –                 –
    GAIN_BASE                59.14            61.22        59.00             61.24
    FECI_BASE (our model)    59.74            61.38        59.81             61.22

    Table 4  Test-set F1 (%) on the CDR and GDA datasets

    Model                    CDR     GDA
    BRAN                     62.1    –
    CNN                      62.3    –
    EoG                      63.6    81.5
    LSR-BERT                 64.8    82.2
    SciBERT_BASE             65.1    82.5
    SciBERT-E_BASE           65.9    83.3
    FECI_BASE (our model)    69.2    83.7

    Table 5  Ablation study of FECI_BASE on the development set

    Model          Ign F1 (%)   F1 (%)   Params (M)   Train time (s)
    FECI_BASE      59.74        61.38    133.4        2962.4
    w/o Entity     58.16        60.07    132.2        2831.7
    w/o Context    58.67        60.89    130.5        482.3

    Table 6  Results with noisy entities and noisy contexts for FECI_BASE on the development set

    Model                Ign F1 (%)   F1 (%)
    FECI_BASE            59.74        61.38
    Head entity          58.42        60.14
    Tail entity          57.97        60.08
    Entity pair          58.91        60.85
    Tradition            57.42        59.72
    Co-occurrence        58.27        61.01
    Non co-occurrence    56.72        58.86

    Table 7  Results with different contexts for FECI_BASE on the development set

    Model        Ign F1 (%)   F1 (%)
    FECI_BASE    59.74        61.38
    Random       58.47        60.61
    Mean         59.56        60.94
    Tradition    58.19        60.06

    Table 8  Efficiency of different methods on the development set

    Model            Params (M)   Train time (s)   Decode time (s)
    LSR-BERT_BASE    112.1        282.9            38.8
    GAIN_BASE        217.0        2271.6           817.2
    FECI_BASE        133.4        2962.4           829.0
Publication History
  • Received: 2022-12-12
  • Accepted: 2023-03-29
  • Available online: 2023-08-28
