Multi-Scale Visual Semantic Enhancement for Multi-Modal Named Entity Recognition Method

Wang Hai-Rong, Xu Xi, Wang Tong, Chen Fang-Ping

Citation: Wang Hai-Rong, Xu Xi, Wang Tong, Chen Fang-Ping. Multi-scale visual semantic enhancement for multi-modal named entity recognition method. Acta Automatica Sinica, xxxx, xx(x): x−xx. doi: 10.16383/j.aas.c230573

doi: 10.16383/j.aas.c230573

Funds: Supported by the Natural Science Foundation of Ningxia Province (2023AAC03316) and the Key Scientific Research Project of Higher Education Institutions of the Education Department of Ningxia Hui Autonomous Region (NYG2022051)
More Information
    Author Biographies:

    WANG Hai-Rong  Professor at North Minzu University. She received her Ph.D. degree from Northeastern University in 2015. Her research interest covers big data knowledge engineering and intelligent information processing. Corresponding author of this paper. E-mail: wanghr@nun.edu.cn

    XU Xi  Master student at the School of Computer Science and Engineering, North Minzu University. His research interest covers multimodal information extraction. E-mail: 20217403@stu.nmu.edu.cn

    WANG Tong  Master student at the School of Computer Science and Engineering, North Minzu University. Her research interest covers multimodal information extraction. E-mail: is_wangtong@163.com

    CHEN Fang-Ping  Master student at the School of Computer Science and Engineering, North Minzu University. Her research interest covers multimodal information extraction. E-mail: 17393213357@163.com

  • Abstract: To address two problems in existing multi-modal named entity recognition (MNER) research, namely missing image-feature semantics and weak semantic constraints on the multi-modal representation, a multi-scale visual semantic enhancement method for MNER is proposed. The method extracts multiple types of visual features to complete the image semantics; it mines the semantic interactions between the text features and these visual features to generate multi-scale visual semantic features, which are fused into a multi-scale visually enhanced multi-modal text representation; it decodes the multi-scale visual semantic features with a visual entity classifier to impose a semantic-consistency constraint on the visual features; and it invokes a multi-task label decoder to mine the fine-grained semantics of both the multi-modal text representation and the text features, resolving semantic bias through joint decoding and thereby further improving recognition accuracy. To verify its effectiveness, the method is evaluated on the Twitter-2015 and Twitter-2017 datasets and compared with ten methods, including HvpNet, MNER-QG, and RGCN; it improves the average F1 value by 0.85% and 1.45%, respectively.
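    To make the pipeline sketched in the abstract concrete, below is a minimal PyTorch sketch of the core idea: text features attend to three visual scales (region features, visual tag embeddings, image caption embeddings), the resulting multi-scale visual semantics are gated into the text representation, a visual entity classifier constrains the visual semantics, and two label heads are decoded jointly. All names, shapes, and layer choices here (e.g. MSVSEFusionSketch, the mean-pooled scale fusion, the averaged joint decoding) are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MSVSEFusionSketch(nn.Module):
    """Hypothetical sketch of multi-scale visual semantic fusion for MNER."""
    def __init__(self, d_model=768, n_heads=8, n_vis_classes=4, n_labels=9):
        super().__init__()
        # One cross-modal attention block per visual scale:
        # region features, visual tag embeddings, image caption embeddings.
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(3)
        )
        self.gate = nn.Linear(2 * d_model, d_model)                # text/visual fusion gate
        self.vis_entity_head = nn.Linear(d_model, n_vis_classes)  # visual entity classifier
        self.mm_tag_head = nn.Linear(d_model, n_labels)            # labels from multimodal repr.
        self.txt_tag_head = nn.Linear(d_model, n_labels)           # labels from text-only repr.

    def forward(self, text, region, tags, caption):
        # text: (B, T, d); region / tags / caption: (B, N, d), pre-extracted features
        scales = []
        for attn, vis in zip(self.cross_attn, (region, tags, caption)):
            out, _ = attn(query=text, key=vis, value=vis)  # text attends to one visual scale
            scales.append(out)
        vis_sem = torch.stack(scales).mean(dim=0)          # fuse multi-scale visual semantics
        g = torch.sigmoid(self.gate(torch.cat([text, vis_sem], dim=-1)))
        mm_text = text + g * vis_sem                       # visually enhanced text representation
        # Visual entity classifier: semantic-consistency constraint on visual semantics.
        vis_logits = self.vis_entity_head(vis_sem.mean(dim=1))
        # Joint decoding stand-in: average per-token label logits from both views.
        tag_logits = 0.5 * (self.mm_tag_head(mm_text) + self.txt_tag_head(text))
        return tag_logits, vis_logits

# Smoke test with random tensors.
model = MSVSEFusionSketch()
text = torch.randn(2, 32, 768)
vis = lambda: torch.randn(2, 16, 768)
tags, ventity = model(text, vis(), vis(), vis())
print(tags.shape, ventity.shape)  # torch.Size([2, 32, 9]) torch.Size([2, 4])
```

    In the paper, label decoding is handled by a dedicated multi-task label decoder (Fig. 3); the sketch reduces it to averaging two per-token logit distributions purely to show the joint-decoding idea.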
  • Fig. 1  The framework of the MSVSE model

    Fig. 2  The multi-modal feature fusion module

    Fig. 3  The multi-task label decoder

    Fig. 4  Performance comparison of visual entity classification on Twitter-2015

    Fig. 5  Performance comparison of visual entity classification on Twitter-2017

    Table 1  Performance comparison of methods on the Twitter datasets (%)

                   |        Twitter-2015               |        Twitter-2017
    Method         | PER    LOC    ORG    MISC   F1    | PER    LOC    ORG    MISC   F1
    ---------------+-----------------------------------+----------------------------------
    MSB            | 86.44  77.16  52.91  36.05  73.47 |   –      –      –      –    84.32
    MAF            | 84.67  81.18  63.35  41.82  73.42 | 91.51  85.80  85.10  68.79  86.25
    UMGF           | 84.26  83.17  62.45  42.42  74.85 | 91.92  85.22  83.13  69.83  85.51
    M3S            | 86.05  81.32  62.97  41.36  75.03 | 92.73  84.81  82.49  69.53  86.06
    UMT            | 85.24  81.58  63.03  39.45  73.41 | 91.56  84.73  82.24  70.10  85.31
    UAMNer         | 84.95  81.28  61.41  38.34  73.10 | 90.49  81.52  82.09  64.32  84.90
    VAE            | 85.82  81.56  63.20  43.67  75.07 | 91.96  81.89  84.13  74.07  86.37
    MNER-QG        | 85.68  81.42  63.62  41.53  74.94 | 93.17  86.02  84.64  71.83  87.25
    RGCN           | 86.36  82.08  60.78  41.56  75.00 | 92.86  86.10  84.05  72.38  87.11
    HvpNet         | 85.74  81.78  61.92  40.81  74.33 | 92.28  84.81  84.37  65.20  85.80
    MSVSE (ours)   | 86.72  81.63  64.08  38.91  75.11 | 93.24  85.96  85.22  70.00  87.34
    Δ over HvpNet  |  0.98  −0.15   2.16  −1.90   0.78 |  0.96   1.15   0.85   4.80   1.54

    Table 2  Structural ablation experiments for the model (%)

                                     |        Twitter-2015               |        Twitter-2017
    Method                           | PER    LOC    ORG    MISC   F1    | PER    LOC    ORG    MISC   F1
    ---------------------------------+-----------------------------------+----------------------------------
    MSVSE (ours)                     | 86.72  81.63  64.08  38.91  75.11 | 93.24  85.96  85.22  70.00  87.34
    w/o self-attention               | 86.49  81.20  63.21  41.56  74.83 | 93.05  86.52  84.37  67.34  86.79
    w/o similarity                   | 86.33  81.59  63.15  40.84  74.91 | 92.94  86.59  84.07  68.24  86.75
    w/o self-attention + similarity  | 86.80  81.38  63.32  39.62  74.67 | 92.97  85.87  84.41  67.96  86.67
    w/o multi-task label decoder     | 86.49  81.78  62.68  37.60  74.69 | 92.98  84.83  85.02  71.66  87.14
    w/o visual entity classifier     | 86.52  81.64  63.06  39.89  74.79 | 93.37  84.83  85.82  66.24  86.92

    Table 3  Visual feature ablation experiments in the joint encoder (%)

    Text  Visual  Image    |        Twitter-2015               |        Twitter-2017
          tags    captions | PER    LOC    ORG    MISC   F1    | PER    LOC    ORG    MISC   F1
    -----------------------+-----------------------------------+----------------------------------
     ✓     ✓               | 86.72  81.63  64.08  38.91  75.11 | 93.24  85.96  85.22  70.00  87.34
     ✓                     | 86.76  81.68  61.21  39.46  74.73 | 92.95  86.20  84.60  70.82  87.11
     ✓             ✓       | 86.87  81.74  63.72  37.80  74.87 | 93.03  85.71  84.43  71.71  87.16
     ✓     ✓       ✓       | 86.51  81.85  62.20  38.36  74.72 | 93.73  85.96  84.62  70.97  87.38

    Table 4  Visual feature ablation experiments in the multi-scale visual semantic prefix (%)

    Region    Visual  Image    |        Twitter-2015               |        Twitter-2017
    features  tags    captions | PER    LOC    ORG    MISC   F1    | PER    LOC    ORG    MISC   F1
    ---------------------------+-----------------------------------+----------------------------------
      ✓        ✓       ✓       | 86.72  81.63  64.08  38.91  75.11 | 93.24  85.96  85.22  70.00  87.34
      ✓                        | 86.25  81.93  63.99  38.23  74.76 | 93.16  84.83  85.47  69.10  87.13
      ✓        ✓               | 86.56  81.60  64.01  38.59  74.93 | 93.02  85.79  85.97  68.67  87.28
      ✓                ✓       | 86.87  81.79  63.36  38.68  74.98 | 92.94  86.52  85.14  68.94  87.14

    Table 5  Performance comparison of methods with a single-scale visual feature (%)

    Method        | Single-scale visual feature  | Twitter-2015 (F1) | Twitter-2017 (F1)
    --------------+------------------------------+-------------------+------------------
    MAF           | region visual features       | 73.42             | 86.25
    MSB           | image tags                   | 73.47             | 84.32
    ITA           | visual tags                  | 75.18             | 85.67
    ITA           | five visual captions         | 75.17             | 85.75
    ITA           | OCR text                     | 75.01             | 85.64
    MSVSE         | only region visual features  | 74.84             | 86.75
    MSVSE         | only visual tags             | 74.66             | 87.17
    MSVSE         | only visual captions         | 74.56             | 87.23
    MSVSE         | w/o visual prefix            | 74.89             | 87.08
    MSVSE (ours)  | –                            | 75.11             | 87.34

    Table 6  Performance comparison under different learning rates (%)

    Learning rate (×10⁻⁵) |  1     2     3     4     5     6
    ----------------------+--------------------------------------
    Twitter-2015          | 73.4  75.0  75.1  74.8  74.6  74.5
    Twitter-2017          | 87.1  86.8  87.3  87.5  87.2  87.3

    Table 7  Comparison of parameter counts and time efficiency

    Method        | Parameters (MB) | Training time (s) | Validation time (s)
    --------------+-----------------+-------------------+--------------------
    MSB           | 122.97          | 45.80             | 3.31
    UMGF          | 191.32          | 314.42            | 18.73
    MAF           | 136.09          | 103.39            | 6.37
    ITA           | 122.97          | 65.40             | 4.69
    UMT           | 148.10          | 156.73            | 8.59
    HvpNet        | 143.34          | 70.36             | 9.34
    MSVSE (ours)  | 119.27          | 75.81             | 7.03

    Table 8  Performance comparison of MNER methods based on pre-trained language models (%)

    Method             | Twitter-2015 (F1) | Twitter-2017 (F1)
    -------------------+-------------------+------------------
    Glove-BiLSTM-CRF   | 69.15             | 79.37
    BERT-CRF           | 71.81             | 83.44
    BERT-large-CRF     | 73.53             | 86.81
    XLMR-CRF           | 77.37             | 89.39
    Prompting ChatGPT  | 79.33             | 91.43
    MSVSE (ours)       | 75.11             | 87.34
  • [1] Moon S, Neves L, Carvalho V. Multimodal named entity recognition for short social media posts. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans, USA: ACL, 2018. 852−860
    [2] Lu D, Neves L, Carvalho V, Zhang N, Ji H. Visual attention model for name tagging in multimodal social media. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: ACL, 2018. 1990−1999
    [3] Asgari-Chenaghlu M, Farzinvash L, Balafar M A, Motamed C. CWI: A multimodal deep learning approach for named entity recognition from social media using character, word and image features. Neural Computing and Applications, 2022, 34(3): 1905−1922. doi: 10.1007/s00521-021-06488-4
    [4] Zhang Q, Fu J L, Liu X Y, Huang X J. Adaptive co-attention network for named entity recognition in tweets. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI Press, 2018. 5674−5681
    [5] Zheng C M, Wu Z W, Wang T, Cai Y, Li Q. Object-aware multimodal named entity recognition in social media posts with adversarial learning. IEEE Transactions on Multimedia, 2021, 23: 2520−2532
    [6] Wu Z W, Zheng C M, Cai Y, Chen J Y, Leung H F, Li Q. Multimodal representation with embedded visual guiding objects for named entity recognition in social media posts. In: Proceedings of the 28th ACM International Conference on Multimedia. Seattle, USA: ACM, 2020. 1038−1046
    [7] Yu J F, Jiang J, Yang L, Xia R. Improving multimodal named entity recognition via entity span detection with unified multimodal transformer. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Virtual Event: ACL, 2020. 3342−3352
    [8] Xu B, Huang S Z, Sha C F, Wang H Y. MAF: A general matching and alignment framework for multimodal named entity recognition. In: Proceedings of the 15th ACM International Conference on Web Search and Data Mining. Virtual Event: ACM, 2022. 1215−1223
    [9] Wang X W, Ye J B, Li Z X, Tian J F, Jiang Y, Yan M, et al. CAT-MNER: Multimodal named entity recognition with knowledge-refined cross-modal attention. In: Proceedings of the IEEE International Conference on Multimedia and Expo. Taipei, China: IEEE, 2022. 1−6
    [10] Zhang D, Wei S Z, Li S S, Wu H Q, Zhu Q M, Zhou G D. Multi-modal graph fusion for named entity recognition with targeted visual guidance. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. Virtual Event: AAAI Press, 2021. 14347−14355
    [11] Zhong Wei-Xing, Wang Hai-Rong, Wang Dong, Che Miao. Image-text joint named entity recognition method based on multi-modal semantic interaction. Guangxi Sciences, 2022, 29(4): 681−690 (in Chinese)
    [12] Yu T, Sun X, Yu H F, Li Y, Fu K. Hierarchical self-adaptation network for multimodal named entity recognition in social media. Neurocomputing, 2021, 439: 12−21
    [13] Wang X Y, Gui M, Jiang Y, Jia Z X, Bach N, Wang T, et al. ITA: Image-text alignments for multi-modal named entity recognition. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Seattle, USA: ACL, 2022. 3176−3189
    [14] Liu L P, Wang M L, Zhang M Z, Qing L B, He X H. UAMNer: Uncertainty-aware multimodal named entity recognition in social media posts. Applied Intelligence, 2022, 52(4): 4109−4125. doi: 10.1007/s10489-021-02546-5
    [15] Li Xiao-Teng, Zhang Pan-Pan, Gou Zhi-Nan, Gao Kai. Multi-modal named entity recognition method based on multi-task learning. Computer Engineering, 2023, 49(4): 114−119 (in Chinese)
    [16] Wang J, Yang Y, Liu K Y, Zhu Z P, Liu X R. M3S: Scene graph driven multi-granularity multi-task learning for multi-modal NER. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023, 31: 111−120
    [17] Chen X, Zhang N Y, Li L, Yao Y Z, Deng S M, Tan C Q, et al. Good visual guidance makes a better extractor: Hierarchical visual prefix for multimodal entity and relation extraction. In: Findings of the Association for Computational Linguistics: NAACL 2022. Seattle, USA: ACL, 2022. 1607−1618
    [18] Jia M H Z, Shen L, Shen X, Liao L J, Chen M, He X D, et al. MNER-QG: An end-to-end MRC framework for multimodal named entity recognition with query grounding. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence. Washington, USA: AAAI Press, 2023. 8032−8040
    [19] Sun L, Wang J Q, Su Y D, Weng F S, Sun Y X, Zheng Z W, et al. RIVA: A pre-trained tweet multimodal model based on text-image relation for multimodal NER. In: Proceedings of the 28th International Conference on Computational Linguistics. Barcelona, Spain (Online): 2020. 1852−1862
    [20] Sun L, Wang J Q, Zhang K, Su Y D, Weng F S. RpBERT: A text-image relation propagation-based BERT model for multimodal NER. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. Virtual Event: AAAI Press, 2021. 13860−13868
    [21] Xu B, Huang S, Du M, Wang H Y, Song H, Sha C F, et al. Different data, different modalities! Reinforced data splitting for effective multimodal information extraction from social media posts. In: Proceedings of the 29th International Conference on Computational Linguistics. Gyeongju, Republic of Korea: 2022. 1855−1864
    [22] Zhao F, Li C H, Wu Z, Xing S Y, Dai X Y. Learning from different text-image pairs: A relation-enhanced graph convolutional network for multimodal NER. In: Proceedings of the 30th ACM International Conference on Multimedia. Lisboa, Portugal: ACM, 2022. 3983−3992
    [23] Zhou B H, Zhang Y, Song K H, Guo W Y, Zhao G Q, Wang W B, et al. A span-based multimodal variational autoencoder for semi-supervised multimodal named entity recognition. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Abu Dhabi, United Arab Emirates: ACL, 2022. 6293−6302
    [24] He K M, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 2980−2988
    [25] Vinyals O, Toshev A, Bengio S, Erhan D. Show and tell: A neural image caption generator. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 3156−3164
    [26] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 770−778
    [27] Wang Hai-Rong, Xu Xi, Wang Tong, Jing Bo-Xiang. Research progress of multimodal named entity recognition. Journal of Zhengzhou University (Engineering Science), doi: 10.13705/j.issn.1671-6833 (in Chinese)
    [28] Li J Y, Li H, Pan Z, Sun D, Wang J H, Zhang W K, et al. Prompting ChatGPT in MNER: Enhanced multimodal named entity recognition with auxiliary refined knowledge. In: Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: ACL, 2023. 2787−2802
Publication History
  • Received Date: 2023-09-13
  • Accepted Date: 2024-02-22
  • Available Online: 2024-04-01
