English Lexical Simplification based on Pretrained Language Representation Modeling
-
Abstract: Lexical simplification (LS) aims to replace complex words in a given sentence with simpler alternatives of equivalent meaning, so as to simplify the sentence. Existing unsupervised lexical simplification approaches rely only on the complex word itself, regardless of the given sentence, to generate candidate substitutions, which inevitably produces a large number of spurious candidates. We therefore present BERT-LS, a lexical simplification approach based on the pretrained representation model BERT, which exploits BERT both to generate substitute candidates and to rank them. In the substitute generation step, BERT-LS requires no linguistic database or parallel corpus, and fully considers both the given sentence and the complex word when generating candidate substitutions. In the substitute ranking step, BERT-LS employs five efficient features: in addition to the word frequency and word similarity commonly used in other LS methods, it uses three new features, namely BERT's prediction order, a BERT-based contextual probability, and the paraphrase database PPDB. Experimental results on three well-known benchmarks show that our approach obtains clear improvements over these baselines, outperforming the state of the art by 29.8 Accuracy points on average.
-
Key words:
- Lexical simplification
- Substitution generation
- Substitution ranking
- BERT
-
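The substitute ranking step combines five heterogeneous features. As a minimal sketch of the idea (not the paper's exact scoring formula), one standard way to combine such features is to rank the candidates under each feature separately and average the ranks; the feature names and scores below are hypothetical illustrations:

```python
def rank_candidates(candidates, feature_scores):
    """Rank candidates by averaging their per-feature ranks.

    feature_scores maps feature name -> {candidate: score},
    where a higher score is better under every feature.
    """
    avg_rank = {}
    for cand in candidates:
        ranks = []
        for scores in feature_scores.values():
            # Position 0 = best candidate under this feature.
            ordered = sorted(candidates, key=lambda c: -scores[c])
            ranks.append(ordered.index(cand))
        avg_rank[cand] = sum(ranks) / len(ranks)
    # Smaller average rank = better overall candidate.
    return sorted(candidates, key=lambda c: avg_rank[c])

# Hypothetical feature scores for substitutes of "composed".
cands = ["wrote", "made", "created"]
features = {
    "bert_rank":  {"wrote": 0.9, "made": 0.5, "created": 0.7},
    "frequency":  {"wrote": 0.6, "made": 0.9, "created": 0.4},
    "similarity": {"wrote": 0.8, "made": 0.3, "created": 0.7},
}
print(rank_candidates(cands, features))  # "wrote" ranks first
```

The best-ranked candidate replaces the complex word only if it differs from the original word; averaging ranks rather than raw scores avoids having to normalize features that live on different scales.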
Fig. 1 Comparison of the simplification candidates produced by three lexical simplification methods. Given the sentence "John composed these verses." and the complex words "composed" and "verses", the top three candidates for each complex word are generated by BERT-LS, PaetzoldNE[16], and Rec-LS[17]
Fig. 2 BERT-LS uses the BERT model to generate candidate words; the input is "the cat perched on the mat." [CLS] and [SEP] are two special symbols in BERT: the input sentence pair starts with [CLS], and [SEP] separates the two sentences
Table 1 Evaluation results of the candidate generation step (Precision / Recall / F1)

| Method | LexMTurk | BenchLS | NNSeval |
|---|---|---|---|
| Yamamoto | 0.056 / 0.079 / 0.065 | 0.032 / 0.087 / 0.047 | 0.026 / 0.061 / 0.037 |
| Biran | 0.153 / 0.098 / 0.119 | 0.130 / 0.144 / 0.136 | 0.084 / 0.079 / 0.081 |
| Devlin | 0.164 / 0.092 / 0.118 | 0.133 / 0.153 / 0.143 | 0.092 / 0.093 / 0.092 |
| Horn | 0.153 / 0.134 / 0.143 | 0.235 / 0.131 / 0.168 | 0.134 / 0.088 / 0.106 |
| Glavaš | 0.151 / 0.122 / 0.135 | 0.142 / 0.191 / 0.163 | 0.105 / 0.141 / 0.121 |
| PaetzoldCA | 0.177 / 0.140 / 0.156 | 0.180 / 0.252 / 0.210 | 0.118 / 0.161 / 0.136 |
| PaetzoldNE | 0.310 / 0.142 / 0.195 | 0.270 / 0.209 / 0.236 | 0.186 / 0.136 / 0.157 |
| Rec-LS | 0.151 / 0.154 / 0.152 | 0.129 / 0.246 / 0.170 | 0.103 / 0.155 / 0.124 |
| BERT-Single | 0.253 / 0.197 / 0.221 | 0.176 / 0.239 / 0.203 | 0.138 / 0.185 / 0.158 |
| BERT-LS | 0.306 / 0.238 / 0.268 | 0.244 / 0.331 / 0.281 | 0.194 / 0.260 / 0.222 |

Table 2 Evaluation results of the full simplification system
(Precision / Accuracy)

| Method | LexMTurk | BenchLS | NNSeval |
|---|---|---|---|
| Yamamoto | 0.066 / 0.066 | 0.044 / 0.041 | 0.444 / 0.025 |
| Biran | 0.714 / 0.034 | 0.124 / 0.123 | 0.121 / 0.121 |
| Devlin | 0.368 / 0.366 | 0.309 / 0.307 | 0.335 / 0.117 |
| PaetzoldCA | 0.578 / 0.396 | 0.423 / 0.423 | 0.297 / 0.297 |
| Horn | 0.761 / 0.663 | 0.546 / 0.341 | 0.364 / 0.172 |
| Glavaš | 0.710 / 0.682 | 0.480 / 0.252 | 0.456 / 0.197 |
| PaetzoldNE | 0.676 / 0.676 | 0.642 / 0.434 | 0.544 / 0.335 |
| Rec-LS | 0.784 / 0.256 | 0.734 / 0.335 | 0.665 / 0.218 |
| BERT-Single | 0.694 / 0.652 | 0.495 / 0.461 | 0.314 / 0.285 |
| BERT-LS | 0.864 / 0.792 | 0.697 / 0.616 | 0.526 / 0.436 |

Table 3 The influence of different features on candidate ranking
(Precision / Accuracy)

| Method | LexMTurk | BenchLS | NNSeval | Average |
|---|---|---|---|---|
| BERT-LS | 0.864 / 0.792 | 0.697 / 0.616 | 0.526 / 0.436 | 0.696 / 0.615 |
| BERT prediction rank only | 0.772 / 0.608 | 0.695 / 0.502 | 0.531 / 0.343 | 0.666 / 0.484 |
| w/o BERT prediction rank | 0.834 / 0.778 | 0.678 / 0.623 | 0.473 / 0.423 | 0.662 / 0.608 |
| w/o contextual probability | 0.838 / 0.760 | 0.706 / 0.614 | 0.515 / 0.406 | 0.686 / 0.593 |
| w/o similarity | 0.818 / 0.766 | 0.651 / 0.604 | 0.473 / 0.418 | 0.647 / 0.596 |
| w/o word frequency | 0.806 / 0.670 | 0.709 / 0.550 | 0.556 / 0.397 | 0.691 / 0.539 |
| w/o PPDB | 0.840 / 0.774 | 0.682 / 0.612 | 0.515 / 0.431 | 0.679 / 0.606 |

Table 4 Evaluation results using different BERT models
| Dataset | Model | Candidate generation (P / R / F1) | Full system (P / A) |
|---|---|---|---|
| LexMTurk | Base | 0.317 / 0.246 / 0.277 | 0.746 / 0.700 |
| LexMTurk | Large | 0.334 / 0.259 / 0.292 | 0.786 / 0.742 |
| LexMTurk | WWM | 0.306 / 0.238 / 0.268 | 0.864 / 0.792 |
| BenchLS | Base | 0.233 / 0.317 / 0.269 | 0.586 / 0.537 |
| BenchLS | Large | 0.252 / 0.342 / 0.290 | 0.636 / 0.589 |
| BenchLS | WWM | 0.244 / 0.331 / 0.281 | 0.697 / 0.616 |
| NNSeval | Base | 0.172 / 0.230 / 0.197 | 0.393 / 0.347 |
| NNSeval | Large | 0.185 / 0.247 / 0.211 | 0.402 / 0.360 |
| NNSeval | WWM | 0.194 / 0.260 / 0.222 | 0.526 / 0.436 |

Table 5 Example simplifications from the LexMTurk dataset. Complex words are marked in bold; the "labels" are manually annotated
Sentence 1: Much of the water carried by these streams is **diverted**.
- Labels: changed, turned, moved, rerouted, separated, split, altered, veered, …
- Generated: transferred, directed, discarded, converted, derived
- Final substitute: transferred

Sentence 2: Following the death of Schidlof from a heart attack in 1987, the Amadeus Quartet **disbanded**.
- Labels: dissolved, scattered, quit, separated, died, ended, stopped, split
- Generated: formed, retired, ceased, folded, reformed, resigned, collapsed, closed, terminated
- Final substitute: formed

Sentence 3: …, apart from the **efficacious** or prevenient grace of God, is utterly unable to…
- Labels: ever, present, showy, useful, effective, capable, strong, valuable, powerful, active, efficient, …
- Generated: irresistible, inspired, inspiring, extraordinary, energetic, inspirational
- Final substitute: irresistible

Sentence 4: …, **resembles** the mid-19th century Crystal Palace in London.
- Labels: mimics, represents, matches, shows, mirrors, echos, favors, match
- Generated: suggests, appears, follows, echoes, references, features, reflects, approaches
- Final substitute: suggests

Sentence 5: …who first **demonstrated** the practical application of electromagnetic waves,…
- Labels: showed, shown, performed, displayed
- Generated: suggested, realized, discovered, observed, proved, witnessed, sustained
- Final substitute: suggested

Sentence 6: …a well-defined low and strong wind gusts in squalls as the system **tracked** into…
- Labels: followed, traveled, looked, moved, entered, steered, went, directed, trailed, traced, …
- Generated: rolled, ran, continued, fed, raced, stalked, slid, approached, slowed
- Final substitute: rolled

Sentence 7: …is one in which part of the **kinetic** energy is changed to some other form of energy…
- Labels: active, moving, movement, motion, static, motive, innate, kinetic, real, strong, driving, …
- Generated: mechanical, total, dynamic, physical, the, momentum, velocity, ballistic
- Final substitute: mechanical

Sentence 8: None of your watched items were **edited** in the time period displayed.
- Labels: changed, refined, revise, finished, fixed, revised, revised, scanned, shortened
- Generated: altered, modified, organized, incorporated, appropriate
- Final substitute: altered

Table 6 Example simplifications from the LexMTurk dataset. Complex words in each sentence are marked in bold; generated candidates that also appear among the labels are marked in bold
Sentence 1: Triangles can also be **classified** according to their internal angles, measured here in degrees.
- Labels: grouped, categorized, arranged, labeled, divided, organized, separated, defined, described, …
- Generated: **divided**, **described**, **separated**, designated
- Final substitute: classified

Sentence 2: …; he **retained** the conductorship of the Vienna Philharmonic until 1927.
- Labels: kept, held, had, got
- Generated: maintained, **held**, **kept**, remained, continued, shared
- Final substitute: maintained

Sentence 3: …, and a Venetian in Paris in 1528 also **reported** that she was said to be beautiful
- Labels: said, told, stated, wrote, declared, indicated, noted, claimed, announced, mentioned
- Generated: **noted**, confirmed, described, **claimed**, recorded, **said**
- Final substitute: reported

Sentence 4: …, the king will **rarely** play an active role in the development of an offensive or ….
- Labels: infrequently, hardly, uncommonly, barely, seldom, unlikely, sometimes, not, seldomly, …
- Generated: never, usually, **seldom**, **not**, **barely**, **hardly**
- Final substitute: never
[1] Hirsh D P. What vocabulary size is needed to read unsimplified texts for pleasure? Reading in a Foreign Language, 1992, 8: 689−696
[2] Nation I S P. Learning Vocabulary in Another Language. Ernst Klett Sprachen, 2001
[3] De Belder J, Moens M. Text simplification for children. In: Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. Geneva, Switzerland, 2010: 19−26
[4] Paetzold G, Specia L. Unsupervised lexical simplification for non-native speakers. In: Proceedings of the National Conference on Artificial Intelligence. Phoenix, USA, 2016: 3761−3767
[5] Feng L. Automatic readability assessment for people with intellectual disabilities. ACM SIGACCESS Accessibility and Computing, 2009, 93: 84−91
[6] Saggion H. Automatic Text Simplification. Synthesis Lectures on Human Language Technologies. Morgan & Claypool, 2017, 10(1): 1−137
[7] Devlin S, Tait J. The use of a psycholinguistic database in the simplification of text for aphasic readers. Linguistic Databases, 1998
[8] Lesk M. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In: Proceedings of the International Conference on Systems Documentation. New York, USA, 1986: 24−26
[9] Sinha R. UNT-SimpRank: Systems for lexical simplification ranking. In: Proceedings of the Joint Conference on Lexical and Computational Semantics. Montreal, Canada, 2012: 493−496
[10] Leroy G, Endicott J E, Kauchak D, et al. User evaluation of the effects of a text simplification algorithm using term familiarity on perception, understanding, learning, and information retention. Journal of Medical Internet Research, 2013, 15(7): e144. doi: 10.2196/jmir.2569
[11] Biran O, Brody S, Elhadad N, et al. Putting it simply: A context-aware approach to lexical simplification. In: Proceedings of the Meeting of the Association for Computational Linguistics. Portland, USA, 2011: 496−501
[12] Yatskar M, Pang B, Danescu-Niculescu-Mizil C, et al. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. In: Proceedings of the North American Chapter of the Association for Computational Linguistics. Los Angeles, USA, 2010: 365−368
[13] Horn C, Manduca C, Kauchak D, et al. Learning a lexical simplifier using Wikipedia. In: Proceedings of the Meeting of the Association for Computational Linguistics. Baltimore, USA, 2014: 458−463
[14] Glavaš G, Štajner S. Simplifying lexical simplification: Do we need simplified corpora? In: Proceedings of the International Joint Conference on Natural Language Processing. Beijing, China, 2015: 63−68
[15] Paetzold G, Specia L. Unsupervised lexical simplification for non-native speakers. In: Proceedings of the National Conference on Artificial Intelligence. Phoenix, USA, 2016: 3761−3767
[16] Paetzold G, Specia L. Lexical simplification with neural ranking. In: Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics. Valencia, Spain, 2017: 34−40
[17] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding [Online], available: https://arxiv.org/abs/1810.04805?context=cs, May 24, 2019
[18] Gooding S, Kochmar E. Recursive context-aware lexical simplification. In: Proceedings of the International Joint Conference on Natural Language Processing. Hong Kong, China, 2019: 4852−4862
[19] Coster W, Kauchak D. Simple English Wikipedia: A new text simplification task. In: Proceedings of the Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011
[20] Xu W, Napoles C, Pavlick E, et al. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 2016, 4: 401−415
[21] Nisioi S, Štajner S, Ponzetto S P, et al. Exploring neural text simplification models. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2017
[22] Dong Y, Li Z, Rezagholizadeh M, et al. EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
[23] Xu W, Callison-Burch C, Napoles C. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 2015, 3(1): 283−297
[24] Shardlow M. A survey of automated text simplification. International Journal of Advanced Computer Science and Applications, 2014, 4(1)
[25] Paetzold G, Specia L. A survey on lexical simplification. Journal of Artificial Intelligence Research, 2017: 549−593
[26] Pavlick E, Callison-Burch C. Simple PPDB: A paraphrase database for simplification. In: Proceedings of the Meeting of the Association for Computational Linguistics. Berlin, Germany, 2016: 143−148
[27] Maddela M, Xu W. A word-complexity lexicon and a neural readability ranking model for lexical simplification. In: Proceedings of Empirical Methods in Natural Language Processing. Brussels, Belgium, 2018: 3749−3760
[28] Leon, Hervas, Gervas, et al. Empirical identification of text simplification strategies for reading-impaired people. In: Proceedings of the Conference of the Association for the Advance of Assistive Technologies in Europe. Maastricht, Netherlands, 2011: 567−574
[29] Lee J, Yoon W, Kim S, et al. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 2019, 36(4): 1234−1240
[30] Lample G, Conneau A. Cross-lingual language model pretraining [Online], available: https://arxiv.org/abs/1901.07291v1, Jan 22, 2019
[31] Mikolov T, Grave E, Bojanowski P, et al. Advances in pre-training distributed word representations [Online], available: https://arxiv.org/abs/1712.09405, Dec 26, 2017
[32] Brysbaert M, New B. Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, Instruments & Computers, 2009, 41(4): 977−990
[33] Ganitkevitch J, Van Durme B, Callison-Burch C. PPDB: The paraphrase database. In: Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics. Atlanta, USA, 2013
[34] Little D. Common European Framework of Reference for Languages. The TESOL Encyclopedia of English Language Teaching. American Cancer Society, 2018
[35] Gooding S, Kochmar E. Complex word identification as a sequence labelling task. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 1148−1153
[36] Kajiwara T, Matsumoto H, Yamamoto K, et al. Selecting proper lexical paraphrase for children. In: Proceedings of the International Conference on Computational Linguistics. Copenhagen, Denmark, 2013: 59−7
