May 25, 2021

Researchers Evaluate Neural Language Models, Find XLNet Excellent

Lexical substitution is essentially the process that a thesaurus helps humans to perform: replacing words in a sentence without changing the meaning. Now, researchers from the Skolkovo Institute of Science and Technology in Moscow – Skoltech, for short – have completed a groundbreaking, large-scale study to examine how the most advanced neural language models perform when handling lexical substitution tasks.

While simple on paper – humans, of course, find it very easy in their native tongues – lexical substitution is quite complex for artificial intelligences. The substitution can take a variety of nuanced forms: you might replace a word with a hypernym (a word with a broader meaning, like substituting “seat” for “chair”) – or you might substitute a word with a synecdoche, like saying “wheels” to refer to a car. These nuances and abstractions further complicate the already challenging process of artificially deciphering and recreating human language.
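The kinds of substitution described above can be sketched with a toy lexicon. The entries and relation tags below are invented for illustration and are not taken from the study; real systems derive candidates from learned models rather than a hand-written table:

```python
# Toy lexicon mapping a word to candidate substitutes, each tagged with
# the semantic relation that licenses the swap (entries invented here).
LEXICON = {
    "chair": [("seat", "hypernym"), ("armchair", "hyponym")],
    "car":   [("vehicle", "hypernym"), ("wheels", "synecdoche")],
}

def substitutes(word):
    """Return candidate substitutes with the relation that licenses them."""
    return LEXICON.get(word, [])

def substitute(sentence, target, replacement):
    """Naively swap one token for another; real systems also check grammar."""
    return " ".join(replacement if tok == target else tok
                    for tok in sentence.split())

for cand, relation in substitutes("car"):
    print(relation, "->", substitute("my car is parked outside", "car", cand))
```

Even this naive version shows why the task is hard: "my wheels is parked outside" preserves the reference but breaks agreement, exactly the kind of nuance a model must handle.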

Alexander Panchenko – an assistant professor of natural language processing at Skoltech – and colleagues from a variety of research institutions (including HSE University, Lomonosov Moscow State University and Samsung Research Center Russia) set out to evaluate language models for these abilities. Substitution is important for more than just creativity: for instance, it helps models understand the contextual meaning of a word, which, in turn, helps to correct misspellings, or even work toward automatically simplifying writing. So Panchenko’s team measured the models on two fronts: first, their ability to substitute words; and second, their ability to process the contextual meanings of homonyms (e.g. “bat” as in baseball and “bat” as in the animal).
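The homonym problem can be sketched with a deliberately simple baseline: score each sense of a word by how many hand-picked cue words it shares with the sentence. The senses and cue sets below are invented for illustration; the neural models in the study learn such contextual signals from data instead of using fixed lists:

```python
# Toy word-sense disambiguation by cue-word overlap (cues invented here).
SENSES = {
    "bat": {
        "baseball bat": {"swing", "pitch", "hit", "game", "player"},
        "animal":       {"cave", "wings", "night", "fly", "mammal"},
    }
}

def disambiguate(word, sentence):
    """Pick the sense whose cue words overlap most with the sentence."""
    tokens = set(sentence.lower().split())
    scores = {sense: len(cues & tokens)
              for sense, cues in SENSES[word].items()}
    return max(scores, key=scores.get)

print(disambiguate("bat", "the bat flew out of the cave at night"))    # animal
print(disambiguate("bat", "the player took a swing with the bat"))     # baseball bat
```

A model that cannot make this distinction will also fail at substitution, since a good substitute for "bat" depends entirely on which sense is active.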

Evaluated models included a variety of language and masked language models (LMs and MLMs), including context2vec, ELMo, BERT, RoBERTa, and XLNet. Across the battery of tests, XLNet delivered state-of-the-art results on multiple datasets. The researchers also observed that large pre-trained language models outperformed previous substitution methods, and that incorporating information about the target word substantially improved the quality of the results.

Beyond a straightforward ranking of the models in question, the researchers see a variety of applications for their results.

“First of all, our results in lexical substitution may be useful for language learning (replacing words with their simpler equivalents),” Panchenko said. “Second, it may be useful for augmentation of textual data for training neural networks, as similar augmentation methods are common in computer vision but not so common in text analysis. Another obvious application is writing assistance – automatic suggestion of synonyms and text reformulation.”

To read the paper, click here.