Improving Word Embeddings through Iterative Refinement of Word- and Character-level Models

Phong Ha, Shanshan Zhang, Nemanja Djuric, Slobodan Vucetic


Abstract
Embedding of rare and out-of-vocabulary (OOV) words is an important open NLP problem. A popular solution is to train a character-level neural network to reproduce the embeddings of a standard word embedding model. The trained network can then assign a vector to any input string, including OOV and rare words. We enhance this approach and introduce an algorithm that iteratively refines and improves both the word- and character-level models. We demonstrate that our method outperforms existing algorithms on five word similarity data sets, and that it can be successfully applied to job title normalization, an important problem in the e-recruitment domain that suffers from the OOV problem.
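The mimic-style setup the abstract builds on can be sketched as follows: a character-level model (here a simple bag of character trigrams, hashed into a fixed embedding table) is trained by regression to reproduce pretrained word vectors, after which it can embed any string, including OOV words. This is a minimal illustrative sketch, not the authors' architecture: the toy vocabulary, the random "pretrained" target vectors, and all hyperparameters are stand-ins, and the paper's iterative refinement would alternate such updates between the word- and character-level models.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" word embeddings (random stand-ins; a real setup
# would load word2vec/GloVe vectors for a large vocabulary).
dim = 16
targets = {w: rng.normal(size=dim) for w in
           ["engineer", "engineering", "software", "developer"]}

def char_ngrams(word, n=3):
    """Character trigrams with boundary markers (FastText-style)."""
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

# Hash n-grams into a fixed table so unseen n-grams still map somewhere.
buckets = 4096
E = rng.normal(scale=0.1, size=(buckets, dim))  # trainable n-gram embeddings

def embed(word):
    """Character-level embedding: mean of the word's n-gram vectors."""
    idx = [zlib.crc32(g.encode()) % buckets for g in char_ngrams(word)]
    return E[idx].mean(axis=0), idx

# Train the character-level model to mimic the word-level embeddings
# by minimizing the squared error against each target vector.
lr = 0.5
for epoch in range(200):
    for w, t in targets.items():
        v, idx = embed(w)
        err = v - t
        E[idx] -= lr * err / len(idx)  # MSE gradient step per n-gram row

# The trained model now assigns a vector to any string, including OOV words.
oov_vec, _ = embed("engineers")  # not in the toy vocabulary
```

After training, in-vocabulary words are reconstructed closely, and morphologically related OOV strings such as "engineers" land near their in-vocabulary neighbors because they share most of their character n-grams.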
Anthology ID:
2020.coling-main.104
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1204–1213
URL:
https://aclanthology.org/2020.coling-main.104
DOI:
10.18653/v1/2020.coling-main.104
Cite (ACL):
Phong Ha, Shanshan Zhang, Nemanja Djuric, and Slobodan Vucetic. 2020. Improving Word Embeddings through Iterative Refinement of Word- and Character-level Models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1204–1213, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Improving Word Embeddings through Iterative Refinement of Word- and Character-level Models (Ha et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.104.pdf