Conventional word embeddings represent words with fixed vectors, which are usually trained based on co-occurrence patterns among words. In this paper, we propose a multiplex word embedding model, which can be easily extended according to various relations among words. As a result, each word has a center embedding to represent its overall semantics, and several relational embeddings to represent its relational dependencies. Compared to existing models, our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness.

This is the source code for the EMNLP-IJCNLP 2019 paper "Multiplex Word Embeddings for Selectional Preference Acquisition". Readers are welcome to star/fork ...
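Based only on the description above, a minimal sketch of such a multiplex embedding table might look like the following. The choice to build a word's relation-specific view by concatenating its center vector with a small per-relation vector, the dimensions, and the dot-product plausibility scorer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class MultiplexEmbedding:
    """Sketch of a multiplex word embedding table: one shared center vector
    per word plus one small vector per (relation, word) pair."""

    def __init__(self, vocab, relations, center_dim=300, rel_dim=20, seed=0):
        rng = np.random.default_rng(seed)
        self.word2idx = {w: i for i, w in enumerate(vocab)}
        # Center embeddings capture a word's overall semantics.
        self.center = rng.normal(scale=0.1, size=(len(vocab), center_dim))
        # Small relational embeddings capture relation-specific behaviour
        # without a full-size vector per relation (avoids sparseness).
        self.relational = {
            r: rng.normal(scale=0.1, size=(len(vocab), rel_dim)) for r in relations
        }

    def lookup(self, word, relation):
        """Relation-specific view of a word: center vector plus relational part."""
        i = self.word2idx[word]
        return np.concatenate([self.center[i], self.relational[relation][i]])


# Example query of the kind selectional preference acquisition needs:
# how plausible is each noun as the object of "eat"?
emb = MultiplexEmbedding(vocab=["eat", "pizza", "idea"], relations=["subj", "obj"])
score_pizza = emb.lookup("eat", "obj") @ emb.lookup("pizza", "obj")
score_idea = emb.lookup("eat", "obj") @ emb.lookup("idea", "obj")
```

In a trained model, the small relational parts are what would let "eat" prefer "pizza" over "idea" as an object even if both nouns sit close to "eat" in the shared center space; with the random initialization above the scores are of course meaningless.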
Hongming Zhang, Jiaxin Bai, Yan Song, Kun Xu, Changlong Yu, Yangqiu Song, Wilfred Ng, and Dong Yu. Multiplex Word Embeddings for Selectional Preference Acquisition. EMNLP-IJCNLP 2019, pages 5247-5256.
Related papers: Selectional Preferences for Semantic Role Classification · Syntagmatic Word Embeddings for Unsupervised Learning of Selectional Preferences · A Vector Model for Syntagmatic and ...