A reverse dictionary maps a description to the word that the description specifies. A neural reverse dictionary (NRD) uses neural networks to learn a mapping from the word embeddings of an input definition to an embedding of the word defined by that definition. Such a function encodes phrasal semantics and bridges the gap between phrasal and lexical semantics. However, previous NRDs have limited accuracy owing to their insufficient capacity. To solve this problem, we used novel combinations of neural networks with sufficient capacity that have proven effective in neural machine translation and image processing. We found that adjusting the LSTM output with a multi-layer fully connected network with bypass structures (CFNN) was more effective for reverse dictionary tasks than using a more complicated LSTM. BiLSTM+CFNN was comparable to the commercial OneLook Reverse Dictionary on some metrics, and the noised biLSTM+CFNN, which we tuned with a noising data augmentation, outperformed OneLook Reverse Dictionary on almost all metrics. We also examined the reasons for the success of biLSTM+CFNN and found that the bypass structure of the CFNN and the balance between the capacities of the LSTM and the CFNN contribute to the improved performance of the NRD.
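To make the described architecture concrete, the following is a minimal PyTorch sketch of a biLSTM encoder whose output is adjusted by a fully connected network with bypass (residual) connections before being projected back into word-embedding space. All class names, layer sizes, depths, and the choice to apply the CFNN to the final hidden state rather than per time step are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn


class CFNN(nn.Module):
    """Multi-layer fully connected network with bypass (residual) connections.
    Depth and width here are assumed, not taken from the paper."""

    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_layers)]
        )

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)  # bypass connection around each fully connected block
        return x


class BiLSTMCFNN(nn.Module):
    """Maps the word embeddings of a definition to an embedding of the defined word."""

    def __init__(self, emb_dim=300, hidden_dim=150, cfnn_layers=3):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.cfnn = CFNN(2 * hidden_dim, cfnn_layers)   # adjusts the biLSTM output
        self.proj = nn.Linear(2 * hidden_dim, emb_dim)  # back to word-embedding space

    def forward(self, definition_embs):
        # definition_embs: (batch, seq_len, emb_dim) pre-trained word embeddings
        _, (h_n, _) = self.encoder(definition_embs)
        summary = torch.cat([h_n[0], h_n[1]], dim=-1)   # concat forward/backward final states
        return self.proj(self.cfnn(summary))


# Usage sketch: the predicted vectors would be trained against the target word's
# embedding with some distance-based loss (the exact objective is an assumption here).
model = BiLSTMCFNN()
defs = torch.randn(8, 12, 300)        # batch of 8 definitions, 12 tokens each
predicted_word_vecs = model(defs)     # shape: (8, 300)
```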