Opinion mining with deep recurrent neural networks
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014
Abstract
Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, "depth in space" (a notion orthogonal to "depth in time") in RNNs has been investigated by stacking multiple layers of RNNs, and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction, formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, and without the standard layer-by-layer pre-training typically required of RNN architectures.
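To make the "depth in space" idea concrete, the following is a minimal sketch of a stacked RNN used as a token-level sequence labeler, written in PyTorch. It is not the paper's implementation: the layer count, hidden size, tag set, and bidirectional choice here are illustrative assumptions, and the key point is only that `num_layers > 1` stacks RNN layers so each layer reads the full hidden-state sequence of the layer below, while the output head predicts one tag per input token.

```python
import torch
import torch.nn as nn

class DeepRNNTagger(nn.Module):
    """Stacked ("deep in space") RNN for token-level sequence labeling."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=25,
                 num_layers=3, num_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # num_layers > 1 stacks recurrent layers on top of each other:
        # layer k consumes the hidden sequence produced by layer k-1.
        self.rnn = nn.RNN(embed_dim, hidden_dim, num_layers=num_layers,
                          batch_first=True, bidirectional=True)
        # One tag score per token (e.g. BIO-style opinion-expression labels).
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.rnn(self.embed(token_ids))    # (batch, seq_len, 2*hidden)
        return self.out(h)                        # (batch, seq_len, num_tags)

# Token-level labeling: the model emits one tag distribution per token.
model = DeepRNNTagger(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (2, 12))        # toy batch of 2 sentences
tag_scores = model(tokens)                        # shape: (2, 12, 5)
```

Under this framing, a "deep, narrow" model corresponds to increasing `num_layers` while shrinking `hidden_dim`, and a "shallow, wide" one to a single layer with a larger `hidden_dim`, holding the total parameter count roughly constant.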