Semi-supervised recursive autoencoders for predicting sentiment distributions
From Brede Wiki
|Conference paper|
|Semi-supervised recursive autoencoders for predicting sentiment distributions|
|Authors:||Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, Christopher D. Manning|
|Citation:||Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing : 151-161. 2011 July|
|Publisher:||Association for Computational Linguistics|
|Meeting:||2011 Conference on Empirical Methods in Natural Language Processing|
Each word is projected into a 100-dimensional subspace. From the vectors in this subspace, neighboring words are autoencoded. An entire sentence is represented hierarchically as a tree: a new latent variable represents either a pair of words, one word and one latent variable, or two latent variables. The weights are tied, i.e., the same encoder and decoder weights are used at every level of the tree. The pair to merge first is selected greedily as the one with the lowest reconstruction error.
Learning is by backpropagation through the tree structure.
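The greedy tree construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weights are randomly initialized rather than trained, the dimension of 100 follows the page, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100  # embedding dimension (the page states a 100-dimensional subspace)

# Tied encoder/decoder weights, shared across all levels of the tree.
# Hypothetical random initialization; training would update these by
# backpropagation through the tree structure.
W_enc = rng.normal(scale=0.1, size=(d, 2 * d))
b_enc = np.zeros(d)
W_dec = rng.normal(scale=0.1, size=(2 * d, d))
b_dec = np.zeros(2 * d)

def encode(c1, c2):
    """Compose two child vectors into one parent (latent) vector."""
    return np.tanh(W_enc @ np.concatenate([c1, c2]) + b_enc)

def reconstruction_error(c1, c2):
    """Squared error of reconstructing both children from the parent."""
    parent = encode(c1, c2)
    rec = np.tanh(W_dec @ parent + b_dec)
    return float(np.sum((rec - np.concatenate([c1, c2])) ** 2))

def greedy_tree(vectors):
    """Repeatedly merge the adjacent pair with the lowest reconstruction
    error until one sentence vector remains; returns (root, merge order)."""
    nodes = list(vectors)
    merges = []
    while len(nodes) > 1:
        errors = [reconstruction_error(nodes[i], nodes[i + 1])
                  for i in range(len(nodes) - 1)]
        i = int(np.argmin(errors))
        parent = encode(nodes[i], nodes[i + 1])
        nodes[i:i + 2] = [parent]   # replace the pair with its parent
        merges.append(i)
    return nodes[0], merges

# Toy sentence of five random word vectors.
sentence = [rng.normal(size=d) for _ in range(5)]
root, order = greedy_tree(sentence)
```

The root vector can then be fed to a softmax classifier for sentiment, which is where the supervised part of the semi-supervised objective enters.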
- Experience Project (Potts, 2010). The data set is available at http://www.socher.org/index.php/Main/Semi-SupervisedRecursiveAutoencodersForPredictingSentimentDistributions
- Movie reviews