Statistical semantics

In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.

History

The term statistical semantics was first used by Warren Weaver in his well-known paper on machine translation.[1] He argued that word sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption that "a word is characterized by the company it keeps" was advocated by J. R. Firth.[2] This assumption is known in linguistics as the distributional hypothesis.[3] Emile Delavenay defined statistical semantics as the "statistical study of the meanings of words and their frequency and order of recurrence".[4] Furnas et al. (1983) is frequently cited as a foundational contribution to statistical semantics.[5] An early success in the field was latent semantic analysis.
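Weaver's proposal can be made concrete with a short sketch. The following Python example is illustrative only: the toy corpus, window size, and all identifiers are assumptions, not details from the article. It counts, for each target word, the words that occur within a fixed window around it, and then compares the resulting count vectors with cosine similarity, so that words appearing in similar contexts receive similar representations.

```python
from collections import Counter, defaultdict
from math import sqrt

# Hypothetical toy corpus; in practice the counts would come from a large corpus.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the dog chased the cat".split(),
]
WINDOW = 2  # context positions considered on each side of the target word

# For every target word, count how often each context word appears in its window.
cooccurrence = defaultdict(Counter)
for sentence in corpus:
    for i, target in enumerate(sentence):
        for j in range(max(0, i - WINDOW), min(len(sentence), i + WINDOW + 1)):
            if j != i:
                cooccurrence[target][sentence[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# Words that keep similar company get similar vectors: "cat" and "dog" share
# most of their contexts here, so they score higher than "cat" and "on".
print(cosine(cooccurrence["cat"], cooccurrence["dog"]))
print(cosine(cooccurrence["cat"], cooccurrence["on"]))
```

On realistic corpora the raw counts are usually reweighted (for example with pointwise mutual information) before similarities are computed, but the window-based counting above is the core of the distributional approach.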

Applications

Research in statistical semantics has resulted in a wide variety of algorithms that use the distributional hypothesis to discover many aspects of semantics by applying statistical techniques to large corpora.
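One such algorithm, latent semantic analysis (mentioned above as an early success), can be sketched in a few lines. The following Python example is illustrative only: the term-document counts, the chosen rank, and the use of NumPy are assumptions rather than details from the article. A term-by-document count matrix is factored with a truncated singular value decomposition, and terms that occur in similar documents end up close together in the resulting low-dimensional space.

```python
import numpy as np

# Hypothetical term-by-document count matrix (rows: terms, columns: documents);
# a real application would build this from a large corpus.
terms = ["cat", "dog", "pet", "stock", "market"]
X = np.array([
    [2, 1, 0, 0],   # cat
    [1, 2, 0, 0],   # dog
    [1, 1, 1, 0],   # pet
    [0, 0, 2, 1],   # stock
    [0, 0, 1, 2],   # market
], dtype=float)

# Truncated singular value decomposition: keep only the top-k latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vectors = U[:, :k] * s[:k]   # term coordinates in the latent "semantic" space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Terms that occur in similar documents land near each other in the reduced
# space, so "cat" scores much higher with "dog" than with "market".
print(terms[0], terms[1], round(cosine(term_vectors[0], term_vectors[1]), 3))
print(terms[0], terms[4], round(cosine(term_vectors[0], term_vectors[4]), 3))
```

In published formulations of latent semantic analysis the counts are commonly transformed (for example with a log-entropy weighting) before the decomposition, but the rank reduction shown here is the step that produces the latent dimensions.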

Statistical semantics focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of computational semantics, which is in turn a subfield of computational linguistics and natural language processing.

Many applications of statistical semantics can also be addressed by lexicon-based algorithms rather than the corpus-based algorithms of statistical semantics. One advantage of corpus-based algorithms is that they are typically less labour-intensive than lexicon-based algorithms. Another is that they are usually easier to adapt to new languages or to noisier text types, such as social media, than lexicon-based algorithms are.[21] However, the best performance on an application is often achieved by combining the two approaches.[22]

References

Sources

  • Delavenay, Emile (1960). An Introduction to Machine Translation. New York, NY: Thames and Hudson. OCLC 1001646.
  • Firth, John R. (1957). "A synopsis of linguistic theory 1930-1955". Studies in Linguistic Analysis. Oxford: Philological Society: 1–32.
    Reprinted in Palmer, F.R., ed. (1968). Selected Papers of J.R. Firth 1952-1959. London: Longman. OCLC 123573912.
  • Frank, Eibe; Paynter, Gordon W.; Witten, Ian H.; Gutwin, Carl; Nevill-Manning, Craig G. (1999). "Domain-specific keyphrase extraction". Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence. IJCAI-99. Vol. 2. California: Morgan Kaufmann. pp. 668–673. CiteSeerX 10.1.1.148.3598. ISBN 1-55860-613-0.
  • Furnas, George W.; Landauer, T. K.; Gomez, L. M.; Dumais, S. T. (1983). "Statistical semantics: Analysis of the potential performance of keyword information systems" (PDF). Bell System Technical Journal. 62 (6): 1753–1806. doi:10.1002/j.1538-7305.1983.tb03513.x. S2CID 22483184. Archived from the original (PDF) on 2016-03-04. Retrieved 2012-07-12.
  • Hearst, Marti A. (1992). "Automatic Acquisition of Hyponyms from Large Text Corpora" (PDF). Proceedings of the Fourteenth International Conference on Computational Linguistics. COLING '92. Nantes, France. pp. 539–545. CiteSeerX 10.1.1.36.701. doi:10.3115/992133.992154. Archived from the original (PDF) on 2012-05-22. Retrieved 2012-07-12.
  • Landauer, Thomas K.; Dumais, Susan T. (1997). "A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge". Psychological Review. 104 (2): 211–240. CiteSeerX 10.1.1.184.4759. doi:10.1037/0033-295x.104.2.211. S2CID 1144461.
  • Lund, Kevin; Burgess, Curt; Atchley, Ruth Ann (1995). "Semantic and associative priming in high-dimensional semantic space" (PDF). Proceedings of the 17th Annual Conference of the Cognitive Science Society. Cognitive Science Society. pp. 660–665.
  • McDonald, Scott; Ramscar, Michael (2001). "Testing the distributional hypothesis: The influence of context on judgements of semantic similarity". Proceedings of the 23rd Annual Conference of the Cognitive Science Society. pp. 611–616. CiteSeerX 10.1.1.104.7535.
  • Pantel, Patrick; Lin, Dekang (2002). "Discovering word senses from text". Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining. KDD '02. pp. 613–619. CiteSeerX 10.1.1.12.6771. doi:10.1145/775047.775138. ISBN 1-58113-567-X.
  • Sahlgren, Magnus (2008). "The Distributional Hypothesis" (PDF). Rivista di Linguistica. 20 (1): 33–53. Archived from the original (PDF) on 2012-03-15. Retrieved 2012-11-20.