Distributional Semantics

What is distributional semantics?

Distributional Semantics aims to build a fully unsupervised framework for computational semantics. It addresses traditional problems of computational semantics such as lexical ambiguity and variability, word sense disambiguation, lexical substitutability and paraphrasing, frame induction and parsing, and textual entailment. Our methodology avoids rule-based systems and hand-labeled data for supervised learning. The goal is a semantic analyzer able to self-adapt to new domains and languages after unsupervised learning from large corpora of raw text. The output of distributional semantics is a contextual thesaurus, representing sense clusters and the properties characterizing each cluster. Finally, a major goal of the Distributional Semantics framework is to map the induced linguistic knowledge to existing knowledge bases, such as semantic web data and databases, allowing entity linking and disambiguation with respect to pre-conceptualized domain models and enabling a new range of applications.

What is the theory and technology behind distributional semantics?

Distributional semantics is based on well-established linguistic theories and on a radical machine learning approach. It has its roots in de Saussure’s structural linguistics and in the semiotic principles distinguishing expression from meaning and reference. Structural semantics claims that meaning can be fully defined by semantic oppositions and relations between words, in particular syntagmatic and paradigmatic relations. Paradigmatic relations hold in absentia and represent substitutability between words that preserves meaning, whereas syntagmatic relations are mostly syntactic relations that can be identified by a syntactic parser. The distributional hypothesis, formulated by Zellig S. Harris, claims that paradigmatic relations can be detected by mining the distributional properties of syntagmatic relations, allowing us to acquire paradigmatic relations in a fully unsupervised way.
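The distributional hypothesis can be illustrated with a minimal sketch: represent each word by the counts of the words it co-occurs with (here a crude ±1 token window stands in for parser-derived syntagmatic relations), then compare those vectors with cosine similarity. Words that share contexts, and are thus substitutable (paradigmatically related), score high. The toy corpus and the windowing choice are illustrative assumptions, not part of the original framework.

```python
from collections import Counter
from math import sqrt

# Toy corpus (an assumption for illustration); in practice the contexts
# would come from a syntactic parser run over large amounts of raw text.
corpus = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
    "the dog chases the cat",
]

# Build co-occurrence vectors: each word is represented by the counts of
# words appearing in a +/-1 token window (a stand-in for syntagmatic relations).
vectors = {}
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        ctx = vectors.setdefault(word, Counter())
        for j in (i - 1, i + 1):
            if 0 <= j < len(tokens):
                ctx[tokens[j]] += 1

def cosine(a, b):
    # Words sharing many contexts (paradigmatic relatedness) score close to 1.
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(vectors["cat"], vectors["dog"]))     # high: nearly identical contexts
print(cosine(vectors["cat"], vectors["drinks"]))  # low: disjoint contexts
```

Note that "cat" and "dog" never co-occur in the same window position, yet come out as highly similar purely because they appear in the same contexts, which is exactly what the distributional hypothesis predicts.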
Unsupervised learning and complex systems are the other pillars of the Distributional Semantics framework. We target algorithms that can be parallelized and executed on large computer clusters for scalability. Accordingly, we build local models of semantic relations rather than global models, which allows the computation to be parallelized and executed using search engine technology such as inverted indices and MapReduce.
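As a rough sketch of why this computation parallelizes well, co-occurrence counting fits the MapReduce pattern directly: mappers emit ((word, context), 1) pairs independently per corpus shard, a shuffle groups pairs by key, and reducers sum the counts. The single-process simulation below (toy shards and all names are assumptions for illustration) mirrors that structure.

```python
from itertools import groupby

# Toy corpus split into "shards" that a real cluster would process on
# separate machines (an assumption for illustration).
shards = [
    ["the cat drinks milk", "the dog drinks water"],
    ["the cat chases the dog"],
]

def map_phase(sentence):
    # Emit one ((word, context_word), 1) pair per adjacent word pair.
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in (i - 1, i + 1):
            if 0 <= j < len(tokens):
                yield (word, tokens[j]), 1

def reduce_phase(key, values):
    # Sum all counts emitted for the same (word, context_word) key.
    return key, sum(values)

# Shuffle: sort and group mapper output by key, as a MapReduce runtime would.
pairs = sorted(kv for shard in shards for s in shard for kv in map_phase(s))
counts = dict(reduce_phase(k, [v for _, v in group])
              for k, group in groupby(pairs, key=lambda kv: kv[0]))

print(counts[("cat", "the")])
```

Because each mapper only needs its own shard and each reducer only needs one key's values, the same counts can be computed on any number of machines, which is the sense in which local models scale where a single global model would not.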
