Abstract: Natural Language Inference (NLI) is the task of identifying the inferential relationship between a premise p and a hypothesis h. Both p and h are expressed in natural language, typically as a pair of sentences whose relationship is drawn from a limited set of inferential relations (e.g., entailment, contradiction, neutral); for example, the premise "A man is playing a guitar" entails the hypothesis "A man is playing an instrument". NLI requires semantic analysis and can therefore be seen as a test of a system's text understanding capabilities. Modern NLI models are based on deep neural networks and use either cross-sentential encoding or independent sentence embeddings. In this talk, I will present our work on sentence representation learning and its application to common NLI benchmarks. I will start by introducing a state-of-the-art supervised NLI model based on a hierarchical bi-LSTM architecture and then discuss our research on multilingual supervision for representation learning. The latter is motivated by the use of translations as semantic mirrors and the idea of applying highly multilingual data sets in neural machine translation to learn language-independent meaning representations.
Speaker: Jörg Tiedemann
Affiliation: Professor of Language Technology, University of Helsinki
Place of Seminar: Seminar Room T6, Konemiehentie 2, Aalto University