Differences between revisions 264 and 327 (spanning 63 versions)
Size: 5221 | Size: 2737
Comment: | Comment:
Deletions are marked like this. | Additions are marked like this. |
Line 7: | Line 7: |
||<style="border:0;padding-top:5px;padding-bottom:5px">'''23 September 2019'''|| ||<style="border:0;padding-left:30px;padding-bottom:0px">'''Igor Boguslavsky''' (Institute for Information Transmission Problems, Russian Academy of Sciences / Universidad Politécnica de Madrid)|| ||<style="border:0;padding-left:30px;padding-bottom:5px">'''Semantic analysis based on inference'''  {{attachment:seminarium-archiwum/icon-en.gif|Talk delivered in English.}}|| ||<style="border:0;padding-left:30px;padding-bottom:5px">I will present a semantic analyzer SemETAP, which is a module of a linguistic processor ETAP designed to perform analysis and generation of NL texts. We proceed from the assumption that the depth of understanding is determined by the number and quality of inferences we can draw from the text. Extensive use of background knowledge and inferences permits to extract implicit information.|| ||<style="border:0;padding-left:30px;padding-bottom:0px">Salient features of SemETAP include: || ||<style="border:0;padding-left:30px;padding-bottom:0px">— knowledge base contains both linguistic and background knowledge;|| ||<style="border:0;padding-left:30px;padding-bottom:0px">— inference types include strict entailments and plausible expectations; || ||<style="border:0;padding-left:30px;padding-bottom:0px">— words and concepts of the ontology may be supplied with explicit decompositions for inference purposes; || ||<style="border:0;padding-left:30px;padding-bottom:0px">— two levels of semantic structure are distinguished. Basic semantic structure (BSemS) interprets the text in terms of ontological elements. Enhanced semantic structure (EnSemS) extends BSemS by means of a series of inferences; || ||<style="border:0;padding-left:30px;padding-bottom:15px">— a new logical formalism Etalog is developed in which all inference rules are written.|| |
Line 18: | Line 8: |
||<style="border:0;padding-top:5px;padding-bottom:5px">'''7 October 2019''' (NOTE: the seminar will start at 13:00!)|| ||<style="border:0;padding-left:30px;padding-bottom:0px">'''Tomasz Stanisz''' (Institute of Nuclear Physics, Polish Academy of Sciences)|| ||<style="border:0;padding-left:30px;padding-bottom:5px">'''What can a complex network say about a text?'''  {{attachment:seminarium-archiwum/icon-pl.gif|Talk delivered in Polish.}}|| ||<style="border:0;padding-left:30px;padding-bottom:15px">Complex networks, which have found application in the quantitative description of many different phenomena, have proven to be useful in research on natural language. The network formalism allows to study language from various points of view - a complex network may represent, for example, distances between given words in a text, semantic similarities, or grammatical relationships. One of the types of linguistic networks are word-adjacency networks, which describe mutual co-occurrences of words in texts. Although simple in construction, word-adjacency networks have a number of properties allowing for their practical use. The structure of such networks, expressed by appropriately defined quantities, reflects selected characteristics of language; applying machine learning methods to collections of those quantities may be used, for example, for authorship attribution.|| |
Line 23: | Line 9: |
||<style="border:0;padding-top:5px;padding-bottom:5px">'''18 November 2019'''|| ||<style="border:0;padding-left:30px;padding-bottom:0px">'''Alexander Rosen''' (Charles University in Prague)|| ||<style="border:0;padding-left:30px;padding-bottom:5px">'''The title of the talk will be available shortly'''  {{attachment:seminarium-archiwum/icon-en.gif|Talk delivered in English.}}|| ||<style="border:0;padding-left:30px;padding-bottom:15px">The summary of the talk will be available shortly.|| |
{{{#!wiki comment |
Line 28: | Line 11: |
||<style="border:0;padding-top:5px;padding-bottom:5px">'''21 November 2019'''|| ||<style="border:0;padding-left:30px;padding-bottom:0px">'''Alexander Rosen''' (Charles University in Prague)|| ||<style="border:0;padding-left:30px;padding-bottom:5px">'''The title of the talk will be available shortly'''  {{attachment:seminarium-archiwum/icon-en.gif|Talk delivered in English.}}|| ||<style="border:0;padding-left:30px;padding-bottom:15px">The summary of the talk will be available shortly.|| |
||<style="border:0;padding-top:5px;padding-bottom:5px">'''2 April 2020'''|| ||<style="border:0;padding-left:30px;padding-bottom:0px">'''Stan Matwin''' (Dalhousie University)|| ||<style="border:0;padding-left:30px;padding-bottom:5px">'''Efficient training of word embeddings with a focus on negative examples'''  {{attachment:seminarium-archiwum/icon-pl.gif|Talk delivered in Polish.}} {{attachment:seminarium-archiwum/icon-en.gif|Slides in English.}}|| ||<style="border:0;padding-left:30px;padding-bottom:15px">This presentation is based on our [[https://pdfs.semanticscholar.org/1f50/db5786913b43f9668f997fc4c97d9cd18730.pdf|AAAI 2018]] and [[https://aaai.org/ojs/index.php/AAAI/article/view/4683|AAAI 2019]] papers on English word embeddings. In particular, we examine the notion of “negative examples”, the unobserved or insignificant word-context co-occurrences, in spectral methods. we provide a new formulation for the word embedding problem by proposing a new intuitive objective function that perfectly justifies the use of negative examples. With the goal of efficient learning of embeddings, we propose a kernel similarity measure for the latent space that can effectively calculate the similarities in high dimensions. Moreover, we propose an approximate alternative to our algorithm using a modified Vantage Point tree and reduce the computational complexity of the algorithm with respect to the number of words in the vocabulary. We have trained various word embedding algorithms on articles of Wikipedia with 2.3 billion tokens and show that our method outperforms the state-of-the-art in most word similarity tasks by a good margin. We will round up our discussion with some general thought s about the use of embeddings in modern NLP.|| }}} |
Natural Language Processing Seminar 2019–2020
The NLP Seminar is organised by the Linguistic Engineering Group at the Institute of Computer Science, Polish Academy of Sciences (ICS PAS). It takes place on (some) Mondays, normally at 10:15 am, in the seminar room of the ICS PAS (ul. Jana Kazimierza 5, Warszawa). All recorded talks are available on YouTube.
Please see also the talks given in 2000–2015 and 2015–2019.