Natural Language Processing Seminar 2024–2025
The NLP Seminar is organised by the Linguistic Engineering Group at the Institute of Computer Science, Polish Academy of Sciences (ICS PAS). It takes place on (some) Mondays, usually at 10:15 am, often online; please use the link next to the presentation title. All recorded talks are available on YouTube.
7 October 2024
Janusz S. Bień (University of Warsaw, professor emeritus)
Some glyphs from 16th-century fonts, described in the monumental work “Polonia Typographica Saeculi Sedecimi”, can be more or less easily identified with Unicode standard characters. Other glyphs have no Unicode codepoints but can be rendered with an appropriate OpenType/TrueType font using typographic features. For some of them, the encoding remains an open question. Several examples will be discussed.
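As a concrete illustration of the technique the abstract mentions (not material from the talk itself): the fontTools Python library can show whether a font maps a character directly in its cmap table or only exposes extra glyphs through OpenType substitution features. The font file name below is a hypothetical placeholder; U+A75B (LATIN SMALL LETTER R ROTUNDA) is just a sample codepoint relevant to early printed texts.

```python
# Minimal sketch: inspect a font's direct Unicode mappings and its
# OpenType GSUB features, which can reach glyphs that have no codepoints.
from fontTools.ttLib import TTFont

font = TTFont("SomeHistoricalFont.ttf")  # hypothetical font file

# Direct Unicode-to-glyph mappings from the font's cmap table.
cmap = font.getBestCmap()
print("U+A75B (r rotunda) mapped:", 0xA75B in cmap)

# Substitution features (e.g. 'hist', 'salt') that may expose
# variant glyphs lacking their own Unicode codepoints.
if "GSUB" in font and font["GSUB"].table.FeatureList is not None:
    tags = sorted({rec.FeatureTag
                   for rec in font["GSUB"].table.FeatureList.FeatureRecord})
    print("GSUB feature tags:", tags)
```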
4 November 2024
Jakub Kozakoszczak (Deutsche Telekom)
The summary of the talk will be made available shortly.
21 November 2024
Christian Chiarcos (University of Augsburg)
The summary of the talk will be made available shortly.
Please see also the talks given in 2000–2015 and 2015–2023.
11 March 2024
Mateusz Krubiński (Charles University in Prague)
Talk summary will be made available soon.
2 April 2020
Stan Matwin (Dalhousie University)
Efficient training of word embeddings with a focus on negative examples

This presentation is based on our AAAI 2018 and AAAI 2019 papers on English word embeddings. In particular, we examine the notion of “negative examples”, the unobserved or insignificant word-context co-occurrences, in spectral methods. We provide a new formulation of the word embedding problem by proposing a new intuitive objective function that perfectly justifies the use of negative examples. With the goal of efficient learning of embeddings, we propose a kernel similarity measure for the latent space that can effectively calculate similarities in high dimensions. Moreover, we propose an approximate alternative to our algorithm using a modified vantage-point tree and reduce the computational complexity of the algorithm with respect to the number of words in the vocabulary. We have trained various word embedding algorithms on Wikipedia articles comprising 2.3 billion tokens and show that our method outperforms the state of the art on most word similarity tasks by a good margin. We will round off our discussion with some general thoughts about the use of embeddings in modern NLP.
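For readers unfamiliar with the term, here is a generic illustration of what “negative examples” mean in embedding training. This is not the authors' algorithm (which is spectral and kernel-based); it is a minimal skip-gram-with-negative-sampling style update in NumPy, where an observed word-context pair is pulled together and randomly sampled unobserved pairs are pushed apart. All sizes and indices are toy placeholders.

```python
# Sketch of one SGD step on a negative-sampling objective:
# maximize log sigma(w.c) for the observed pair, and
# log sigma(-w.c_neg) for k randomly drawn "negative" contexts.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, k = 1000, 50, 5   # k = negative samples per positive pair

W = rng.normal(scale=0.1, size=(vocab_size, dim))  # word vectors
C = rng.normal(scale=0.1, size=(vocab_size, dim))  # context vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(word, context, lr=0.025):
    """One update for a single observed (word, context) pair."""
    negatives = rng.integers(0, vocab_size, size=k)  # unobserved contexts
    # Positive pair: push sigma(w.c) toward 1.
    g = sigmoid(W[word] @ C[context]) - 1.0
    grad_w = g * C[context]
    C[context] -= lr * g * W[word]
    # Negative pairs: push sigma(w.c_neg) toward 0.
    for n in negatives:
        gn = sigmoid(W[word] @ C[n])
        grad_w += gn * C[n]
        C[n] -= lr * gn * W[word]
    W[word] -= lr * grad_w

sgns_step(word=3, context=17)
```

In this sampling-based view, the k random contexts stand in for the unobserved co-occurrences that the abstract refers to; the talk's spectral formulation treats them through an objective function rather than explicit sampling.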