
Diff for "seminar"

Differences between revisions 665 and 767 (spanning 102 versions)
Revision 665 as of 2024-10-22 12:34:20 (size: 11872)
Revision 767 as of 2025-11-24 11:58:39 (size: 13032)
Line 3: Line 3:
= Natural Language Processing Seminar 2024–2025 =
= Natural Language Processing Seminar 2025–2026 =
Line 5: Line 5:
||<style="border:0;padding-bottom:10px">The NLP Seminar is organised by the [[http://nlp.ipipan.waw.pl/|Linguistic Engineering Group]] at the [[http://www.ipipan.waw.pl/en/|Institute of Computer Science]], [[http://www.pan.pl/index.php?newlang=english|Polish Academy of Sciences]] (ICS PAS). It takes place on (some) Mondays, usually at 10:15 am, often online – please use the link next to the presentation title. All recorded talks are available on [[https://www.youtube.com/ipipan|YouTube]]. ||<style="border:0;padding-left:30px">[[seminarium|{{attachment:seminar-archive/pl.png}}]]||
||<style="border:0;padding-bottom:10px">The NLP Seminar is organised by the [[http://nlp.ipipan.waw.pl/|Linguistic Engineering Group]] at the [[http://www.ipipan.waw.pl/en/|Institute of Computer Science]], [[http://www.pan.pl/index.php?newlang=english|Polish Academy of Sciences]] (ICS PAS). It will restart in October and will take place on (some) Mondays, usually at 10:15 am, often online – please use the link next to the presentation title. All recorded talks are available on [[https://www.youtube.com/ipipan|YouTube]]. ||<style="border:0;padding-left:30px">[[seminarium|{{attachment:seminar-archive/pl.png}}]]||
Line 7: Line 7:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''7 October 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Janusz S. Bień''' (University of Warsaw, professor emeritus) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[https://www.youtube.com/watch?v=2mLYixXC_Hw|{{attachment:seminarium-archiwum/youtube.png}}]] '''[[attachment:seminarium-archiwum/2024-10-07.pdf|Identifying glyphs in some 16th century fonts: a case study]]''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">Some glyphs from 16th century fonts, described in the monumental work “[[https://crispa.uw.edu.pl/object/files/754258/display/Default|Polonia Typographica Saeculi Sedecimi]]”, can be more or less easily identified with the Unicode standard characters. Some glyphs don't have Unicode codepoints, but can be printed with appropriate !OpenType/TrueType fonts using typographic features. For some of them their encoding remains an open question. Some examples will be discussed.||
||<style="border:0;padding-top:5px;padding-bottom:5px">'''15 September 2025'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Louis Esteve''' (Université Paris-Saclay) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">'''[[attachment:seminarium-archiwum/2025-09-15.pdf|Diversity and dataset size – a quantitative perspective]]''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">The field of Natural Language Processing (NLP) studies the abilities of computer systems to process and generate natural language, and has received increasing attention from the general population since the democratisation of generative and conversational models. However, behind the scenes, state-of-the-art NLP models are trained on ever-larger datasets, reaching trillions of tokens. It may be argued that the creation and use of such immense datasets is motivated by the idea that 'the larger the dataset, the more diverse it is', and that in turn 'if the training set is more diverse, it shall yield better models'. However, these statements thus far remain intuitions and need to be properly tested. To this end, this presentation will tackle methods and caveats of formal diversity quantification including limitations of the literature, a preliminary discussion on the link between diversity and dataset size, as well as their impact on downstream applications.||
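The notion of formal diversity quantification touched on in this abstract can be illustrated with two of the simplest lexical measures, type-token ratio and Shannon entropy; this is a minimal sketch chosen for illustration, not the specific methods the talk covers:

```python
from collections import Counter
import math

def type_token_ratio(tokens):
    """Fraction of distinct tokens; tends to fall as a sample grows."""
    return len(set(tokens)) / len(tokens)

def shannon_entropy(tokens):
    """Entropy (in bits) of the empirical token distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy corpus: a repetitive span followed by a more varied one
corpus = ("the cat sat on the mat " * 50 + "a dog ran in a park " * 5).split()
for size in (50, 200, len(corpus)):
    sample = corpus[:size]
    print(size, round(type_token_ratio(sample), 3), round(shannon_entropy(sample), 3))
```

Comparing such measures at increasing sample sizes is one naive way to probe whether "larger" actually means "more diverse" for a given corpus.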
Line 12: Line 12:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''14 October 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Alexander Rosen''' (Charles University in Prague)||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[https://www.youtube.com/watch?v=E2ujmqt7Q2E|{{attachment:seminarium-archiwum/youtube.png}}]] '''[[attachment:seminarium-archiwum/2024-10-14.pdf|Lexical and syntactic variability of languages and text genres. A corpus-based study]]''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:5px">This study examines metrics of syntactic complexity (SC) and lexical diversity (LD) as tools for analyzing linguistic variation within and across languages. Using quantifiable measures based on cross-linguistically consistent (morpho)syntactic annotation ([[https://universaldependencies.org/|Universal Dependencies]]), the research utilizes parallel texts from a large multilingual corpus ([[https://wiki.korpus.cz/doku.php/en:cnk:intercorp:verze16ud|InterCorp]]). Six SC and two LD metrics – covering the length and embedding levels of nominal and clausal constituents, mean dependency distance (MDD), and sentence length – are applied as metadata for sentences and texts.||
||<style="border:0;padding-left:30px;padding-bottom:5px">The presentation will address how these metrics can be visualized and incorporated into corpus queries, how they reflect structural differences across languages and text types, and whether SC and LD vary more across languages or text types. It will also consider the impact of language-specific annotation nuances and correlations among the measures. The analysis includes comparative examples from Polish, Czech, and other languages.||
||<style="border:0;padding-left:30px;padding-bottom:15px">Preliminary findings indicate higher SC in non-fiction compared to fiction across languages, with nominal and clausal metrics being dominant factors. The results suggest distinct patterns for MDD and sentence length, highlighting the impact of structural differences (e.g., analytic vs. synthetic morphology, dominant word-order patterns) and the influence of source text type and style.||
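Mean dependency distance (MDD), one of the SC metrics mentioned above, is straightforward to compute from dependency-annotated sentences; a minimal sketch, assuming CoNLL-style 1-based head indices with 0 marking the root:

```python
def mean_dependency_distance(heads):
    """heads[i] is the 1-based index of token i+1's head; 0 marks the root.
    MDD is the mean of |dependent position - head position| over non-root tokens."""
    distances = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
    return sum(distances) / len(distances)

# "She quickly left": 'left' (token 3) is the root, heads of tokens 1 and 2 are 3
print(mean_dependency_distance([3, 3, 0]))  # (|1-3| + |2-3|) / 2 = 1.5
```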
||<style="border:0;padding-top:5px;padding-bottom:5px">'''6 October 2025'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Stan Matwin''' (Dalhousie University / Institute of Computer Science, Polish Academy of Sciences) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[https://www.youtube.com/watch?v=hwBs4D7clls|{{attachment:seminarium-archiwum/youtube.png}}]] '''[[attachment:seminarium-archiwum/2025-10-06.pdf|Deep, multi-faceted learning of diagnosing mental disorders from clinical interview records]]''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}&#160;{{attachment:seminarium-archiwum/icon-en.gif|Slides partially in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">The key characteristics of mental illnesses are reflected in audio recordings of clinical interviews with patients and their families. We have developed a deep learning method that automatically extracts the relevant features necessary for the diagnosis of mental illnesses (ADHD, depression, bipolar disorder and schizophrenia) from such interviews. We use a variety of pre-trained models to extract representations from both the audio segments of these interviews and their text versions. We use several modern representation techniques (embeddings). We apply a Big Data approach by exploring existing audio and text corpora annotated with emotional labels. We address the problem of annotated data scarcity by using parametric model fine-tuning (Parameter Efficient Fine-Tuning). All these representations are then combined into a single multimodal form. To diagnose the above mental disorders, we use contrastive learning and model synthesis using a committee of experts (Mixture of Experts). The results show that through multimodal analysis of clinical interviews, mental disorders can be diagnosed with satisfactory accuracy (project conducted in collaboration with H. Naderi and R. Uher).||
Line 19: Line 17:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''28 October 2024''' (Note: the talk will take place at 12:00 pm) ||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Rafał Jaworski''' (Adam Mickiewicz University in Poznań)||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''Framework for aligning and storing of multilingual word embeddings for the needs of translation probability computation''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}||
||<style="border:0;padding-left:30px;padding-bottom:5px">The presentation will cover my research in the field of natural language processing for computer-aided translation. In particular, I will present the Inter-language Vector Space algorithm set for aligning sentences at the word and phrase level using multilingual word embeddings.||
||<style="border:0;padding-left:30px;padding-bottom:5px">The first function of the set is used to generate vector representations of words. They are generated using an auto-encoder neural network based on text data – a text corpus. In this way vector dictionaries for individual languages are created. The vector representations of words in these dictionaries constitute vector spaces that differ between languages.||
||<style="border:0;padding-left:30px;padding-bottom:5px">To solve this problem and obtain vector representations of words that are comparable between languages, the second function of the Inter-language Vector Space set is used. It is used to align vector spaces between languages using transformation matrices calculated using the singular value decomposition method. This matrix is calculated based on homonyms, i.e. words written identically in the language of space X and Y. Additionally, a bilingual dictionary is used to improve the results. The transformation matrix calculated in this way allows for adjusting space X in such a way that it overlaps space Y to the maximum possible extent.||
||<style="border:0;padding-left:30px;padding-bottom:5px">The last function of the set is responsible for creating a multilingual vector space. The vector space for the English language is first added to this space in its entirety and without modification. Then, for each other vector space, the transformation matrix of this space to the English space is first calculated. The vectors of the new space are multiplied by this matrix and thus become comparable to the vectors representing English words.||
||<style="border:0;padding-left:30px;padding-bottom:15px">The Inter-language Vector Space algorithm set is used in translation support systems, for example in the author's algorithm for automatic transfer of untranslated tags from the source sentence to the target one.||
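The SVD-based alignment step described in this abstract corresponds to the classical orthogonal Procrustes problem. A minimal sketch, assuming the seed pairs are already given as row-aligned matrices (the talk's system seeds them from homonyms and a bilingual dictionary):

```python
import numpy as np

def align_spaces(X, Y):
    """Orthogonal W minimizing ||XW - Y||_F (Procrustes): SVD of X^T Y gives W = U V^T."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 8))           # "target" space, e.g. English vectors
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
X = Y @ Q                               # "source" space: a rotated copy of Y
W = align_spaces(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))  # the rotation is recovered exactly
```

Because W is orthogonal, the mapping preserves distances within the source space while making its vectors directly comparable to the target space, which matches the multilingual-space construction the abstract describes.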
||<style="border:0;padding-top:5px;padding-bottom:5px">'''20 October 2025'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Arkadiusz Modzelewski''' (University of Padua / Polish-Japanese Academy of Information Technology)||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[https://www.youtube.com/watch?v=KNxm8Vt_wfw|{{attachment:seminarium-archiwum/youtube.png}}]] '''[[attachment:seminarium-archiwum/2025-10-20.pdf|The Why and How of Disinformation: Datasets, Methods and Language Models Evaluation]]''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">What language tools do disinformation agents employ? Can incorporating persuasion and intent knowledge enhance the ability of large language models to detect disinformation? And how effective are LLMs at identifying disinformation in Polish and English? In this talk, I will present findings from my PhD research on disinformation, persuasion, and the intent behind misleading information. I will introduce one of the largest Polish disinformation datasets, alongside a novel English dataset, both designed to capture manipulative techniques and intent of disinformation agents. Drawing on these and other resources, I will discuss how well current LLMs perform in detecting disinformation, persuasion, and intent, and highlight promising directions for improving their effectiveness in disinformation detection.||
Line 28: Line 22:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''4 November 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Jakub Kozakoszczak''' (Deutsche Telekom)||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''ZIML: A Markup Language for Regex-Friendly Linguistic Annotation''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">The summary of the talk will be made available shortly.||
||<style="border:0;padding-top:5px;padding-bottom:5px">'''3 November 2025'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Gražina Korvel''' (Vilnius University) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">'''[[attachment:seminarium-archiwum/2025-11-03.pdf|Developing Speech Corpora for Low-Resource Languages]]''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}&#160;{{attachment:seminarium-archiwum/icon-en.gif|Slides in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">Developing diverse, well-annotated speech corpora is essential for training modern machine learning models. This presentation discusses the principles and methodologies involved in creating large-scale speech corpora, with a focus on the Lithuanian language as a case study. It presents the Great Lithuanian Speech Corpus (LIEPA-3) project, outlining strategies for collecting, annotating, and ensuring the quality of data, as well as ensuring balanced representation across dialects, genders, and age groups. The talk also addresses challenges related to ethical data collection and corpus standardization.||
Line 33: Line 27:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''21 November 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Christian Chiarcos''' (University of Augsburg)||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''Aspects of Knowledge Representation for Discourse Relation Annotation''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">The summary of the talk will be made available shortly.||
||<style="border:0;padding-top:5px;padding-bottom:5px">'''24 November 2025'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Jan Eliasz''', '''Mikołaj Langner''', '''Jan Kocoń''' (Wrocław University of Science and Technology) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''[[attachment:seminarium-archiwum/2025-11-24-1.pdf|Language, Culture, and Ideology: Personalizing Offensiveness Detection in Political Tweets with Reasoning LLMs]]''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:5px">We investigate two complementary strategies for improving the reliability of Large Language Models in classification settings. First, we show that decomposing multi-label classification into a set of independent binary decisions offers clear practical advantages over structured output formulations: it substantially reduces parsing errors, works seamlessly with decoder-only architectures, and delivers faster inference when combined with prefix caching, without requiring any model retraining.||
||<style="border:0;padding-left:30px;padding-bottom:0px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''[[attachment:seminarium-archiwum/2025-11-24-2.pdf|Divide, Cache, Conquer. Dichotomic Prompting for Efficient Multi-Label LLM-Based Classification]]''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">Second, we demonstrate that reasoning-enabled LLMs are markedly better at tasks requiring contextual sensitivity, such as offensive-language annotation. When prompted to adopt a specific role, reasoning models maintain that role more consistently and make more accurate, fine-grained judgments than their non-reasoning counterparts. Viewed together, these findings highlight a unifying principle: LLMs become both more efficient and more context-aware when their decision process is made more structured, whether through task decomposition or through explicit reasoning.||
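The decomposition strategy described above (one binary yes/no query per label, with a shared, cacheable prompt prefix) can be sketched as follows; `ask_llm` is a hypothetical stand-in for a real model call, and the label set is invented for illustration:

```python
# Sketch: multi-label classification via independent binary decisions.
LABELS = ["offensive", "political", "ironic"]

def ask_llm(prompt: str) -> str:
    """Hypothetical stub: a real system would query an LLM here."""
    p = prompt.lower()
    return "yes" if "offensive" in p and "insult" in p else "no"

def classify(text: str, labels=LABELS) -> dict:
    """One yes/no query per label: output parsing reduces to matching 'yes'."""
    prefix = f"Text: {text}\n"  # shared across all label queries; cacheable
    return {
        label: ask_llm(prefix + f"Is this text {label}? Answer yes or no.") == "yes"
        for label in labels
    }

print(classify("That was a blatant insult."))
```

Because every query shares the same prefix, a serving stack with prefix caching recomputes only the short per-label suffix, which is where the inference speedup described in the abstract comes from.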
Line 38: Line 34:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''2 December 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Participants of !PolEval 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''Presentation of the workshop results''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">The program will be made available after the contest ends.||
||<style="border:0;padding-top:5px;padding-bottom:5px">'''1 December 2025'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Filip Kucia''', '''Anna Wróblewska''', '''Bartosz Grabek''', '''Szymon Trochimiak''' (Warsaw University of Technology) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''How to Make Museums More Interactive? Case Study of the “Artistic Chatbot”''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">This presentation examines the challenges of deploying large language model (LLM)-powered chatbots in public cultural spaces, based on our experience with Artistic Chatbot – a voice-based conversational agent used during a month-long art exhibition at the Warsaw Academy of Fine Arts. We focus on two intertwined issues: how to make a system answer questions about a multilingual artistic collection, and how to evaluate the quality of those answers. On the technical side, we discuss strategies for building a retrieval-augmented knowledge base from heterogeneous, multilingual exhibition materials and the trade-offs between native-language models and pivot-language approaches based on translation. From the perspective of interaction design, we outline a fully voice-based setup in a gallery space, in which visitors walk up to a ceiling-mounted microphone and address the system through spoken trigger expressions, without screens or keyboards. The core of the talk is a post-hoc evaluation. We analyse interaction logs and conduct a human annotation study to compare different modelling and retrieval configurations along dimensions such as factual precision, coherence and relevance to the exhibition domain. Using this case study, we ask how to define and measure a “good” answer in conversational AI for cultural heritage, and how choices about language, translation and voice interaction should influence future deployments in museums and galleries.||
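The retrieval step of such a retrieval-augmented setup can be illustrated with a toy ranking function; a deployment like the one described would use multilingual sentence embeddings rather than this bag-of-words sketch, and the snippets below are invented:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, snippets: list[str], k: int = 1) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(snippets, key=lambda s: cosine(q, Counter(s.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "The mural in hall A was painted in 1968.",
    "Opening hours are listed at the entrance.",
]
print(retrieve("who painted the mural", docs))
```

The retrieved snippets would then be injected into the LLM prompt; the abstract's trade-off between native-language models and pivot-language approaches concerns what happens on either side of this retrieval step.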
Line 43: Line 39:
||<style="border:0;padding-top:5px;padding-bottom:5px">'''19 December 2024'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Piotr Przybyła''' (Pompeu Fabra University / Institute of Computer Science, Polish Academy of Sciences)||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''Adaptive Attacks on Misinformation Detection Using Reinforcement Learning''' &#160;{{attachment:seminarium-archiwum/icon-en.gif|Talk in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">The summary of the talk will be made available shortly.||

||<style="border:0;padding-top:10px">Please see also [[http://nlp.ipipan.waw.pl/NLP-SEMINAR/previous-e.html|the talks given in 2000–2015]] and [[http://zil.ipipan.waw.pl/seminar-archive|2015–2023]].||
||<style="border:0;padding-top:10px">Please see also [[http://nlp.ipipan.waw.pl/NLP-SEMINAR/previous-e.html|the talks given in 2000–2015]] and [[http://zil.ipipan.waw.pl/seminar-archive|2015–2025]].||
Line 51: Line 42:

||<style="border:0;padding-top:5px;padding-bottom:5px">'''17 November 2025''' '''(NOTE: the seminar will start at 16:00)'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Marzena Karpińska''' (Microsoft) ||
||<style="border:0;padding-left:30px;padding-bottom:5px">[[http://zil.ipipan.waw.pl/seminarium-online|{{attachment:seminarium-archiwum/teams.png}}]] '''!OneRuler: testing multilingual language models on long contexts''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk in Polish.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">In this presentation, I will look at how well language models perform when extracting information from texts of up to 128,000 tokens (approximately 100,000 words) in 26 languages, including Polish. The results of the experiments show that as the length of the context increases, the differences between languages with large and small data resources also increase. Surprisingly, even minimal changes in the command (adding the possibility that the information does not exist) cause a significant decrease in effectiveness, especially with longer texts.||
Line 57: Line 53:

||<style="border:0;padding-top:5px;padding-bottom:5px">'''2 April 2020'''||
||<style="border:0;padding-left:30px;padding-bottom:0px">'''Stan Matwin''' (Dalhousie University)||
||<style="border:0;padding-left:30px;padding-bottom:5px">'''Efficient training of word embeddings with a focus on negative examples''' &#160;{{attachment:seminarium-archiwum/icon-pl.gif|Talk delivered in Polish.}} {{attachment:seminarium-archiwum/icon-en.gif|Slides in English.}}||
||<style="border:0;padding-left:30px;padding-bottom:15px">This presentation is based on our [[https://pdfs.semanticscholar.org/1f50/db5786913b43f9668f997fc4c97d9cd18730.pdf|AAAI 2018]] and [[https://aaai.org/ojs/index.php/AAAI/article/view/4683|AAAI 2019]] papers on English word embeddings. In particular, we examine the notion of “negative examples”, the unobserved or insignificant word-context co-occurrences, in spectral methods. We provide a new formulation for the word embedding problem by proposing a new intuitive objective function that perfectly justifies the use of negative examples. With the goal of efficient learning of embeddings, we propose a kernel similarity measure for the latent space that can effectively calculate the similarities in high dimensions. Moreover, we propose an approximate alternative to our algorithm using a modified Vantage Point tree and reduce the computational complexity of the algorithm with respect to the number of words in the vocabulary. We have trained various word embedding algorithms on articles of Wikipedia with 2.3 billion tokens and show that our method outperforms the state-of-the-art in most word similarity tasks by a good margin. We will round up our discussion with some general thoughts about the use of embeddings in modern NLP.||
