The Laboratory holds regular and international seminars, organizes public lectures, and participates in various HSE events.
Information about upcoming events is published on the laboratory's announcements page, which lists the events of the current year. Events from previous years can be found on the Events Archive page.
Seminars are open to everyone.
Admission is free for students, postgraduates, teachers and staff of the Higher School of Economics.
If you need a pass to the HSE building, please let us know by e-mail: email@example.com
Topic: "HAO Intelligence for Big Wisdom".
Speaker: Xindong Wu, Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), Hefei University of Technology, China.
Abstract: We define Big Wisdom with a HAO Intelligence framework, which integrates human intelligence (HI), artificial intelligence (AI), and organizational intelligence (OI) for big data analytics. Big Wisdom starts with big data, discovers Big Knowledge, and facilitates human–machine synergism for complex problem solving. When the HAO Intelligence framework is applied to a regular (non-big-data) environment, it becomes the well-known PEAS agent structure, and when the knowledge graph relies on domain expertise (rather than Big Knowledge), HAO Intelligence serves as an expert system. We discuss streaming data and streaming features and instantiate HAO Intelligence with a case study to illustrate synergized HAO Intelligence.
Topic: "Informative discourse feature selection for analysis of textual data".
Speaker: Elizaveta Goncharova, International Laboratory for Intelligent Systems and Structural Analysis.
Abstract: This research is dedicated to the analysis of modern pre-trained language models (LMs) and their ability to incorporate linguistic features, such as discourse, while solving natural language processing tasks. Recent pre-trained LMs have shown state-of-the-art results on a range of NLP tasks; however, these models still suffer from an insufficient linguistic representation of the text, which ultimately leads to a low level of language understanding. To improve these models' performance, the research proposes novel methods of discourse structure encoding. The introduced approaches allow us to incorporate discourse features into the LMs explicitly during pre-training or fine-tuning without requiring significant modifications to the model's architecture. We provide an experimental evaluation of the discourse-aware models on several complex NLP tasks, namely argumentation classification (AC), question answering (QA), and text summarization, and conclude that the modified models perform as well as or better than other discourse-free or more complex discourse-aware models on well-known NLP benchmarks. Finally, the influence of discourse features on the models' explainability is considered. We introduce an independent explainability pipeline that reveals relevant text spans, based on the discourse relations assigned to them, which can be used to explain deep learning models' decisions in supervised NLP tasks.
Topic: "'Less is more'-based heuristics for minimum sum-of-squares clustering".
Speaker: Nenad Mladenovich, Professor of the Faculty of Industrial Systems Design, Khalifa University, Abu Dhabi, UAE.
Abstract: The "Less is more" approach (LIMA) to optimization has recently been proposed. Its main idea is to include simplicity, in addition to efficiency and accuracy, when comparing two algorithms. In this talk, in addition to classical local searches for the unsupervised learning problem of minimum sum-of-squares clustering, i.e. k-means, h-means, and j-means, I will also present some new simple heuristics for huge datasets with several million objects: (i) one-pass k-means and (ii) decomposition/aggregation k-means. Computational results will show the advantages of including the simplicity criterion in big data analysis.
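For readers unfamiliar with the minimum sum-of-squares clustering problem mentioned in the abstract, the following is a minimal, generic sketch of the classical k-means local search (Lloyd's algorithm) and the sum-of-squares objective it minimizes. It is an illustrative example only, not the speaker's code, and the function names are our own.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm for minimum sum-of-squares clustering (k-means).

    points: list of coordinate tuples; returns (centers, sum-of-squares error).
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # random initial centers
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        # Update step: each center moves to the centroid of its cluster
        # (an empty cluster keeps its previous center).
        new_centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # converged to a local minimum
            break
        centers = new_centers
    # Minimum sum-of-squares objective: total squared distance of each
    # point to its nearest center.
    sse = sum(
        min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
        for p in points
    )
    return centers, sse
```

The one-pass and decomposition/aggregation variants discussed in the talk trade some accuracy of this local search for much lower cost on datasets with millions of objects.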