
Seminars

29.09.2016
Title: "A Faster Tableau for CTL*
SpeakerMark Reynolds , Head of the School of Computer Science and Software Engineering at The University of Western Australia. 
Abstract: There have been several recent suggestions for tableau systems for deciding satisfiability in the practically important branching time temporal logic known as CTL*. In this talk we present a streamlined and more traditional tableau approach built upon the author's earlier theoretical work. Soundness and completeness results are proved. A prototype implementation demonstrates the significantly improved performance of the new approach on a range of test formulas. We also see that it compares favourably to state of the art, game and automata based decision procedures.

28.09.2016
Title: "A new rule for LTL tableaux"
Speaker: Mark Reynolds

Abstract:
Propositional linear time temporal logic (LTL) is the standard temporal logic for computing applications and many reasoning techniques and tools have been developed for it. Tableaux for deciding satisfiability have existed since the 1980s. However, the tableaux for this logic do not look like traditional tree-shaped tableau systems and their processing is often quite complicated.
In this talk, we introduce a novel style of tableau rule which supports a new simple traditional-style tree-shaped tableau for LTL. We prove that it is sound and complete. As well as being simple to understand, to introduce to students and to use, it is also simple to implement and is competitive against state of the art systems. It is particularly suitable for parallel implementations.
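For background, classical tableau treatments of LTL rest on the fixpoint unfolding of the Until operator. The sketch below illustrates that standard unfolding only; it is not the new rule presented in the talk.

```python
# A minimal sketch (background only, not the rule from the talk): the classical
# fixpoint unfolding used by traditional LTL tableaux.
# "a U b" is expanded as  b  OR  (a AND X(a U b)),
# so a tableau node containing an Until splits into two children.

from dataclasses import dataclass

@dataclass(frozen=True)
class Until:
    left: object   # formula that must hold until...
    right: object  # ...this formula eventually holds

@dataclass(frozen=True)
class Next:
    sub: object    # formula required in the next state

def unfold_until(f: Until):
    """Return the two disjuncts produced by expanding f = left U right."""
    now = [f.right]            # branch 1: the right-hand side holds already
    later = [f.left, Next(f)]  # branch 2: left holds now, the Until is postponed
    return [now, later]

print(unfold_until(Until("a", "b")))
# [['b'], ['a', Next(sub=Until(left='a', right='b'))]]
```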

15.09.2016
Speaker: Francesco Kriegel, Technische Universität Dresden, Germany
Title: Knowledge Acquisition and Discovery in Description Logics with Formal Concept Analysis
Abstract:
Description Logics (DLs) are a family of conceptual languages suitable for knowledge representation and reasoning that have a strong logical foundation, for which the decidability and complexity of common reasoning problems are well explored. In particular, the reasoning tasks allow for a deduction of implicit knowledge from explicitly stated facts and axioms, and plenty of appropriate algorithms have been developed and implemented, e.g., tableaux algorithms and completion algorithms. DLs provide the logical underpinning of the Web Ontology Language (OWL) and some of its fragments, which are used for the Semantic Web and Linked Data, e.g., in the medical domain (SNOMED ontology), and in DBpedia as well as Wikidata (structured machine-readable derivations of Wikipedia). Furthermore, DLs allow for the expression of assertional and terminological knowledge, and an interesting problem is the (semi-)automatic deduction of terminological facts, so-called general concept inclusions (GCIs), from given assertional facts. This problem may also be varied by taking into account existing terminological facts. In this talk, we will explore the application of methods from Formal Concept Analysis (FCA) for tackling this problem for three different settings of the input data. In particular, we show how canonical implicational bases of suitable formal contexts or closure operators yield bases of GCIs for the input data. More specifically, we utilize suprema and infima of specific closure operators in the lattice of DL concept descriptions, and enumerate their canonical implicational base in order to generate the hidden terminological knowledge in a sound and complete manner. There is an algorithm that is able to solve this enumeration problem in a highly parallel way where no communication or synchronization between concurrent threads is necessary.
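As a rough illustration of the FCA machinery involved (a toy object-attribute context, assumed purely for exposition, not the DL setting of the talk), the sketch below lists the attribute implications that hold in a small formal context using the usual derivation operators; the canonical implicational base discussed in the talk is a non-redundant subset of such implications.

```python
# A minimal FCA sketch (toy data): every implication A -> A''\A holds in the
# context, where A'' is the closure of A under the derivation operators.

from itertools import combinations

objects = {
    "sparrow": {"bird", "flies"},
    "penguin": {"bird", "swims"},
    "trout":   {"swims"},
}
attributes = {"bird", "flies", "swims"}

def closure(attrs):
    """A -> A'': all attributes shared by every object that has all of A."""
    extent = [g for g, desc in objects.items() if attrs <= desc]
    if not extent:
        return set(attributes)      # empty extent: closure is everything
    result = set(attributes)
    for g in extent:
        result &= objects[g]
    return result

for r in range(len(attributes) + 1):
    for premise in combinations(sorted(attributes), r):
        p = set(premise)
        extra = closure(p) - p
        if extra:
            print(f"{sorted(p)} -> {sorted(extra)}")
```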

01.06.2016
Speaker: Gerhard Wohlgenannt, Assistant Professor at the Institute of Information Business, Vienna University of Economics and Business
Title: The Integration of Crowdsourcing into Knowledge Engineering Workflows, plus current research interests such as Google's word2vec
Abstract: The advent of the Web has significantly changed the context of systems that rely on formally specified knowledge, which led to opening up knowledge creation processes and tools to wider groups of contributors.
Further broadening of this process allows large populations of non-experts to create knowledge through the use of crowdsourcing techniques such as games or mechanised labour platforms. Crowdsourcing techniques can provide effective means for solving a variety of ontology engineering (and other knowledge-rich) problems. Yet, crowdsourcing is mainly used as external support to ontology engineering, without being integrated into the work of ontology engineers.


17.05.2016
Speaker: Alexander Tuzhilin, Stern School of Business, NYU
Title: "Recommending Remedial Learning Materials to the Students by Filling their Knowledge Gaps"
Abstract: A new content-based method of providing recommendations of remedial learning materials to the students will be presented. This method identifies gaps in students’ knowledge of the subject matter in the online courses that they take and provides recommendations of relevant targeted educational materials from the library of assembled learning materials to them in order to close the “gaps” in what the students have learned in the course. The proposed recommendation method is empirically validated using a randomized controlled experiment on the students from an online university. It is shown that the students not only liked the recommendations provided to them by the proposed method, but that these recommendations led to better performance results on the final exams for certain segments of the student body. This is joint work with Konstantin Bauman.
Bio: Alexander Tuzhilin is a Leonard N. Stern Professor of Business and the Chair of the Department of Information, Operations and Management Sciences at the Stern School of Business, NYU. Professor Tuzhilin’s current research interests include personalization, recommender systems and data mining. He has produced more than 100 research publications on these and other topics in various journals, books and conference proceedings. Professor Tuzhilin has served on the organizing and program committees of numerous conferences, including as the Program and as the General Chair of two IEEE International Conferences on Data Mining (ICDM), and as the Conference Chair and as the Chair of the Steering Committee of the ACM Conference on Recommender Systems. He currently serves as the Editor-in-Chief of the ACM Transactions on Management Information Systems.

21.04.2016
Speaker: Max Kanovich, Professor of Computer Science at the Department of Computer Science and Information Systems, University of London, and Visiting Professor at the Faculty of Computer Science, School of Data Analysis and Artificial Intelligence
Title: Can we mitigate the attacks on Distance-Bounding Protocols by using challenge-response rounds repeatedly?
Abstract: The core of distance-bounding protocols is a kind of challenge-response sub-protocol used to identify the participants, Prover and Verifier.
The aim of the talk is twofold:
1. First, for a typical challenge-response protocol, presented here, we give a full probabilistic analysis of an attack "between ticks", newly established by Max Kanovich, Tajana Ban Kirigin, Vivek Nigam, Andre Scedrov, and Carolyn Talcott. Their attack is based on the discrepancy between the 'observable' challenge-response time interval and the 'actual' challenge-response time interval; the discrepancy is caused by the mismatch between continuous time in nature and the discrete time of the computer clock.
2. Secondly, we challenge a general belief that, within distance-bounding protocols, Verifier can improve their performance by collecting statistics over a series of independent challenge-response rounds, aiming to observe an "acceptance challenge-response event" in the majority of the rounds.
The novelty of our approach is that we obtain quite surprising results both supporting such a claim and disproving it.
Namely, we show that in the case where Verifier decides to grant access by simple majority, the effect of the repeated challenge-response rounds can mitigate the attack, but only for specific values of the probability of an erroneous decision in one round.
Whereas in the case where Verifier decides to grant access by a large majority (that is, with a specified level of support, say two thirds, greater than the threshold of one half used for simple majority), the idea of repeated challenge-response rounds works perfectly well for our protocol. In particular, having observed the "acceptance challenge-response events" in a two-thirds majority of rounds, Verifier can establish the desired upper bounds for the 'actual' challenge-response time interval, though only with high probability.
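As a back-of-the-envelope illustration of the statistical question (with assumed numbers, not the protocol parameters analysed in the talk), the probability that a sequence of independent rounds passes a k-out-of-n acceptance rule is a simple binomial tail:

```python
# Illustrative numbers only: if a single challenge-response round is accepted
# erroneously with probability p, the chance that an attacker survives a
# "k out of n rounds" rule is a binomial tail probability.

from math import comb

def accept_prob(p, n, k):
    """Probability of at least k erroneous acceptances in n independent rounds."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.4          # assumed per-round error probability
n = 30
simple = accept_prob(p, n, n // 2 + 1)        # simple majority (> 1/2)
two_thirds = accept_prob(p, n, (2 * n) // 3)  # large majority (>= 2/3)
print(f"simple majority: {simple:.4f}, two-thirds majority: {two_thirds:.4f}")
```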
01.02.2016
Speaker: Stanislav O. Speranski, Scientific Researcher, Laboratory of Logical Systems, Sobolev Institute of Mathematics
Title: Reasoning with probability spaces
Abstract: Since interest in probabilistic logics has been growing during the last three decades, different computer scientists and mathematicians — such as Halpern, Keisler, Scott, Terwijn, and others — approach the matter in different ways. In this talk I shall present a formal language L for reasoning about probabilities with quantifiers over events (and over reals if needed), arising very naturally from Kolmogorov's notion of probability space. I am going to discuss computational and metamathematical aspects of L. As we shall see, L has many nice features that are not shared by the logics of Halpern and others. Among them are:
i. There exists an algorithm for deciding whether a given L-sentence is true in all Lebesgue-like (i.e. atomless) probability spaces or not.
ii. Within this framework we can obtain some new results about the relationship between computability and continuity.
iii. Further — by analogy with the case of Boolean algebras — we can define 'elementary' invariants of probability spaces.
Thus (i) generalises Tarski's famous result that the first-order theory of the ordered field of reals is decidable (which has recently been used to provide computational complexity bounds for certain statistical problems). (ii) leads to a deeper understanding of the above-mentioned relationship (which plays an important role in computing with continuous data and various parts of constructive mathematics). Finally (iii) gives an analogue of the famous elementary classification of Boolean algebras — this throws new light on foundational issues in reverse mathematics (which is an influential research programme in logic and computability).
Some references:
S.O. Speranski (2015). Quantifying over events in probability logic: an introduction. Mathematical Structures in Computer Science. Cambridge: accepted.
S.O. Speranski (2013). Complexity for probability logic with quantifiers over propositions. Journal of Logic and Computation 23:5, 1035–1055. Oxford.
R. Fagin, J.Y. Halpern and N. Megiddo (1990). A logic for reasoning about probabilities. Information and Computation 87:1–2, 78–128. Elsevier.

17.12.2015
Title: Ontology-based data access: succinctness and complexity
Speaker: Michael Zakharyaschev, Professor of Computer Science at the Department of Computer Science and Information Systems, Birkbeck, University of London.
Abstract: Ontology-based data access and management (OBDA) is a popular paradigm of organising access to various types of data sources that has been developed since the mid 2000s. In a nutshell, OBDA separates the user from the data sources (relational databases, triple stores, etc.) by means of an ontology, which provides the user with a convenient query vocabulary, hides the structure of the data sources, enriches incomplete data with background knowledge, and supports queries to multiple and possibly heterogeneous data sources. A key concept of OBDA is first-order rewritability, which reduces the problem of answering ontology-mediated queries (OMQs) to standard database query evaluation. 
The aim of this talk is to give an introduction to OBDA via query rewriting and to discuss its computational costs and real-world applications. In particular, we consider two fundamental computational problems in OBDA with the W3C standard ontology language OWL 2 QL - the succinctness problem for first-order rewritings, and the complexity problem for OMQ evaluation - and show how these problems are related to classical circuit complexity and database query evaluation.
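As a toy illustration of first-order rewritability (with an assumed vocabulary, not an example from the talk), a single OWL 2 QL-style axiom lets an ontology-mediated query be answered by evaluating a small union of plain queries directly over the data:

```python
# A toy illustration of first-order rewritability (assumed vocabulary).
# With the OWL 2 QL-style axiom
#     Professor ⊑ ∃teaches
# the ontology-mediated query  q(x) <- teaches(x, y)  can be rewritten into a
# union of two plain queries evaluated directly over the data, with no
# reasoning at query time.

data = {
    "teaches":   {("ann", "logic")},
    "Professor": {("bob",)},
}

def answer_omq(db):
    # rewriting of q(x) <- teaches(x, y) under Professor ⊑ ∃teaches:
    #   q(x) <- teaches(x, y)   UNION   q(x) <- Professor(x)
    answers = {x for (x, _) in db.get("teaches", set())}
    answers |= {x for (x,) in db.get("Professor", set())}
    return answers

print(sorted(answer_omq(data)))   # ['ann', 'bob'] -- bob is inferred to teach
```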

12.11.2015
Title: Data mining problems in computational mass spectrometry
Speaker: Attila Kertesz-Farkas, Associate Professor, Faculty of Computer Science, School of Data Analysis and Artificial Intelligence

Abstract: Tandem mass spectrometry has been extensively used to determine the amino acid sequence of a protein molecule. Amino acids are the building blocks of protein molecules (traditionally, there are 20 of them), and knowing the amino acid sequence of a protein gives us important insight into the function and structure of the protein molecule. A protein is first cut in vitro into smaller pieces, called peptides, to avoid experimental complexity. A mass spectrometer is then used to measure the mass distribution (spectrum) of the peptide fragments. The generated spectra can be considered as fingerprints of the peptide molecules. Computational programs are then applied to identify the amino acid sequences of proteins from mass spectrum data. In this presentation, I will give a general introduction to the computational challenges that scientists face every day in mass spectrometry data analysis.
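A small sketch of the kind of computation at the heart of such identification pipelines (approximate residue masses, a simplified fragmentation model, not the speaker's software): predicting the b- and y-ion fragment masses of a candidate peptide, which search engines then match against the peaks of an observed spectrum.

```python
# A minimal sketch of a core computation in peptide identification
# (approximate monoisotopic residue masses; a simplification).

RESIDUE = {  # approximate monoisotopic residue masses in Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111,
}
WATER, PROTON = 18.01056, 1.00728

def fragment_masses(peptide):
    """Singly charged b- and y-ion m/z values for each backbone cleavage."""
    b, y = [], []
    for i in range(1, len(peptide)):
        prefix = sum(RESIDUE[a] for a in peptide[:i])
        suffix = sum(RESIDUE[a] for a in peptide[i:])
        b.append(prefix + PROTON)           # b-ion: N-terminal fragment
        y.append(suffix + WATER + PROTON)   # y-ion: C-terminal fragment
    return b, y

b_ions, y_ions = fragment_masses("PVGAK")
print([round(m, 3) for m in b_ions])
print([round(m, 3) for m in y_ions])
```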

08.10.2015
Title: A Characterization of the Single-Peaked Domain
Speaker: Prof. Dr. Clemens Puppe, Chair of Economic Theory, Karlsruhe Institute of Technology (KIT)
Abstract: It is proved that, among all restricted preference domains that guarantee consistency (i.e. transitivity) of pairwise majority voting, the single-peaked domain is the only minimally rich and connected domain that contains two completely reversed strict preference orders. It is argued that this result explains the predominant role of single-peakedness as a domain restriction in models of political economy and elsewhere over the last seven decades. Several intermediate steps of the proof shed further light on the structure of restricted domains that guarantee transitivity of pairwise majority voting, among them the result that a single-crossing ('order-restricted') domain can be minimally rich only if it is a subdomain of a single-peaked domain.
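A toy numerical check of the phenomenon behind this result (an assumed preference profile, not taken from the paper): with single-peaked preferences on a linear axis, pairwise majority voting produces no cycles.

```python
# Toy example: single-peaked preferences on a line, pairwise majority is transitive.

from itertools import permutations

axis = [0, 1, 2, 3, 4]                 # alternatives ordered on a line
peaks = [0, 1, 1, 3, 4]                # one ideal point per voter

# single-peaked preference: alternatives ranked by distance to the voter's peak
# (ties broken toward the left end, which keeps each order strict)
prefs = [sorted(axis, key=lambda a, p=p: (abs(a - p), a)) for p in peaks]

def majority_prefers(a, b):
    """True if a strict majority of voters ranks a above b."""
    wins = sum(1 for r in prefs if r.index(a) < r.index(b))
    return wins > len(prefs) / 2

# check transitivity of the strict majority relation over all ordered triples
for x, y, z in permutations(axis, 3):
    if majority_prefers(x, y) and majority_prefers(y, z):
        assert majority_prefers(x, z), "majority cycle found"
print("pairwise majority is transitive on this single-peaked profile")
```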

29.09.2015
Title: "Formal Concept Analysis: A Useful Example of Modern Mathematics"
Speaker: Prof. Bernhard Ganter, Technische Universität Dresden
Abstract: We give impressions of a field of research that was started some 30 years ago under the name of “Formal Concept Analysis.” It is a mathematical theory, based on algebraic structures such as complete lattices, with a spectrum of applications in data analysis and knowledge processing.
Although the theory is advanced and substantial, it finds more attention in computer science than in mathematics. Indeed, the field has some potential for supporting recent developments of structured knowledge representations. In our lecture, we introduce the very basic notions of the field, show the possibility of expressive graphical representations, discuss some applications and, eventually, demonstrate why the value of this development lies in its mathematical nature.
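As a minimal illustration of the basic notions (a toy context, assumed purely for exposition), the formal concepts of a small object-attribute table can be enumerated by brute force:

```python
# A formal concept is a pair (extent, intent) such that the extent is exactly
# the set of objects sharing the intent, and vice versa.  Brute force over
# attribute subsets is enough for a context this small.

from itertools import combinations

context = {
    "dove": {"flies", "feathers"},
    "bat":  {"flies", "fur"},
    "cat":  {"fur"},
}
attributes = {"flies", "feathers", "fur"}

def extent(intent):
    return {g for g, desc in context.items() if intent <= desc}

def intent(ext):
    common = set(attributes)
    for g in ext:
        common &= context[g]
    return common if ext else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for b in combinations(sorted(attributes), r):
        e = extent(set(b))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```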

18.05.2015
Title: "On computing all abductive explanations from a propositional Horn theory"
Speaker: Kazuhisa Makino, Research Institute for Mathematical Sciences (RIMS) at Kyoto University.
Abstract: Abduction is a fundamental mode of reasoning with applications in many areas of AI and Computer Science. The computation of abductive explanations is an important computational problem, which is at the core of early systems such as the ATMS and Clause Management Systems and is intimately related to prime implicate generation in propositional logic. Many algorithms have been devised for computing some abductive explanation, and the complexity of the problem has been well studied. However, little attention has been paid to the problem of computing multiple explanations, and in particular all explanations for an abductive query. We fill this gap and consider the computation of all explanations of an abductive query from a propositional Horn theory, or of a polynomial subset of them. Our study pays particular attention to the form of the query, ranging from a literal to a compound formula, to whether explanations are based on a set of abducible literals and to the representation of the Horn theory, either by a Horn conjunctive normal form (CNF) or model-based in terms of its characteristic models. For these combinations, we present either tractability results in terms of polynomial total-time algorithms, intractability results in terms of nonexistence of such algorithms (unless P = NP), or semi-tractability results in terms of solvability in quasi-polynomial time, established by polynomial-time equivalence to the problem of dualizing a monotone CNF expression. Our results complement previous results in the literature, and refute a longstanding conjecture by Selman and Levesque. They elucidate the complexity of generating all abductive explanations and shed light on related problems such as generating sets of restricted prime implicates of a Horn theory. The algorithms for tractable cases can be readily applied for generating a polynomial subset of explanations in polynomial time.
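A small sketch of the problem statement itself (a toy Horn theory and brute-force search; the talk concerns algorithms that do far better than enumerating subsets): an explanation of a query is a set of abducible literals that, together with the theory, entails the query.

```python
# Toy Horn abduction: find all subset-minimal explanations of a query literal.

from itertools import combinations

# Horn theory: each rule is (body, head); facts would have an empty body
theory = [
    ({"rain"}, "wet_grass"),
    ({"sprinkler"}, "wet_grass"),
    ({"wet_grass", "cold"}, "slippery"),
]
abducibles = ["rain", "sprinkler", "cold"]
query = "slippery"

def entails(facts):
    """Forward chaining: the set of atoms derivable from `facts`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in theory:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

explanations = []
for r in range(len(abducibles) + 1):
    for subset in combinations(abducibles, r):
        if query in entails(set(subset)):
            # keep only subset-minimal explanations
            if not any(set(e) <= set(subset) for e in explanations):
                explanations.append(subset)
print(explanations)   # [('rain', 'cold'), ('sprinkler', 'cold')]
```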

22.04.2015
Speaker: Umnov Alexey Vitalievitch, lecturer at the Big Data and Information Retrieval School, Faculty of Computer Science, NRU HSE
Title: Using sparse representations to deal with the Gibbs effect in images

Abstract: Sparse representations are an approach to representation learning widely used in image processing. The main idea is to transform the initial data in such a way that the representation of every object uses only a small subset of basis elements. In many image processing tasks this makes the representation robust and informative. One such task is detecting images affected by the Gibbs effect, an artefact that appears as a distortion when high-frequency information is lost.
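A minimal sketch of the core idea of sparse coding (toy data; the method from the talk for detecting Gibbs artefacts builds on representations of this kind, but is not shown here): approximating a signal with only a few atoms of a dictionary via orthogonal matching pursuit.

```python
# Greedy sparse coding of a toy signal with orthogonal matching pursuit (numpy).

import numpy as np

def omp(D, x, n_atoms):
    """Pick n_atoms columns of dictionary D to approximate signal x."""
    residual, support = x.copy(), []
    for _ in range(n_atoms):
        # atom most correlated with what is still unexplained
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)            # normalised dictionary atoms
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]        # a signal that is truly 2-sparse
code = omp(D, x, n_atoms=2)
print(np.nonzero(code)[0], np.round(code[np.nonzero(code)], 2))
```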

15.04.2015
Speaker: Zaitsev Alexey, PhD student at IITP RAS and researcher in its Intellectual Data Analysis and Modelling department
Title: Combining data of different precision using Gaussian models

Abstract: In engineering practice one often faces the task of combining data of different precision: a regression model must be built when, in addition to a sample of accurate evaluations of a function, there is another, larger sample of rough evaluations. The seminar consists of two parts. The first part discusses theoretical properties of regression based on Gaussian processes and proves a theorem about the Bayesian procedure for estimating a Gaussian process. The second part describes an algorithm that applies the Gaussian-process regression framework to large samples of data of different precision.
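A small numpy sketch of plain Gaussian-process regression with an RBF kernel (assumed toy data); the variable-precision setting of the talk extends this by tying the large rough sample and the small accurate sample together through a joint (co-kriging style) covariance, which is not shown here.

```python
# Plain GP regression on toy 1-D data: posterior mean and variance at test points.

import numpy as np

def rbf(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, size=12)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.normal(size=12)
x_test = np.linspace(0, 1, 5)

noise = 0.05**2
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_star = rbf(x_test, x_train)

# GP posterior: mean = K* K^{-1} y, variance = k(x*,x*) - diag(K* K^{-1} K*^T)
alpha = np.linalg.solve(K, y_train)
mean = K_star @ alpha
var = 1.0 - np.sum(K_star * np.linalg.solve(K, K_star.T).T, axis=1)
print(np.round(mean, 2))
print(np.round(var, 4))
```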

09.04.2015
Speaker: Neznanov Alexey Andreevitch, Senior Research Fellow, Faculty of Computer Science (Laboratory for Intelligent Systems and Structural Analysis)

Title: Evidence-based medicine and intelligent data analysis in clinical informatics using the FCART system
Abstract: Evidence-based medicine (EBM) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. Formally, EBM is a methodology of medical practice based on explicit checks of the efficiency of any clinical action. One of the most powerful research tools for generating reliable evidence about clinical actions and protocols is the randomized controlled trial (RCT). The methodology of RCTs is based on medical statistics as the mathematical foundation of the design and analysis of experiments.
In recent years we have seen a revolution in clinical data analysis associated with advances in medical informatics. Clinical informatics (the part of medical informatics that supports the activity of clinicians) is closely tied to RCTs. This raises the question of how the EBM philosophy and RCT methodology are linked with technologies of intelligent data analysis.
The seminar will focus on:
1. EBM philosophy,
2. RCT methodology and specific properties of medical statistics,
3. Differences between RCT results and other medical data,
4. Role of clinical decision support systems (CDSS),
5. Current tasks of clinical informatics and problems in the integration of medical information systems,
6. Use of software tool Formal Concept Analysis Research Toolbox (FCART) in clinical data analysis.

26.03.2015
Speaker: Panov Alexandr Igorevitch, Institute for Systems Analysis of the Russian Academy of Sciences

Title: A study of the figurative and procedural components of the picture of the world
Abstract: The talk discusses the properties of the descriptive and procedural components of the picture of the world. Dynamic and hierarchical recognition operators are introduced, and the correctness of the linear scheme is examined. The procedural component is described by a system of rules.

18.03.2015
Speaker: Dmitry Alexandrov, PhD student at the Department of Mathematical Theory of Intelligent Systems, Faculty of Mechanics and Mathematics, MSU

Title: Complexity of recognizing whether a word belongs to a regular language defined by a set of regular expressions
Abstract: The talk covers the basic types of finite automata used for regular expression matching: simple deterministic finite automata (DFA), automata with modified transitions, and automata with extended states. It also proposes an approach to modifying one of the classes of regular expressions in order to lower the automaton complexity of recognition.
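As background for the automata discussed here, the sketch below (a toy automaton, assumed for illustration) decides word membership by simulating a deterministic finite automaton.

```python
# Deciding membership of a word in a regular language by DFA simulation.

# DFA for the language (ab)* over the alphabet {a, b}
delta = {("q0", "a"): "q1", ("q1", "b"): "q0"}
start, accepting = "q0", {"q0"}

def accepts(word):
    state = start
    for ch in word:
        state = delta.get((state, ch))
        if state is None:          # no transition defined: reject immediately
            return False
    return state in accepting

for w in ["", "ab", "abab", "aba", "ba"]:
    print(w or "<empty>", accepts(w))
```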

12.03.2015
Speaker: Yury Kashnitsky, PhD student, NRU HSE
Title: Methods for defining graph similarity
Abstract: The talk gives a comparative analysis of methods for recognizing similarity between graphs. This question arises in many applications of data analysis: social network analysis, biological network analysis, image processing, studies of the properties of chemical agents, and computer vision.
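One widely used family of such methods is based on Weisfeiler-Lehman label refinement; the sketch below (toy graphs stored as plain dictionaries, not a method from the talk) turns each graph into a multiset of refined labels and scores similarity by the overlap of those multisets.

```python
# Weisfeiler-Lehman style graph similarity on toy adjacency dicts.

from collections import Counter

def wl_labels(adj, iterations=2):
    """Multiset of Weisfeiler-Lehman labels over all refinement rounds."""
    labels = {v: str(len(nbrs)) for v, nbrs in adj.items()}   # start: degrees
    seen = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            v: labels[v] + "|" + ",".join(sorted(labels[u] for u in adj[v]))
            for v in adj
        }
        seen.update(labels.values())
    return seen

def similarity(adj1, adj2):
    a, b = wl_labels(adj1), wl_labels(adj2)
    shared = sum((a & b).values())          # size of the multiset intersection
    return shared / max(sum(a.values()), sum(b.values()))

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(similarity(triangle, triangle))   # 1.0
print(similarity(triangle, path))       # strictly lower
```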

29.01.2015
Speaker: Artem Revenko, NRU HSE, TU Dresden, TU Wien

Title: Automatic construction of implication theories
Abstract: In this talk we consider automation of constructing implicative dependencies between mathematical instances. For this purpose a robust active learning technique called Attribute Exploration is used. The technique collects object-attribute pairs from an expert and builds a concise representation of implicative dependencies (implication basis) between the attributes. The implication basis possesses the "completeness" property, i.e. every valid implication can be deduced from the basis. Two mathematical domains were considered as applications. For algebraic identities it was possible to automatically construct the valid implication basis for all pairwise non-equivalent identities of size up to 5. It was not possible to accomplish this construction using only finite algebras and due to this fact heuristics and an algorithm for finding appropriate algebras over an infinite universe were introduced. For the commuting functions it was possible to automatically construct the implication basis for the functions defined on a domain of size 3, therefore repeating the most advanced result in this scientific field. It was necessary to modify the Attribute Exploration technique as the functions are objects and attributes at the same time, and to introduce an efficient algorithm for finding commuting functions.
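A small sketch of the "completeness" property mentioned above (toy attributes, not the algebraic identities or commuting functions from the talk): an implication follows from a basis exactly when its conclusion is contained in the closure of its premise under the basis.

```python
# Checking whether an implication is deducible from an implication basis by
# computing the closure of its premise under the basis.

basis = [
    ({"associative", "has_inverses", "has_identity"}, {"group"}),
    ({"group", "commutative"}, {"abelian"}),
]

def close(attrs):
    """Smallest attribute set containing attrs and respecting every implication."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in basis:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def follows(premise, conclusion):
    return conclusion <= close(premise)

print(follows({"associative", "has_inverses", "has_identity", "commutative"},
              {"abelian"}))                  # True: deducible from the basis
print(follows({"commutative"}, {"group"}))   # False
```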

 
