
Seminars 2019

10.09.2019

Joint session of the Faculty of Computer Science Colloquium with the workshop "Mathematical Models of Information Technologies"
Title: "From Theorem Proving to Cognitive Reasoning"
Speaker: Ulrich Furbach (University of Koblenz / wizAI GmbH)
Abstract: Starting from a depiction of the state of the art in predicate logic theorem proving, we address problems that occur when provers are applied in the wild. In particular, we discuss how automated reasoning systems can be used for natural language question answering. We present our approach to tackling common sense reasoning benchmarks within the CoRg project (http://corg.hs-harz.de) and demonstrate how word embeddings can help with the problem of axiom selection.
Venue: Pokrovsky boulevard, 11, room S902
Time: 16:40–18:00
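As a rough, hypothetical sketch of the embedding-based axiom selection mentioned in the abstract above (not the CoRg implementation): axioms can be ranked by the cosine similarity between the averaged word vectors of each axiom and those of the conjecture. The `vectors` mapping (word to vector) and the whitespace tokenization are stand-ins.

```python
import numpy as np

def embed(text, vectors):
    """Average the word vectors of all in-vocabulary tokens."""
    words = [w for w in text.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

def select_axioms(conjecture, axioms, vectors, k=10):
    """Rank candidate axioms by cosine similarity to the conjecture; keep the top k."""
    c = embed(conjecture, vectors)
    def cos(a):
        v = embed(a, vectors)
        return float(v @ c / (np.linalg.norm(v) * np.linalg.norm(c) + 1e-12))
    return sorted(axioms, key=cos, reverse=True)[:k]
```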

26.08.2019
Title: "Collaborative Conceptual Exploration" and "Discovering Implicational Knowledge in Wikidata"
Speaker: Tom Hanika (Berlin School of Library and Information Science, Humboldt-Universität zu Berlin / Knowledge & Data Engineering Group, University of Kassel, Germany)
Abstract: 15:10 "Collaborative Conceptual Exploration"
In domains with high knowledge distribution, a natural objective is to create principled foundations for collaborative interactive learning environments. In this talk we present a first mathematical characterization of a collaborative learning group, a consortium, based on closure systems of attribute sets and the well-known attribute exploration algorithm from Formal Concept Analysis. To this end, we introduce (weak) local experts for subdomains of a given knowledge domain. These entities are able to refute and potentially accept a given (implicational) query for some closure system that is a restriction of the whole domain. On this basis, we build up a consortial expert and present first insights into the ability of such an expert to answer queries. Furthermore, we describe techniques for coping with falsely accepted implications and for combining counterexamples. Using notions from combinatorial design theory, we further expand those insights, up to first results on the decidability problem of whether a given consortium is able to explore some target domain.
In this talk, we present the results from: Hanika, T., Zumbrägel, J.: Towards Collaborative Conceptual Exploration. In: Chapman, P., Endres, D., and Pernelle, N. (eds.) ICCS. pp. 120–134. Springer (2018).
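A toy sketch of the two central notions, hedged: this is not the formal machinery of the cited paper, only an illustration of closure under implications and of a consortial expert that aggregates the answers of local experts.

```python
def close(attrs, implications):
    """Close an attribute set under implications, given as (premise, conclusion) set pairs."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def consortial_answer(query, experts):
    """A consortial expert refutes a query as soon as one local expert refutes it
    (returning that expert's counterexample) and accepts it only if all accept.
    Each expert is a callable returning (accepted, counterexample_or_None)."""
    for expert in experts:
        accepted, counterexample = expert(query)
        if not accepted:
            return False, counterexample
    return True, None
```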
16:00 "Discovering Implicational Knowledge in Wikidata"
Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Examples include the proprietary knowledge graphs of companies such as Google, Facebook, IBM, and Microsoft, as well as freely available ones such as YAGO, DBpedia, and Wikidata. A distinguishing feature of Wikidata is that its knowledge is collaboratively edited and curated. While this greatly enhances the scope of Wikidata, it also makes it impossible for a single individual to grasp complex connections between properties or understand the global impact of edits in the graph.
In this talk, we show an application of methods from Formal Concept Analysis to efficiently identify comprehensible implications that are implicitly present in the data. Although the complex structure of data modeling in Wikidata is not amenable to a direct approach, we overcome this limitation by extracting contextual representations of parts of Wikidata in a systematic fashion. We demonstrate the practical feasibility of our approach through several experiments and show that the results may lead to the discovery of interesting implicational knowledge. Besides providing a method for obtaining large real-world data sets for FCA, we sketch potential applications in offering semantic assistance for editing and curating Wikidata.
In this talk, we report on the results from: Hanika, T., Marx, M., Stumme, G.: Discovering Implicational Knowledge in Wikidata. In: Cristea, D., Le Ber, F., and Sertkaya, B. (eds.) ICFCA. pp. 315–323. Springer (2019).
Venue: Pokrovsky boulevard, 11, room R503
Time: 15:10; 16:00
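A deliberately simplified sketch of the kind of "contextual representation" described in the second abstract; the actual extraction must handle qualifiers, subclass structure, and scale, so treat the names and structure here as assumptions.

```python
def extract_context(statements):
    """Build a formal context (items x properties) from Wikidata-style
    (item, property) pairs: an item 'has' a property iff some statement uses it."""
    context = {}
    for item, prop in statements:
        context.setdefault(item, set()).add(prop)
    return context

def common_properties(items, context):
    """Derivation operator: the properties shared by all given items.
    An implication A -> B holds when every item with all of A also has all of B."""
    sets = [context[i] for i in items]
    return set.intersection(*sets) if sets else set()
```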

07.06.2019
The 3rd International Workshop "Formal Concept Analysis for Knowledge Discovery"
Topics of Interest: 
·       foundations
·       concept lattices and related structures
·       attribute implications and data dependencies
·       data preprocessing
·       redundancy and dimensionality reduction
·       information retrieval
·       classification
·       clustering
·       association rules and other data dependencies
·       ontologies
Speaker: Rodin A.V. (Institute of Philosophy, RAS)
Title: Truth and Justification in Knowledge Representation
Abstract: While traditional philosophical epistemology stresses the importance of distinguishing knowledge from true beliefs, formalising this distinction with standard logical means turns out to be problematic. In Knowledge Representation (KR) as a Computer Science discipline, this crucial distinction is largely neglected. A practical consequence of this neglect is that existing KR systems provide their users with knowledge that they can neither verify nor justify by means of the system itself. In terms of traditional epistemology, what such a user gets is a certain (possibly true) belief, but not knowledge sensu stricto.
Recent advances in the research area at the crossroads of computational mathematical logic, formal epistemology, and computer science open new perspectives for an effective computational realisation of justificatory procedures in KR. After exposing the problem of justification in logic, epistemology, and KR, we sketch a novel framework for representing knowledge along with relevant justificatory procedures, which is based on Homotopy Type Theory (HoTT) and supports the representation of both propositional knowledge, aka knowledge-that, and non-propositional knowledge, aka knowledge-how or procedural knowledge. The default proof-theoretic semantics of HoTT allows the two sorts of represented knowledge to be combined at the formal level by interpreting all permissible constructions as justification terms (witnesses) of associated propositions.
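As a minimal illustration of the propositions-as-types reading that underlies HoTT's proof-theoretic semantics (written in Lean 4 for concreteness, not in the speaker's framework): a term inhabiting a type is itself a justification witness, and functions witness procedural knowledge.

```lean
-- Propositions-as-types: the proof term itself justifies the proposition.
example : 2 + 2 = 4 := rfl

-- "Knowledge-how" as a construction: a function is a witness that
-- an output can be produced from any input of the given type.
def double : Nat → Nat := fun n => n + n

-- A dependent pair bundles a value with a proof about it, combining
-- procedural and propositional knowledge in a single term.
example : { n : Nat // n > 3 } := ⟨4, by decide⟩
```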

Speaker: Bogatyrev M.Y. (Tula State University (TSU))
Title: Towards Constructing Multidimensional Formal Contexts on Natural Language Texts

Abstract: The recent success of applying vector-based and graph-based models of text semantics demonstrates that semantics can be interpreted as a multidimensional notion. In this paper, a brief survey of such models is presented, and the idea of modeling multidimensional text semantics with multidimensional formal contexts is discussed. Several variants of realizing three-dimensional formal contexts using conceptual graphs as the semantic model of texts are presented. The investigations were carried out on abstracts of biomedical papers from the PubMed database.
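For concreteness, a hypothetical fragment of a three-dimensional (triadic) formal context of the kind the abstract describes, with (object, attribute, condition) triples standing in for (text, concept, semantic role) data extracted from conceptual graphs; the role names are invented for the example.

```python
# Hypothetical triples (text, concept, semantic role).
triples = {
    ("abstract1", "aspirin", "agent"),
    ("abstract1", "inflammation", "patient"),
    ("abstract2", "aspirin", "theme"),
}

def incident(obj, attr, cond, context=triples):
    """Triadic incidence relation: does the object carry the attribute under the condition?"""
    return (obj, attr, cond) in context

print(incident("abstract1", "aspirin", "agent"))  # True
```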

Speaker: Shilov N.V. (Innopolis University)
Title: Designing an Ontology for Classification and Navigation in the Computer Languages Universe
Abstract: During the semicentennial history of Computer Science and Information Technologies, several thousand computer languages have been created. The computer language universe includes languages for different purposes (programming, specification, modeling, etc.). In each of these branches of computer languages, it is possible to track several approaches (imperative, declarative, object-oriented, etc.), disciplines of processing (sequential, non-deterministic, distributed, etc.), and formalized models, such as Turing machines or logic inference machines. These arguments justify the importance of an adequate classification of computer languages. Computer language paradigms are the basis for such a classification. They rest on joint attributes that allow us to differentiate branches in the computer language universe. We present our computer-aided approach to the problem of computer language classification and paradigm identification. The basic idea consists in the development of a specialized knowledge portal for automatic search and updating, providing free access to information about computer languages. The primary aims of our project are research into the ontology of computer languages and assistance in the search for appropriate languages for computer system designers and developers. The paper presents our vision of the classification problem, the basic ideas of our approach, the current state and challenges of the project, and the design of a query language (based on a combination of temporal, belief, and description logics augmented with FCA constructs: derivation operators and concepts).
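A toy illustration of the FCA side of such a query language (the attribute names are invented, not taken from the portal's ontology): languages form the objects of a formal context, paradigm attributes form its attributes, and the derivation operator answers extent queries.

```python
# A tiny invented context of languages vs. paradigm attributes.
context = {
    "Haskell": {"declarative", "functional", "static-typing"},
    "Prolog":  {"declarative", "logic"},
    "Java":    {"imperative", "object-oriented", "static-typing"},
    "Python":  {"imperative", "object-oriented", "dynamic-typing"},
}

def extent(attrs, ctx=context):
    """Derivation operator: all languages possessing every attribute in attrs."""
    return {lang for lang, a in ctx.items() if attrs <= a}

print(extent({"object-oriented", "static-typing"}))  # {'Java'}
```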

Speaker: Nersisjan S.A. (MSU)
Title: Fitting a mixture of distributions that are close to uniform on boxes
Abstract: Fitting mixture distributions is a widely used clustering approach that finds many applications in areas such as computer science, biology, and medicine. Since in most cases there is no exact algorithm for global maximum likelihood (or maximum a posteriori) estimation of mixture distribution parameters, special local optimization techniques such as the EM algorithm are usually employed.
In this work, the EM algorithm was applied to a mixture of generalized Gaussian distributions, which play the role of a smooth approximation to the uniform distribution on a box with variable position and edge lengths. One advantage of this approach is interpretability: for each resulting cluster and each data feature, the algorithm outputs the corresponding range.
The proposed approach can be considered a generalization of the previously studied problem of optimal box positioning, which can also be formulated as a problem from Formal Concept Analysis, namely, finding an interval pattern concept of maximum extent size.
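A one-dimensional sketch of the smooth box approximation the abstract refers to: the generalized Gaussian density is nearly uniform on [mu - alpha, mu + alpha] when the shape parameter beta is large. The parameterization below is the standard one and is an assumption about, not a quote of, the talk's model.

```python
import numpy as np
from scipy.special import gamma

def gen_gaussian_pdf(x, mu, alpha, beta):
    """Generalized Gaussian density; for large beta it approximates the uniform
    density on [mu - alpha, mu + alpha]. A d-dimensional 'box' density is the
    product of such factors over the coordinates."""
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x - mu) / alpha) ** beta)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(gen_gaussian_pdf(x, mu=0.0, alpha=1.0, beta=8.0))  # ~0 outside the box, ~0.5 inside
```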

Speaker: Vinogradov D.V. (Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences)
Title: Random similarities computed on GPGPU
Abstract: The paper describes an implementation of a very simple probabilistic algorithm for finding similarities between training examples using general-purpose computing on graphics processing units (GPGPU). The algorithm was programmed in OpenCL, and its capabilities were investigated on an AMD Radeon VII graphics card under Kubuntu Linux 18.04 LTS.
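A CPU-side toy of what such a probabilistic similarity search might look like (a guess at the flavor of the algorithm, not the talk's OpenCL code): training examples are bit masks of attributes, and similarities are built by repeated random intersections, an operation that parallelizes naturally on a GPGPU.

```python
import random

def random_similarity(examples, steps=100, seed=None):
    """Start from a random training example and repeatedly intersect it with
    further random examples, keeping only non-empty intersections. Each bitwise
    AND over attribute masks is the data-parallel kernel a GPGPU would execute."""
    rng = random.Random(seed)
    sim = rng.choice(examples)
    for _ in range(steps):
        candidate = sim & rng.choice(examples)
        if candidate:
            sim = candidate
    return sim

examples = [0b101101, 0b100111, 0b111100]
print(bin(random_similarity(examples, seed=0)))
```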

Speaker: Goncharova E.F. (HSE)
Title: Increasing the efficiency of packet classifiers based on closed descriptions
Abstract: The efficient representation of packet classifiers has become a significant challenge due to the rapid growth of data kept and processed in forwarding tables. In our work, we propose two novel techniques for reducing the size of forwarding tables, both in length and in width, by eliminating redundant bits and unreachable actions. We consider the task of forwarding a packet to the correct destination as a multinomial classification task; thus, reducing the forwarding table size corresponds to a feature selection procedure with slight modifications. The presented techniques are based on computing closed descriptions and building decision trees for classification. The main challenge in applying decision trees to this task is processing overlapping rules. To overcome it, we propose to apply the JSM hypothesis technique to eliminate the unreachable actions assigned to overlapping rules. The experiments were performed on data generated by the ClassBench software. The proposed approaches result in a significant decrease in the number of bits that need to be included in the forwarding tables as features.
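A minimal sketch of the width-reduction idea, under the strong simplifying assumption of exact-match rules with no wildcards or priorities: a bit position whose value is constant across all rules cannot influence classification and can be dropped from the table.

```python
def redundant_bit_positions(rules):
    """Rules are (bitstring, action) pairs of equal width. Returns positions
    whose bit never varies across the rule set; dropping those columns leaves
    the classification of matching packets unchanged in this simplified model."""
    width = len(rules[0][0])
    return [i for i in range(width)
            if len({bits[i] for bits, _ in rules}) == 1]

rules = [("1010", "fwd"), ("1000", "drop"), ("1011", "fwd")]
print(redundant_bit_positions(rules))  # [0, 1]: columns 0 and 1 are constant
```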

Speaker: Neznanov A.A. (HSE)
Title: Ontology-Based Learning and an FCA-Based Approach to Automatic Item Generation
Abstract: In the report, we discuss the modern state of methodologies, methods, and tools for automatic item generation (AIG) for knowledge assessment. The most interesting questions are the problem of developing specific learning ontologies for AIG optimization, the role of the Semantic Web and other knowledge technology stacks in education, and the implementation of adaptive and personalized learning.
We propose a specific ontology consisting of a thesaurus, scale definitions, term distinctions, and formal contexts linked with thesaurus nodes and scales. Such an ontology helps to generate test items and provides adaptive assessment of learning outcomes on several levels. We also discuss the architecture, requirements, and basic components of a distributed software system for supporting the adaptive learning process.
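A hypothetical, minimal example of thesaurus-driven item generation in the spirit of the proposal (the data and field names are invented): distractors for a multiple-choice item are drawn from sibling terms of the correct answer in the thesaurus.

```python
import random

def generate_item(term, definition, thesaurus, n_distractors=3):
    """Build one multiple-choice item: the stem asks for the term matching a
    definition; distractors are sibling terms from the same thesaurus node."""
    siblings = [t for t in thesaurus[term] if t != term]
    options = random.sample(siblings, n_distractors) + [term]
    random.shuffle(options)
    return {"stem": f"Which term matches: {definition}?",
            "options": options, "answer": term}

thesaurus = {"closure": ["closure", "interior", "boundary", "hull", "kernel"]}
print(generate_item("closure", "the smallest closed set containing a given set", thesaurus))
```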