This article considers a combination of two modern aspects of game development: (i) the impact of high-quality graphics and virtual reality (VR) on user immersion, i.e., the player's belief, through their own eyes, in the realness of in-game events; (ii) modeling the behavior of an enemy under automatic computer control (a bot) so that it reacts similarly to human players. We consider the First-Person Shooter (FPS) genre, which simulates the experience of combat. We describe some tricks for overcoming simulator sickness in a shooter with respect to the Oculus Rift and HTC Vive headsets. We created a bot model that strongly reduces the conflict and uncertainty in matching human expectations. The bot passes a VR-game Turing test with an 80% threshold of believable human-like behavior.
Lambek calculus is a logical foundation of categorial grammar, a linguistic paradigm of grammar as logic and parsing as deduction. Pentus (2010) gave a polynomial-time algorithm for determining provability of bounded-depth formulas in L*, the Lambek calculus with empty antecedents allowed. Pentus’ algorithm is based on tabularisation of proof nets. The Lambek calculus with brackets is a conservative extension of the Lambek calculus with bracket modalities, suitable for modeling syntactical domains. In this paper we give an algorithm for provability in Lb*, the Lambek calculus with brackets allowing empty antecedents. Our algorithm runs in polynomial time when both the formula depth and the bracket nesting depth are bounded. It combines a Pentus-style tabularisation of proof nets with an automata-theoretic treatment of bracketing.
Activities such as clinical investigations (CIs) or financial processes are subject to regulations that ensure quality of results and avoid negative consequences. Regulations may be imposed by multiple governmental agencies as well as by institutional policies and protocols. Due to the complexity of both regulations and activities, there is great potential for violations due to human error, misunderstanding, or even intent. Executable formal models of regulations, protocols, and activities can form the foundation for automated assistants to aid planning, monitoring, and compliance checking. We propose a model based on multiset rewriting where time is discrete and is specified by timestamps attached to facts. Actions, as well as initial, goal, and critical states, may be constrained by means of relative time constraints. Moreover, actions may have non-deterministic effects, i.e., different applications of the same action may have different outcomes. We present a formal semantics of our model based on focused proofs of linear logic with definitions. We also determine the computational complexity of various planning problems. The plan compliance problem, for example, is the problem of finding a plan that leads from an initial state to a desired goal state without reaching any undesired critical state. We consider all actions to be balanced, i.e., their pre- and post-conditions have the same number of facts. Under this assumption on actions, we show that the plan compliance problem is PSPACE-complete when all actions have only deterministic effects and EXPTIME-complete when actions may have non-deterministic effects. Finally, we show that the restrictions on the form of actions and time constraints in the specification of our model are necessary for decidability of the planning problems.
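The timed multiset-rewriting core can be illustrated with a small Python sketch, assuming facts are (predicate, timestamp) pairs and a balanced action consumes and produces equally many facts. The fact and action names below (sample collection and analysis, in the spirit of a clinical investigation) are hypothetical, and relative time constraints are omitted:

```python
from collections import Counter

def apply_action(state, consumed, produced, now):
    """One step of timed multiset rewriting. Facts are (predicate,
    timestamp) pairs; the `consumed` facts must be present in the
    state, and each predicate in `produced` is added stamped with the
    current time `now`. The action is balanced: it removes and adds
    the same number of facts."""
    assert len(consumed) == len(produced), "actions must be balanced"
    state = Counter(state)
    for fact in consumed:
        if state[fact] == 0:
            raise ValueError("action not applicable: missing %r" % (fact,))
        state[fact] -= 1
    for pred in produced:
        state[(pred, now)] += 1
    return +state  # unary + drops zero counts

# at time 2, analyze a sample that was collected at time 0
s0 = Counter({("sample_collected", 0): 1})
s1 = apply_action(s0, [("sample_collected", 0)], ["sample_analyzed"], now=2)
```

A monitor built on this step function could check, e.g., that the gap between the two timestamps satisfies a relative time constraint before declaring a plan compliant.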
In this study we address the problem of automated word stress detection in Russian using character-level models and no part-of-speech taggers. We use a simple bidirectional RNN with LSTM nodes and achieve an accuracy of 90% or higher. We experiment with two training datasets and show that using data from an annotated corpus is much more efficient than using a dictionary, since it allows us to take into account word frequencies and the morphological context of the word.
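The character-level problem setup can be sketched as a per-character tagging task. The encoding below (character plus a vowel flag as input, a binary tag marking the stressed position as target) is a hypothetical illustration; the paper's exact input scheme is not given here:

```python
RU_VOWELS = set("аеёиоуыэюя")

def encode(word, stress_index):
    """Encode a word for a character-level tagger: per-character
    features (the character and a vowel indicator) and binary tags
    marking the stressed position, which a bidirectional RNN would
    then predict from both left and right context."""
    chars = list(word.lower())
    features = [(c, c in RU_VOWELS) for c in chars]
    tags = [int(i == stress_index) for i in range(len(chars))]
    return features, tags

# "молоко" is stressed on the final "о" (index 5)
feats, tags = encode("молоко", 5)
```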
This paper presents recent results of studies applying sequence-based pattern structures and emerging patterns to the analysis of demographic sequences in Russia. The study is performed on data covering 11 generations, from 1930 to 1984, for the panel of three waves of the Russian part of the Generation and Gender Survey, which took place in 2004, 2007, and 2011. The main goal is to develop methods for extracting emerging patterns (EPs) under the following restriction: the obtained patterns must be (closed) frequent contiguous prefixes of the input sequences. This constraint was required by demographers for proper interpretation and understanding of the early life-course events that lead to adulthood. To fulfil this requirement we used modified FP-trees based on pattern structures of contiguous prefixes. After extracting EPs we use the CAEP (Classification by Aggregating Emerging Patterns) classifier to predict the gender of respondents from their demographic sequences of first life-course events. The best results in terms of TPR-FPR have been obtained for large values of the minimum growth-rate parameter (with some objects left unclassified).
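The two central notions, contiguous-prefix patterns and the growth rate that makes a pattern "emerging" for one class, can be sketched as follows. The event names below are illustrative, not taken from the survey data:

```python
def contiguous_prefixes(seq):
    """All non-empty contiguous prefixes of an event sequence, the
    pattern class required by the demographers."""
    return [tuple(seq[:i]) for i in range(1, len(seq) + 1)]

def growth_rate(prefix, seqs_a, seqs_b):
    """Ratio of the prefix's support in class A to its support in
    class B; prefixes with a high ratio are emerging patterns for A,
    which CAEP then aggregates into a classification score."""
    def support(seqs):
        return sum(tuple(s[:len(prefix)]) == prefix for s in seqs) / len(seqs)
    sup_a, sup_b = support(seqs_a), support(seqs_b)
    if sup_b == 0:
        return float("inf") if sup_a > 0 else 0.0
    return sup_a / sup_b

women = [("education", "work", "marriage"), ("education", "marriage")]
men = [("education", "work"), ("work", "marriage")]
rate = growth_rate(("education",), women, men)
```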
This book constitutes the refereed proceedings of the XVIII International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2016, held in Ershovo, Moscow, Russia, in October 2016.
The 16 revised full papers presented together with one invited talk and two keynote papers were carefully reviewed and selected from 57 submissions. The papers are organized in topical sections on semantic modeling in data intensive domains; knowledge and learning management; text mining; data infrastructures in astrophysics; data analysis; research infrastructures; and a position paper.
In this paper, we discuss a semi-dense depth map interpolation method based on a convolutional neural network. We propose a compact neural network architecture with a loss function defined as the Euclidean distance in the feature space of the VGG-16 network used for deep visual recognition. The suggested solution shows state-of-the-art performance on synthetic and real datasets. Together with LSD-SLAM, the method could be used to provide a dense depth map for interaction purposes, such as creating a first-person game in AR/MR, or as a perception module for an autonomous vehicle.
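The loss formulation, a Euclidean distance taken in a feature space rather than in pixel space, can be sketched generically. Here `phi` stands in for a fixed feature extractor (VGG-16 layers in the paper); in this minimal illustration it is just any callable returning a flat list of activations:

```python
import math

def feature_space_loss(phi, predicted, target):
    """Euclidean distance between the feature representations of a
    predicted and a reference depth map: ||phi(pred) - phi(target)||_2.
    `phi` is a stand-in for a frozen feature extractor."""
    fp, ft = phi(predicted), phi(target)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp, ft)))

# with the identity as a trivial "feature extractor":
loss = feature_space_loss(lambda x: x, [1.0, 2.0, 2.0], [1.0, 0.0, 2.0])
```

With the identity the loss reduces to ordinary pixel-wise L2; the point of the method is that a learned `phi` makes the distance sensitive to perceptual structure rather than raw intensities.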
Dualization of a monotone Boolean function on a finite lattice can be represented as transforming the set of its minimal 1-values into the set of its maximal 0-values. In this paper we consider finite lattices given by ordered sets of their meet and join irreducibles (i.e., as the concept lattice of a formal context). We show that in this case dualization is equivalent to the enumeration of so-called minimal hypotheses. In contrast to the usual dualization setting, where a lattice is given by the ordered set of its elements, dualization in this case is shown to be impossible in output-polynomial time unless P = NP. However, if the lattice is distributive, dualization is shown to be possible in subexponential time.
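On the Boolean lattice {0,1}^n the minimal-1-values to maximal-0-values transformation can be made concrete by brute force. This exponential enumeration only illustrates the transformation; it is not one of the algorithms discussed in the paper:

```python
from itertools import product

def dualize(minimal_ones, n):
    """Given the minimal 1-values of a monotone Boolean function on
    {0,1}^n, enumerate its maximal 0-values. A point is a 1-value iff
    it dominates some minimal 1-value componentwise."""
    def f(x):
        return any(all(x[i] >= m[i] for i in range(n)) for m in minimal_ones)
    zeros = [x for x in product((0, 1), repeat=n) if not f(x)]
    def strictly_below(x, y):
        return x != y and all(a <= b for a, b in zip(x, y))
    return sorted(x for x in zeros
                  if not any(strictly_below(x, y) for y in zeros))

# f = (a AND b) OR (b AND c): minimal 1-values are 110 and 011
maximal_zeros = dualize([(1, 1, 0), (0, 1, 1)], 3)
```

For this f the maximal 0-values are 010 and 101, i.e., the complements of the minimal transversals {b} and {a, c} of the hypergraph {ab, bc}, which is the classical hypergraph-dualization view of the problem.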
A scalable method for mining graph patterns stable under subsampling is proposed. The existing measures of subsample stability and robustness are not antimonotonic according to the definitions known so far. We study a broader notion of antimonotonicity for graph patterns, under which measures of subsample stability become antimonotonic. We then propose gSOFIA, an algorithm for mining the most subsample-stable graph patterns. Experiments on numerous graph datasets show that gSOFIA is very efficient at discovering subsample-stable graph patterns.
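Antimonotonicity is the property that makes pruning safe: whenever pattern p is a subpattern of q, the measure of p must be at least that of q, so once a pattern scores too low, none of its extensions can recover. A toy check on itemset patterns (gSOFIA itself works on graph patterns, and its exact measure is not reproduced here):

```python
def is_antimonotonic(measure, patterns, is_subpattern):
    """Verify antimonotonicity of a quality measure on a finite set of
    patterns: p subpattern of q implies measure(p) >= measure(q)."""
    return all(measure(p) >= measure(q)
               for p in patterns for q in patterns
               if is_subpattern(p, q))

# support of itemsets in a small transaction database is antimonotonic
data = [frozenset("abc"), frozenset("ab"), frozenset("ac")]
support = lambda p: sum(p <= t for t in data)
patterns = [frozenset("a"), frozenset("ab"), frozenset("abc")]
ok = is_antimonotonic(support, patterns, lambda p, q: p < q)
```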
In this paper, we present an application of formal concept analysis (FCA), showing how it can help construct a semantic map for a lexical typological study. We show that FCA captures typological regularities, so that concept lattices automatically built from linguistic data turn out to be even more informative than traditional semantic maps. While this informativeness sometimes makes a map unreadable, in other cases it opens up new perspectives in the field, such as the opportunity to analyze the relationship between direct and figurative lexical meanings.
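A concept lattice is built from a binary context of objects (here, lexical items) and attributes (semantic features). A brute-force enumeration, adequate only for the small contexts of a typological study, can be sketched as follows; the lexical items and features below are made up:

```python
from itertools import chain, combinations

def formal_concepts(context):
    """All formal concepts (extent, intent) of a binary context given
    as {object: set_of_attributes}. A concept pairs a maximal set of
    objects with the maximal set of attributes they all share."""
    objects = set(context)
    attributes = set().union(*context.values())
    def extent(B):
        return frozenset(g for g in objects if B <= context[g])
    def intent(A):
        if not A:
            return frozenset(attributes)
        return frozenset.intersection(*(frozenset(context[g]) for g in A))
    subsets = chain.from_iterable(
        combinations(attributes, r) for r in range(len(attributes) + 1))
    return {(extent(set(B)), intent(extent(set(B)))) for B in subsets}

ctx = {"burn": {"heat"}, "blaze": {"heat", "light"}, "shine": {"light"}}
concepts = formal_concepts(ctx)
```

The resulting ordered set of concepts is the lattice that plays the role of the semantic map: "blaze", sharing both features, sits below both the {heat} and {light} concepts.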
Analysis of polyadic data (for example, multi-way tensors and n-ary relations) is becoming an increasingly popular task. While several data mining techniques exist for (numeric) dyadic contexts, their extensions to the triadic case are not obvious, if possible at all. In this work, we develop the ideas of Formal Concept Analysis for processing three-dimensional data, namely so-called OAC-triclustering (from Object, Attribute, Condition). Among several known methods, we selected the most effective one and used it to propose an algorithm, NOAC-triclustering, for mining triclusters of similar values in real-valued triadic contexts. For comparison, we also propose a second, simple algorithm, Tri-K-Means, based on the K-Means clustering algorithm. The experimental part demonstrates the application of both algorithms to computer-generated and real-world data.
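"Triclusters of similar values" can be made concrete with a simple quality criterion: the variance of the cell values inside a candidate (objects, attributes, conditions) box of the real-valued triadic context. This is an illustrative measure, not necessarily the one optimised by NOAC-triclustering, and the example data are made up:

```python
def tricluster_variance(data, objs, attrs, conds):
    """Spread of the values inside a candidate tricluster (X, Y, Z) of
    a real-valued triadic context stored as a sparse dict keyed by
    (object, attribute, condition); low variance means the cells hold
    similar values."""
    values = [data[(o, a, c)]
              for o in objs for a in attrs for c in conds
              if (o, a, c) in data]
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

cube = {("u1", "likes", "movies"): 5.0, ("u1", "likes", "books"): 5.0,
        ("u2", "likes", "movies"): 4.0, ("u2", "likes", "books"): 6.0}
v = tricluster_variance(cube, ["u1", "u2"], ["likes"], ["movies", "books"])
```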
We propose a new mathematical growth model of the primary tumor and primary metastases which may help to improve the prediction accuracy of the breast cancer process, using an original mathematical model referred to as CoM-IV and corresponding software. The CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast patient survival; c) have higher average prediction accuracy than other tools; d) can improve forecasts of breast cancer survival and facilitate optimisation of diagnostic tests. The CoM-IV enables us, for the first time, to predict the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarising, the CoM-IV: a) correctly describes the growth of the primary tumor and primary distant metastases of stage IV (T1-4N0-3M1) disease with (N1-3) or without (N0) regional metastases in lymph nodes; b) facilitates the understanding of the period of appearance and manifestation of primary metastases.
PRIMARY THERAPY OF EARLY BREAST CANCER
Evidence, Controversies, Consensus
15th St.Gallen International Breast Cancer Conference
Vienna, Austria, 15–18 March 2017
This paper is devoted to mathematical modelling of the progression and stages of breast cancer. The Consolidated mathematical growth Model of the primary tumor (PT) and secondary distant metastases (MTS) in patients with lymph node MTS (Stage III), CoM-III, is proposed as a new research tool. The CoM-III rests on an exponential tumor growth model and consists of a system of determinate nonlinear and linear equations. The CoM-III correctly describes primary tumor growth (parameter T) and distant metastases growth (parameters M and N). The CoM-III model and predictive software: a) detect different growth periods of the primary tumor and distant metastases in patients with lymph node MTS; b) forecast the period of appearance of distant metastases in patients with lymph node MTS; c) have higher average prediction accuracy than other tools; d) can improve forecasts of breast cancer survival and facilitate optimisation of diagnostic tests. The CoM-III enables us, for the first time, to predict the whole natural history of PT and secondary distant MTS growth of patients with/without lymph node MTS at each stage, relying only on PT sizes.
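The exponential growth law the model rests on can be stated in one line: V(t) = V0 · 2^(t/DT), where DT is the tumor volume doubling time. A generic single-equation sketch (the CoM-III itself couples a system of such equations for PT and MTS, and its parameter values are not reproduced here):

```python
def tumor_volume(v0, doubling_time, t):
    """Exponential growth law V(t) = v0 * 2**(t / doubling_time);
    v0 is the initial volume and doubling_time the tumor volume
    doubling time, in the same unit as t."""
    return v0 * 2.0 ** (t / doubling_time)

# after two doubling times the volume has quadrupled
v = tumor_volume(1.0, 100.0, 200.0)
```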
Analysis of polyadic data (for example, n-ary relations) is becoming a popular task. While several data mining techniques exist for dyadic contexts, their extensions to the triadic case are not obvious. In this work, we develop the ideas of Formal Concept Analysis for processing three-dimensional data, namely OAC-triclustering (from Object, Attribute, Condition). We consider several similar methods, study the relations between their outputs, and organize them into an ordered structure.
Nowadays, mind mapping is a rather popular educational technique. Like other learning tools, mind maps have become part of modern educational trends such as blended learning and computer-supported collaborative learning. Many mind-mapping software tools have been adapted to teaching and learning routines such as educational content delivery or assessment. This paper focuses on additional automatic evaluation of digital educational mind maps obtained from existing assessment procedures. A review of automatic graders that support the evaluation process demonstrates that some systematic work has been done on automated grading by comparing students' mind maps with a template, but many questions about automatic mind-map scoring by retrieving data from a scored mind map are still open. This paper introduces an automatic grader for educational mind maps (AGEMM), which acts like a teacher's assistant and calculates several quantitative metrics. AGEMM is implemented as a web service and interacts with mind maps prepared in the Coggle web service through its API. AGEMM has been applied in a bachelor course. The results demonstrate that scores from AGEMM may be transformed into scales or criterial levels used for evaluation. Moreover, applying AGEMM revealed several problems and showed directions for development, which we discuss in the paper.
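A mind map retrieved through an API is essentially a rooted tree, and quantitative metrics of the kind a grader computes can be sketched directly on it. The metrics and topic names below are illustrative; the actual AGEMM metric set is not specified here:

```python
def mind_map_metrics(children, root):
    """A few quantitative metrics on a mind map given as an adjacency
    dict {node: [child, ...]}: total node count, maximum depth, and
    mean branching factor of internal (non-leaf) nodes."""
    def depth(node):
        kids = children.get(node, [])
        return 1 + max((depth(k) for k in kids), default=0)
    nodes, stack = [root], [root]
    while stack:
        for kid in children.get(stack.pop(), []):
            nodes.append(kid)
            stack.append(kid)
    internal = [n for n in nodes if children.get(n)]
    mean_branching = sum(len(children[n]) for n in internal) / len(internal)
    return {"nodes": len(nodes), "depth": depth(root),
            "branching": mean_branching}

topic_map = {"Statistics": ["Descriptive", "Inferential"],
             "Inferential": ["Hypothesis testing"]}
m = mind_map_metrics(topic_map, "Statistics")
```

Such raw metrics would then be mapped onto the scales or criterial levels a course rubric uses.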
The appearance and spread of massive open online courses (MOOCs) started an era of restructuring in modern-day education. The questions of quality assurance and of integrating MOOCs with traditional educational processes are quite popular. Even though quality assurance is now a richly studied area, the question of how to estimate the change in quality of a repeated course whose content has been modified is still open. The paper reports on an in-progress investigation that introduces the concept of dynamic course quality (DCQ) and addresses its evaluation by adapting the statistical framework of randomized controlled trials to educational data obtained from the Stepik MOOC platform. So far, the framework has been applied to an Introductory Statistics course. Limitations of the method are reported, and requirements for educational data sources are elaborated.
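The randomized-controlled-trials style of comparison, asking whether outcomes of a modified course run differ from the original run by more than chance, can be sketched with a permutation test. This is an illustrative statistic; the paper's exact DCQ evaluation procedure is not specified here:

```python
import random

def mean_diff_p_value(old_scores, new_scores, n_perm=500, seed=0):
    """Permutation test for the difference in mean outcomes between
    two runs of a course: the fraction of random relabelings whose
    absolute mean difference is at least the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(new_scores) / len(new_scores)
                   - sum(old_scores) / len(old_scores))
    pooled = list(old_scores) + list(new_scores)
    k = len(old_scores)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:k], pooled[k:]
        if abs(sum(b) / len(b) - sum(a) / len(a)) >= observed:
            hits += 1
    return hits / n_perm
```

A small p-value is evidence that the content modification actually changed course quality, rather than the difference arising from cohort noise.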