The paper focuses on the problem of multi-class classification of composite (piecewise-regular) objects (e.g., speech signals, complex images, etc.). We propose a mathematical model that represents a composite object as a sequence of independent segments, each represented as a random sample of independent identically distributed feature vectors. Based on this model and a statistical approach, we reduce the task to a problem of composite hypothesis testing of segment homogeneity. Several nearest-neighbor criteria are implemented; for some of them, well-known special cases (e.g., the Kullback-Leibler minimum information discrimination principle and the probabilistic neural network) are highlighted. It is experimentally shown that the proposed approach improves accuracy compared with contemporary classifiers.
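As an illustration of the statistical approach above, the sketch below classifies a segment by the Kullback-Leibler minimum information discrimination principle: the segment's estimated feature distribution is compared against per-class reference distributions, and the class with minimal divergence wins. The histogram-based density estimate, the class models, and all parameter values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """Kullback-Leibler divergence between two discrete distributions."""
    p = p + eps  # smoothing to avoid log(0) and division by zero
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

def histogram(sample, bins, range_):
    """Estimate a segment's feature distribution as a normalized histogram."""
    h, _ = np.histogram(sample, bins=bins, range=range_)
    return h / h.sum()

def classify_segment(segment, class_models, bins=16, range_=(0.0, 1.0)):
    """Assign the segment to the class whose reference distribution
    minimizes the KL divergence from the segment's distribution."""
    p = histogram(segment, bins, range_)
    return min(class_models, key=lambda c: kl_divergence(p, class_models[c]))
```

In the paper's setting each segment of a composite object would be classified this way, and the segment-level decisions combined into an object-level decision.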
The problem of reconstruction of a word from a set of its subwords is considered. It is assumed that the set is generated by unit shifts of a fixed window along an unknown word. For the problem without constraints on the unknown word, a method of reconstruction is proposed based on the search for Euler paths or Euler cycles in a de Bruijn multidigraph. The search is based on symbolic multiplication of adjacency matrices with special operations of multiplication and addition of edge names. The method makes it possible to find the reconstructed words and the number of reconstructions. © 2014 Springer Science+Business Media New York.
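As a simplified illustration of this problem, the sketch below rebuilds a word from its length-k windows by finding an Eulerian path in the de Bruijn multidigraph on (k−1)-grams. It uses Hierholzer's algorithm rather than the symbolic adjacency-matrix multiplication described in the abstract, and it returns a single reconstruction rather than enumerating all of them and their number.

```python
from collections import defaultdict

def reconstruct(windows):
    """Reconstruct a word from its length-k windows (unit window shifts)
    via an Eulerian path in the de Bruijn multidigraph on (k-1)-grams."""
    graph = defaultdict(list)
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for w in windows:
        u, v = w[:-1], w[1:]  # each window is an edge between (k-1)-grams
        graph[u].append(v)
        out_deg[u] += 1
        in_deg[v] += 1
    # Euler path case: start where out-degree exceeds in-degree;
    # Euler cycle case: any node works.
    start = next((u for u in out_deg if out_deg[u] > in_deg[u]),
                 next(iter(graph)))
    stack, path = [start], []
    while stack:  # Hierholzer's algorithm
        u = stack[-1]
        if graph[u]:
            stack.append(graph[u].pop())
        else:
            path.append(stack.pop())
    path.reverse()
    return path[0] + ''.join(v[-1] for v in path[1:])
```

Note that when the multidigraph admits several Eulerian paths, the word is not uniquely reconstructible; this sketch returns one valid reconstruction.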
Process mining is a research area dealing with, inter alia, the construction of models of various types from event logs. Fuzzy maps are an example of such models produced by different process mining tools, such as ProM and Disco. We propose a new approach to mining fuzzy models based on representing logs as relational databases. Fast and efficient SQL queries over such logs are issued as part of a DPMine workflow model. The resulting datasets are processed and visualized by a special DPMine component tightly integrated with the VTMine modeling framework. The paper discusses the suggested approach in the context of customizing the VTMine framework with an embedded DPM engine.
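To illustrate the idea of querying an event log stored in a relational database, here is a minimal sketch using SQLite from Python. The schema and the query are hypothetical and are not the actual DPMine/VTMine log representation; the query computes directly-follows frequencies, the kind of aggregate from which fuzzy-map edges are derived.

```python
import sqlite3

# Hypothetical minimal event-log schema: case id, activity, position in trace.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (case_id TEXT, activity TEXT, pos INTEGER)")
events = [("c1", "A", 1), ("c1", "B", 2), ("c1", "C", 3),
          ("c2", "A", 1), ("c2", "C", 2)]
conn.executemany("INSERT INTO log VALUES (?, ?, ?)", events)

# Directly-follows frequencies via a self-join on consecutive positions.
rows = conn.execute("""
    SELECT a.activity, b.activity, COUNT(*) AS freq
    FROM log a JOIN log b
      ON a.case_id = b.case_id AND b.pos = a.pos + 1
    GROUP BY a.activity, b.activity
    ORDER BY a.activity, b.activity
""").fetchall()
print(rows)  # [('A', 'B', 1), ('A', 'C', 1), ('B', 'C', 1)]
```

Because the aggregation happens inside the database engine, only the small result set (not the raw log) needs to be handed to the downstream modeling component.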
Process-aware information systems (PAIS) are systems relying on processes, which involve human and software resources to achieve concrete goals. There is a need to develop approaches for modeling, analyzing, improving, and monitoring processes within PAIS. These approaches include process mining techniques used to discover process models from event logs, find deviations between logs and models, and analyze performance characteristics of processes. The representational bias (the way processes are modeled) plays an important role in process mining. The BPMN 2.0 (Business Process Model and Notation) standard is widely used and allows one to build conventional and understandable process models. In addition to the flat control-flow perspective, subprocesses, data flows, and resources can be integrated within one BPMN diagram. This makes BPMN very attractive for both process miners and business users. In this paper, we describe and justify robust control-flow conversion algorithms, which provide the basis for more advanced BPMN-based discovery and conformance checking algorithms. We believe that the results presented in this paper can be used for a wide variety of BPMN mining and conformance checking algorithms. We also provide metrics for the processes discovered before and after the conversion to BPMN structures. Cases in which the conversion algorithms produce more compact or more involved BPMN models in comparison with the initial models are identified.
Consideration was given to optimization of the queue control strategy in the M/G/1 queueing system, where the decision about continuing or stopping the admission of customers is made at the service completion instant of each customer in compliance with a distribution on the set of decisions depending on the number of customers remaining in the system. The mean specific income in the stationary mode was used as the efficiency criterion, and the set of permissible strategies coincided with the set of homogeneous randomized Markov strategies. It was proved that if there exists an optimal strategy, then it is degenerate and threshold with a single switching point; that is, if the number of customers in the system exceeds a certain level, then the admission of customers must be stopped, and otherwise it must be continued.
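A rough simulation sketch of such a threshold policy is given below, using exponential service times (the M/M/1 special case of M/G/1) and an assumed income structure of a fixed reward per served customer minus a linear holding cost; both the income structure and all parameter values are illustrative assumptions, not the paper's exact criterion.

```python
import random

def mean_income(threshold, lam=1.0, mu=1.5, reward=1.0, hold_cost=0.3,
                horizon=100000.0, seed=0):
    """Simulate a queue where, at each service completion, admission is
    stopped iff the number of customers left in the system exceeds the
    threshold. Returns the mean income per unit time."""
    rng = random.Random(seed)
    t, n, admitting, income = 0.0, 0, True, 0.0
    while t < horizon:
        rate = (lam if admitting else 0.0) + (mu if n > 0 else 0.0)
        if rate == 0.0:          # safety guard: empty and blocked
            admitting = True
            continue
        dt = rng.expovariate(rate)
        income -= hold_cost * n * dt   # holding cost accrues continuously
        t += dt
        if rng.random() < (lam if admitting else 0.0) / rate:
            n += 1                      # arrival admitted
        else:
            n -= 1                      # service completion
            income += reward
            admitting = n <= threshold  # threshold control decision
    return income / t
```

Sweeping `threshold` over a range and comparing the resulting mean incomes is one way to observe numerically the single-switching-point structure the paper establishes analytically.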
Process mining techniques aim to analyze and improve the conformance and performance of processes using event data. Process discovery is the most prominent process mining task: a process model is derived from an event log. The process model should be able to capture causalities, choices, concurrency, and loops. Process discovery is very challenging because of trade-offs between fitness, simplicity, precision, and generalization. Note that event logs typically hold only example behavior and cannot be assumed to be complete (to avoid overfitting). Dozens of process discovery techniques have been proposed. These use a wide range of approaches, e.g., language- or state-based regions, genetic mining, heuristics, expectation maximization, iterative log splitting, etc. When models or logs become too large for analysis, the event log may be automatically decomposed or traces may be clustered before discovery. Clustering and decomposition are done automatically, i.e., no additional information is used. This paper proposes a different approach where a localized event log is assumed. Events are localized by assigning a non-empty set of regions to each event. It is assumed that regions can only interact through shared events. Consider, for example, the mining of software systems. The events recorded typically refer explicitly to parts of the system (components, services, etc.). Currently, such information is ignored during discovery. However, references to system parts may be used to localize events. It is also possible to localize events in other application domains; e.g., communication events in an organization may refer to multiple departments (that may be seen as regions). This paper proposes a generic process discovery approach based on localized event logs. The approach has been implemented in ProM, and experimental results show that location information indeed helps to improve the quality of the discovered models.
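A minimal sketch of the localization idea: each event carries a non-empty set of regions, the log is projected onto one region, and a per-region directly-follows relation is computed. The log, the region names, and the use of directly-follows pairs are illustrative assumptions, not the ProM implementation.

```python
from collections import defaultdict

# A localized event log: each event is (activity, set of regions).
# Regions interact only through shared events such as "check".
log = [
    [("register", {"front"}), ("check", {"front", "back"}),
     ("archive", {"back"})],
    [("register", {"front"}), ("check", {"front", "back"}),
     ("notify", {"front"}), ("archive", {"back"})],
]

def project(log, region):
    """Keep only the events assigned to the given region, order preserved."""
    return [[a for a, regs in trace if region in regs] for trace in log]

def directly_follows(traces):
    """Directly-follows pair counts: raw material for per-region discovery."""
    pairs = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return dict(pairs)

front = project(log, "front")
print(directly_follows(front))
```

A discovery algorithm would then be run on each projected log independently, and the per-region models glued together along the shared events.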
Conditions for a sulfate method for the synthesis of a metastable modification, previously described as “Z-TiO2” (Dadachov, 2006), were found, and the stability region of this phase in the hydrolysis temperature–hydrolysis duration coordinates was determined. Investigation by a number of methods (X-ray powder diffraction, a differential dissolution method, thermogravimetry, IR spectroscopy, Raman spectroscopy) showed that the Z-phase is not a polymorph of TiO2 but a pseudo-polymorph of titanium dioxide hydrate. It was demonstrated that nanoparticles of the low-temperature Z-phase consist of a TiO2−x·mH2O core, the structure of which can be described as a superstructure in relation to anatase, and an amorphous shell containing TiO2−x (trace amount), OH, HSO4, and water. The average crystallite size depends on the ratio of the constituents.
This paper analyzes the responses to our survey, which covered users' basic information, social status, experience with social networking, and attitude towards social network-integrated e-health information systems. The survey findings show that social media users need special recommendation and guidance services, especially people located in urban centers who have busy schedules. These people prefer to receive recommendations for their minor health problems over having to go to the hospital or clinic and spend time waiting, perhaps even returning home without a proper consultation from a doctor. As a result, we propose an architecture for integrated social media analytics and e-health information systems. However, our findings, being the result of a controlled survey, raise issues such as respondent trust as well as security and privacy concerns relating to healthcare.
The use of multimedia technologies activates the teaching process, increases students' interest in the discipline being studied and the productivity of the educational process, and allows a deeper understanding of the teaching material. The new standards impose requirements not only on the quality of education, but also on the conditions that must be provided by the university. Therefore, most teachers have been trained and are ready to work with the new technology.
Climate Wikience is a desktop application for fast 3D visualization and analysis of retrospective climate reanalysis and Earth remote sensing data. Owing to several distinct features, an analyst may prefer it to other tools for certain tasks. The features include a rich collection of environmental variables readily available out of the box; one-click 3D visualization of any of them, regardless of their storage formats and coordinate systems; and tight R integration, through which the data are accessible in native R data types. Large volumes of data are stored on a computer cluster by ChronosServer, a new distributed file-based raster database. Only the small portions of data required for visualization and analysis are automatically delivered over the Internet to Climate Wikience in near real time. The paper surveys the data available from Climate Wikience, its visualization capabilities, and air pollution analysis as a case study. Climate Wikience is free for download at www.wikience.org.