Research group
The research group is led by Prof. Oleg Seleznjev at the Department of Mathematics and Mathematical Statistics.

Stochastic processes describe sequences of events governed by probabilistic laws. Over the past few decades there has been growing interest in stochastic modelling with random functions (stochastic processes and fields). Numerical and simulation studies offer new computational possibilities in domains such as statistical estimation, machine learning, data mining, function analysis and approximation, and related disciplines. Modern inference techniques have become effective in dealing with many of the complex stochastic structures that characterize real-life phenomena in physics, bioinformatics, biology, medicine, industry, finance, and networks.

Numerical analysis and simulation of random functions

In many applications involving random functions (stochastic processes and fields), only discrete information is available: values at discrete time or space points, or information aggregated over a number of intervals. Numerical analysis of random functions considers how well the obtained discretized results (or simulations) represent the initial continuous model, and studies approximation errors for various numerical techniques and classes of random functions. These interdisciplinary problems are investigated with methods from mathematical statistics, approximation theory, and computer science. For a single random function, various approximation methods (e.g., splines, or kriging in the earth sciences) and numerical problems (quadratures, stochastic differential equations in financial mathematics) are of interest.

One main aspect of this research is the smoothness, in a suitable sense, of the observed random functions, which is crucial for the success of function approximation techniques. The close relationship between the smoothness of a function and the best rate of its linear approximation is one of the basic ideas of conventional (deterministic) approximation theory, and we develop an analogous approach for random functions. The first aim of this research is to develop new techniques, and improve existing ones, for evaluating the approximation accuracy in numerical analysis of random functions; simulation techniques, too, are based on discretized data. The second aim is to apply these approximation results to stochastic modeling with controlled errors, that is, to obtain computational tools with exactly evaluated error bounds that control how well the results represent the original continuous (analog) model. The main application areas are financial mathematics, signal processing, and earth sciences.
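As a toy illustration of this setting (not the group's own method), the sketch below takes Brownian motion as the random function, reconstructs it from a few discrete samples by piecewise-linear interpolation, and estimates the mean integrated squared approximation error by Monte Carlo; for Brownian motion this error is known to decay like 1/n in the number of knots. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_path(t, rng):
    """One realization of standard Brownian motion at sorted points t, t[0] = 0."""
    dt = np.diff(t, prepend=0.0)
    return np.cumsum(rng.normal(0.0, np.sqrt(dt)))

def mean_sq_interp_error(n_knots, n_paths=2000, n_fine=1025):
    """Monte Carlo estimate of the mean integrated squared error of
    piecewise-linear interpolation of Brownian motion from n_knots samples."""
    t_fine = np.linspace(0.0, 1.0, n_fine)
    knots = np.linspace(0.0, 1.0, n_knots)
    idx = np.round(knots * (n_fine - 1)).astype(int)  # knots lie on the fine grid
    errs = []
    for _ in range(n_paths):
        x = brownian_path(t_fine, rng)
        approx = np.interp(t_fine, knots, x[idx])  # observe only the knot values
        errs.append(np.mean((x - approx) ** 2))
    return np.mean(errs)

# For Brownian motion the theoretical error is 1/(6(n-1)): halving the knot
# spacing roughly halves the mean squared error.
for n in (5, 9, 17, 33):
    print(n, mean_sq_interp_error(n))
```

The same experiment can be repeated with smoother processes (e.g., integrated Brownian motion), where the error decays faster, illustrating the smoothness/approximation-rate relationship described above.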

Stochastic modeling for large databases with uncertain data

A vast and growing share of the data stored in modern databases is uncertain, either because of essential randomness (e.g., in bioinformatics, medicine, telecommunication, economics) or because of incorrect, missing, or noisy entries (e.g., in official databases). Nevertheless, these data must still be used. This leads to qualitatively new interdisciplinary problems for both statistics and computer science: conventional database systems are designed for exact data, and new tools are needed to retrieve and evaluate information in uncertain data sets. This research aims to develop new statistical models and techniques for database computations with uncertain data. The results provide database researchers and designers with methods for building integrated database systems that work with such data, and can also be useful for related problems in bioinformatics, data mining, and the environmental and health sciences. The second aim is to study the related problems of quantization, compression, and approximation of random signals in databases. The results can be applied, for example, to problems in telecommunication and multimedia databases, financial mathematics, data mining, and bioinformatics.
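A minimal sketch of what a "database computation with uncertain data" can look like, under the simplifying assumption (ours, for illustration only) that each record's attribute is an independent discrete distribution: an aggregate query such as SUM then returns a distribution rather than a number, and questions like "what is the probability the total exceeds a threshold?" become answerable.

```python
from itertools import product

def expectation(dist):
    """Expected value of a discrete distribution given as {value: probability}."""
    return sum(v * p for v, p in dist.items())

def convolve(d1, d2):
    """Distribution of the sum of two independent uncertain attributes."""
    out = {}
    for (v1, p1), (v2, p2) in product(d1.items(), d2.items()):
        out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

# hypothetical uncertain records, e.g. a noisy sensor reading per row
records = [
    {10.0: 0.7, 12.0: 0.3},
    {5.0: 0.5, 6.0: 0.5},
]

# distribution of SUM over all records
total = records[0]
for r in records[1:]:
    total = convolve(total, r)

print("E[SUM] =", expectation(total))
print("P(SUM > 16) =", sum(p for v, p in total.items() if v > 16))
```

Exact convolution grows combinatorially with the number of records; real systems for uncertain data therefore rely on approximation or sampling, which is where the statistical techniques discussed above enter.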

Quantization of random processes and sequences

In digital signal processing, quantization means the discretization of signal values; it has been studied intensively over the past decades, and interest in quantization problems has been renewed by the widespread deployment of sensors of various types and by increased performance requirements. We consider scalar quantization of continuous-valued random signals and related statistical problems in an average-case setting. A general approach is proposed for the quantization and coding/compression of realizations of random processes and sequences. The goal of data compression is to reduce the data significantly while keeping the essential information (in signals, images) needed for a given application; run-length encoding (RLE) is one example. Asymptotic properties and the structure of an additive noise model are studied for uniform and non-uniform quantizers. The main distinction from conventional methods is that the correlation structure of the random process is exploited to evaluate the quantization rate (or memory capacity) needed, on average, for coding and archiving process realizations.
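The two ingredients mentioned above can be sketched as follows (an illustrative example with made-up parameters, not the group's actual procedure): a uniform midpoint scalar quantizer, whose error for a fine cell width behaves approximately like additive noise uniform on (-step/2, step/2) with mean square step²/12, and run-length encoding, which compresses well precisely because a strongly correlated (smooth) signal stays in the same quantizer cell for many consecutive samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_quantize(x, step):
    """Midpoint uniform scalar quantizer with cell width `step`."""
    return step * np.floor(x / step) + step / 2.0

def run_length_encode(levels):
    """RLE of a sequence of quantizer levels: list of [value, run length]."""
    runs = []
    for v in levels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# a smooth, strongly correlated random sequence: cumulative sum of small steps
x = np.cumsum(rng.normal(0.0, 0.01, size=1000))
q = uniform_quantize(x, step=0.1)

# additive-noise view: quantization error is bounded by step/2, and for a
# fine quantizer its mean square is approximately step**2 / 12
err = x - q
print("mean squared error:", np.mean(err**2), "  uniform-noise theory:", 0.1**2 / 12)

runs = run_length_encode(q.tolist())
print("samples:", len(x), " runs after RLE:", len(runs))
```

With an uncorrelated signal the run count would stay close to the sample count; it is the correlation structure of the process that makes the average rate after coding much smaller, which is the effect the research above quantifies.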