FCS Mathematics Seminar "When Normality Meets Automata: Dimension, Selection, and Randomization"

February 13, 2026, at 18:10

Speaker: Subin Pulari, Faculty of Computer Science, HSE University

Topic: When Normality Meets Automata: Dimension, Selection, and Randomization

Abstract:

Finite-state dimension—a finite-state analogue of classical Hausdorff dimension—quantifies the density of information in an infinite sequence over a finite alphabet, as measured using finite-state automata. The Schnorr–Stimm dichotomy theorem (1972) gives a key connection between dimension and normality: an infinite sequence has finite-state dimension 1 if and only if it is (Borel) normal.
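
For concreteness, these notions can be stated as follows (a sketch; the dimension formula below uses the block-entropy characterization due to Bourke, Hitchcock, and Vinodchandran rather than the original gale-based definition). A sequence $S$ over an alphabet $\Sigma$ with $|\Sigma| = b$ is Borel normal if every word $w \in \Sigma^*$ occurs in $S$ with limiting frequency $b^{-|w|}$. Finite-state dimension can then be expressed as

$$\dim_{\mathrm{FS}}(S) \;=\; \liminf_{\ell \to \infty} \frac{H_\ell(S)}{\ell},$$

where $H_\ell(S)$ is the base-$b$ empirical Shannon entropy of the distribution of length-$\ell$ blocks of $S$, normalized so that the uniform distribution on $\Sigma^\ell$ has entropy $\ell$. In this notation the Schnorr–Stimm dichotomy says that $\dim_{\mathrm{FS}}(S) = 1$ if and only if $S$ is normal.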

Many classical results about normal sequences—i.e., sequences having finite-state dimension 1—have been shown to be special cases of more general theorems concerning finite-state dimension. In 2012, Jack Lutz proposed the Normality–Dimension Thesis: every theorem about Borel normality is a special case of a theorem about finite-state dimension. In this talk, we revisit the thesis and review its validity in the context of known results involving finite-state dimension, including very recent progress in joint work with Bienvenu, Gimbert, and Nandakumar. We conclude this part by highlighting a broader program of questions and promising directions for further testing—and potentially extending—the Normality–Dimension Thesis.

Complementing this dimension-theoretic perspective, I will ask what changes when the automaton model itself is enriched: does giving a finite automaton access to internal randomness make it better at exploiting a normal sequence? In joint work with Bienvenu and Gimbert, we show that the answer is no: two classical automata-theoretic characterizations of normality, Agafonov's subsequence-selection theorem and the Schnorr–Stimm finite-state gambling theorem, extend from deterministic finite automata to probabilistic finite automata, even when the transition probabilities are arbitrary reals. Informally, even if the machine can flip coins while reading the stream, it still cannot systematically select a "less random" subsequence or turn normal digits into profit; normality remains indistinguishable to such bounded-memory predictors. We end by discussing the next natural question raised by these results: where is the computational threshold beyond which randomness inside the model does begin to add power?
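
To make the selection model concrete, here is a minimal Python sketch of deterministic Agafonov-style selection (my illustration, not code from the talk; the new results concern probabilistic automata, which this deliberately does not model). A DFA reads the stream, and a symbol is selected whenever the automaton is in a designated state just before reading it; Agafonov's theorem guarantees that the subsequence selected from a normal sequence is again normal. The helper names champernowne_binary and dfa_select are hypothetical.

from collections import Counter
from itertools import count

def champernowne_binary(n_bits):
    """First n_bits of the binary Champernowne word 0 1 10 11 100 ...,
    a classical example of a normal sequence over {0, 1}."""
    bits = []
    for i in count(1):
        bits.extend(format(i, "b"))
        if len(bits) >= n_bits:
            return bits[:n_bits]

def dfa_select(seq, delta, start, selecting):
    """Run a DFA (delta: (state, symbol) -> state) over seq and emit each
    symbol read while the current state lies in `selecting`."""
    out, state = [], start
    for sym in seq:
        if state in selecting:
            out.append(sym)
        state = delta[(state, sym)]
    return out

# Hypothetical 2-state DFA whose selecting state is reached after reading '1',
# i.e. it selects every symbol that immediately follows a '1'.
delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 0, (1, "1"): 1}
selected = dfa_select(champernowne_binary(200_000), delta, 0, {1})
freq = Counter(selected)
print({d: round(freq[d] / len(selected), 3) for d in "01"})

On a normal source, both digits should appear in the selected subsequence with frequency close to 1/2, which is what the printed frequencies illustrate on this prefix.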

Venue: Pokrovsky Boulevard 11, room R306.

Registration
