Real-world computers make mistakes, in the sense that once in a while an instruction is executed incorrectly, perhaps because of a corrupted disk. One could naively think that, given a maximum acceptable probability of an incorrect final result, this would impose a bound on the complexity of possible computations or require an exponential number of repetitions. However (and similarly to the central result of Shannon’s information theory), one can do much better, as was explained by Péter Gács in his mini-course in Marseilles. P. Gács’s slides can be found here.

Computers are modeled as probabilistic cellular automata: the new states (indexed by \mathbb Z^d) are independent conditioned on the old states, and each follows a law which is a fixed function of the old states in a neighborhood. These local transitions are assumed to be “noisy”, i.e., all states have positive probability.
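As a toy illustration of this definition (a hypothetical sketch, not Gács’s construction), here is a noisy one-dimensional majority-vote cellular automaton: each cell applies a fixed function of its neighborhood (majority), and the result is then flipped independently with probability \epsilon, so every local transition has positive probability.

```python
import random

def noisy_ca_step(state, eps, rng):
    """One synchronous update of a 1D majority-vote CA on a ring.

    Each new cell value is a fixed function (majority over the radius-1
    neighborhood) of the old state, then flipped independently with
    probability eps -- so all transitions have positive probability.
    """
    n = len(state)
    new = []
    for i in range(n):
        votes = state[i - 1] + state[i] + state[(i + 1) % n]
        bit = 1 if votes >= 2 else 0
        if rng.random() < eps:      # noise: incorrect transition w.p. eps
            bit ^= 1
        new.append(bit)
    return new

rng = random.Random(0)
state = [1] * 50                    # initial data: all ones
for _ in range(100):
    state = noisy_ca_step(state, eps=0.01, rng=rng)
print(sum(state), "cells still equal to 1 after 100 noisy steps")
```

With eps=0 the update is the deterministic majority rule; the noise term is what makes reliable computation non-trivial.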

Remark. This “noisiness” does not imply ergodicity (in the sense of Markov chains, i.e., the existence of a unique stationary probability measure), which is fortunate, since ergodicity implies that the initial data is eventually forgotten!

Question. When d=2, the voting model is expected to be non-ergodic, but there is a proof only for a continuous-time version with specific parameters that can be related to the Ising model.

It is observed that one-dimensional cellular automata cannot compute reliably in the presence of noise. In a way, there is not enough long-range communication for cells on the boundary of an erroneous island to tell on which side the island lies… The main result of the first lecture was the following:

Theorem (3D simulation with infinite redundancy). Let U be a one-dimensional cellular automaton. Then there are a 3-dimensional cellular automaton V and a constant C such that, if the local transitions are noisy but with sufficiently small error probability \epsilon, the probability that a given V-state at site (i,j,k) at time n differs from the U-state at site i at the same time is bounded by C\epsilon.

There is a version of this result with finite redundancy. Specifically, for a computation which requires space S, time T, and a maximal error probability \delta>0 at a given site, one can replace the infinite extension \mathbb Z\times\mathbb Z^2 by a finite one \{0,\dots,S\}\times\{0,\dots,N\}^2 where N=\mathcal O(\log(ST)/\delta).
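For concreteness (an illustrative computation only — the constant hidden in the \mathcal O is not specified by the theorem, so C=1 below is an arbitrary normalization), the redundancy N grows only logarithmically with the size ST of the computation:

```python
import math

def redundancy(S, T, delta, C=1.0):
    """Width N of the redundant cross-section, N = C * log(S*T) / delta,
    up to the unspecified constant C of the theorem."""
    return math.ceil(C * math.log(S * T) / delta)

# A large computation: space 10^6, time 10^9 steps.
for delta in (0.1, 0.01):
    N = redundancy(S=10**6, T=10**9, delta=delta)
    print(f"delta={delta}: N ~ {N}, total cells ~ S*N^2 = {10**6 * N**2:.1e}")
```

The point of the comparison with naive repetition is that the blow-up is polylogarithmic in ST rather than exponential in the target reliability.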

The proof relies on a decomposition of the occurrence of the “faults” into a hierarchical structure (at level 0, one has only distant single faults; at level 1, one also allows more distant small balls containing faults; etc.).

The second lecture, dealing with reliable computations in 2D, will be reviewed in the following post.



Yann OLLIVIER gave a talk on his interpretation of Ricci curvature which sheds much light on this classical notion and allows its generalization, e.g., to discrete spaces.

He is interested in the rôle played by positive Ricci curvature in the concentration of measure phenomenon discovered by Gromov in his generalization of Lévy’s theorem on 1-Lipschitz real functions on the unit N-sphere: if f:\mathbb S^N\to \mathbb R is a 1-Lipschitz function then, for all t\geq0:

\nu(\{x\in\mathbb S^N: |f(x)-\nu(f)|\geq t\}) \leq 2\exp\left(-t^2/(2D^2)\right)

where \nu is the natural measure on the sphere and D=1/\sqrt{N-1} is called the “observable diameter”.
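This inequality is easy to check empirically (an illustrative Monte Carlo experiment, not part of the talk): sample uniform points on \mathbb S^N by normalizing Gaussian vectors, take the 1-Lipschitz function f(x)=x_0 (whose mean is 0 by symmetry), and compare the observed tail with the bound.

```python
import math
import random

# Empirical check of Levy/Gromov concentration on S^N for f(x) = x_0.
rng = random.Random(42)
N = 200                               # the sphere S^N sits in R^(N+1)
samples = 2000
t = 0.2
hits = 0
for _ in range(samples):
    g = [rng.gauss(0.0, 1.0) for _ in range(N + 1)]
    norm = math.sqrt(sum(v * v for v in g))
    x0 = g[0] / norm                  # first coordinate of a uniform point
    if abs(x0) >= t:
        hits += 1

D = 1.0 / math.sqrt(N - 1)            # observable diameter
bound = 2.0 * math.exp(-t * t / (2.0 * D * D))
print(f"empirical tail {hits / samples:.4f} <= bound {bound:.4f}")
```

For N=200 the deviation probability at t=0.2 is already tiny; the Gaussian bound above is in fact far from tight for this particular f.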

He presented several striking applications (with coworkers) to some Markov chains, like the classical dynamics of the Ising model, as well as to the spectral gap of the Laplacian of a compact Riemannian manifold.

For a Riemannian manifold, the Ricci curvature Ric(v) along v\in T_xM is defined by:

\int_{T^1_xM} d(\exp_x(\epsilon w),\exp_y(\epsilon w')) \, dw = d(x,y) \left(1-Ric(v)\frac{\epsilon^2}{2N}+\mathcal O(\epsilon^3)\right)

where y=\exp_x(v) and w' is the parallel transport of w along the geodesic from x to y.

Thus positive Ricci curvature implies that “small balls are closer than their centers”.
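As a sanity check of the normalization (a worked example, using the standard fact that Ric(v)=N-1 for a unit tangent vector v on the round unit sphere \mathbb S^N):

```latex
% On the round unit sphere M = \mathbb S^N, Ric(v) = N-1 for unit v, hence
\int_{T^1_xM} d(\exp_x(\epsilon w),\exp_y(\epsilon w')) \, dw
  = d(x,y)\left(1-\frac{N-1}{2N}\,\epsilon^2+\mathcal O(\epsilon^3)\right),
% so the average distance between the \epsilon-balls around x and y is
% strictly smaller than d(x,y): small balls are closer than their centers.
```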

This point of view generalizes to an arbitrary Polish space X endowed with local measures B_x, x\in X. For any pair of distinct points x,y\in X, the Ricci curvature Ric(x,y) is defined by

d(B_x,B_y)=d(x,y)(1-Ric(x,y))

where d(\cdot,\cdot) on the left-hand side is the Wasserstein L^1 distance:

d(B_x,B_y):=\inf_\xi \int_{X\times X} d(x',y') \, \xi(dx'\,dy') = \sup_f\, B_x(f)-B_y(f)

with \xi ranging over the couplings of B_x,B_y and f ranging over the 1-Lipschitz functions.

A positive Ricci curvature space is a space such that \inf_{x\ne y}Ric(x,y)>0.

Example. \{0,1\}^N with the geodesic distance of the underlying graph (the Hamming distance) has positive Ricci curvature.

This applies to Markov chains with positive curvature, including the Ising model at sufficiently high temperature (higher than the critical temperature, maybe strictly higher). Some remarks of Dobrushin from the seventies may be rephrased in this geometric language.

It also allows estimating the spectral gap of some compact Riemannian manifolds. In particular, it gives a strengthening of Lichnerowicz’s theorem in the case of variable curvature.
