
Real-world computers make mistakes, in the sense that once in a while an instruction is executed incorrectly, perhaps because of a corrupted disk. One could naively think that, given a maximum acceptable probability of an incorrect final result, this would impose a bound on the complexity of possible computations, or would require an exponential number of repetitions. However (and similarly to the central result of Shannon’s information theory), one can do much better, as was explained by Péter Gács in his mini-course in Marseilles. P. Gács’s slides can be found here.

Computers are modeled as probabilistic cellular automata: the new states (indexed by \mathbb Z^d) are independent conditioned on the old states, and each follows a law which is a fixed function of the old states in a neighborhood. These local transitions are assumed to be “noisy”, i.e., all states have positive probability.
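To make the model concrete, here is a minimal Python sketch of one synchronous update of a noisy one-dimensional probabilistic cellular automaton (the particular local rule and noise form are my own illustrative choices, not taken from the lectures): each cell computes a deterministic function of its neighborhood, and the result is then corrupted independently with probability \epsilon.

import random

def noisy_pca_step(states, rule, num_states, eps):
    """One synchronous update of a 1D probabilistic cellular automaton.

    Each new state is a fixed function `rule` of the old states in the
    radius-1 neighborhood, then is replaced by a uniformly random state
    with probability eps (so every state has positive probability).
    Periodic boundary conditions stand in for \mathbb{Z}.
    """
    n = len(states)
    new = []
    for i in range(n):
        left, here, right = states[i - 1], states[i], states[(i + 1) % n]
        s = rule(left, here, right)          # deterministic local transition
        if random.random() < eps:            # independent noise at each site
            s = random.randrange(num_states)
        new.append(s)
    return new

# Example: majority rule on {0,1} with 1% noise (purely illustrative).
majority = lambda a, b, c: 1 if a + b + c >= 2 else 0
config = [random.randrange(2) for _ in range(100)]
for _ in range(50):
    config = noisy_pca_step(config, majority, 2, eps=0.01)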

Remark. This “noisiness” does not imply ergodicity (in the sense of Markov chains, i.e., that there is a unique stationary probability measure, to which the system converges from any initial condition), which is fortunate, since ergodicity implies that the initial data is eventually forgotten!

Question. When d=2, the voting model is expected to be non-ergodic, but a proof exists only for a continuous-time version with specific parameters that can be related to the Ising model.

It is observed that one-dimensional cellular automata cannot compute reliably in the presence of noise. In a way, there is not enough long-range communication for cells on the boundary of an erroneous island to tell on which side of them the island lies… The main result of the first lecture was the following:

Theorem (3D simulation with infinite redundancy). Let U be some one-dimensional cellular automaton. Then there is a 3-dimensional cellular automaton V and a constant C such that, if the local transitions are noisy but with sufficiently small error probability \epsilon, the probability that a given V-state at site (i,j,k) at time n differs from the U-state at site i at the same time is bounded by C\epsilon.
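One standard way to picture such redundancy (this is my gloss; the slides should be consulted for the actual construction) is to replicate each one-dimensional cell i over a plane \{i\}\times\mathbb Z^2 and to protect each plane with a self-correcting two-dimensional rule such as Toom’s north-east-center majority rule. Below is a minimal Python sketch of that rule under noise.

import random

def toom_step(grid, eps):
    """One update of Toom's NEC (north-east-center) majority rule with noise.

    Each cell takes the majority of itself, its northern neighbor and its
    eastern neighbor, then flips with probability eps.  Toom showed this
    rule keeps an initial all-0 or all-1 configuration readable forever
    when eps is small, which is what makes it useful for redundancy.
    """
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            c = grid[i][j]
            north = grid[(i - 1) % n][j]     # periodic boundary for the demo
            east = grid[i][(j + 1) % n]
            m = 1 if c + north + east >= 2 else 0
            new[i][j] = 1 - m if random.random() < eps else m
    return new

# A plane storing the bit 1 survives small noise for a long time.
n = 64
plane = [[1] * n for _ in range(n)]
for _ in range(1000):
    plane = toom_step(plane, eps=0.01)
ones = sum(sum(row) for row in plane)
print(f"fraction of 1s after 1000 noisy steps: {ones / n**2:.3f}")

The design point is that Toom’s rule is an “eroder”: in the absence of noise, any finite island of errors shrinks and disappears in finitely many steps, so small noise cannot accumulate.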

There is a version of this result with finite redundancy. Specifically, for a computation which requires space S and time T, and a maximal error probability \delta>0 at any given site, one can replace the infinite extension \mathbb Z\times\mathbb Z^2 by a finite one \{0,\dots,S\}\times\{0,\dots,N\}^2 where N=\mathcal O(\log(ST/\delta)).
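The shape of N can be guessed by a union bound (a heuristic of mine, not an argument from the slides): if a block of redundancy N fails with probability at most e^{-cN} for some constant c>0, then demanding that all ST space-time sites be correct with probability at least 1-\delta gives

ST\, e^{-cN} \le \delta
\quad\Longleftrightarrow\quad
N \ge \frac{1}{c}\,\log\frac{ST}{\delta},

i.e., N=\mathcal O(\log(ST/\delta)).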

The proof relies on a decomposition of the occurrences of the “faults” into a hierarchical structure (at level 0, one has only distant single faults; at level 1, one also allows more distant small balls containing faults; etc.).
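One can get a feel for this hierarchical bookkeeping with a toy one-dimensional clustering procedure (my own illustration, not Gács’s actual sparsity lemma): level-0 clusters are the individual faults, and at each level clusters closer than an exponentially growing distance threshold are merged; the decomposition succeeds when clusters at every level stay small and far apart.

def hierarchical_clusters(faults, base_gap=4, growth=4, max_level=10):
    """Group 1D fault positions into a hierarchy of clusters.

    At level k, two clusters are merged when they are closer than
    base_gap * growth**k.  Returns, for each level, the list of
    clusters as (leftmost fault, rightmost fault) intervals.
    Purely illustrative: the real proof uses a more careful notion
    of sparsity in space-time.
    """
    clusters = [(x, x) for x in sorted(faults)]
    levels = [clusters]
    for k in range(1, max_level + 1):
        gap = base_gap * growth ** k
        merged = []
        for lo, hi in clusters:
            if merged and lo - merged[-1][1] < gap:
                merged[-1] = (merged[-1][0], hi)   # absorb into previous cluster
            else:
                merged.append((lo, hi))
        clusters = merged
        levels.append(clusters)
        if len(clusters) == 1:
            break
    return levels

# Two bursts of faults far apart: single faults at level 0,
# two small clusters at the next levels, one cluster only much later.
for k, cl in enumerate(hierarchical_clusters([3, 5, 6, 500, 503])):
    print(f"level {k}: {cl}")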

The second lecture, dealing with reliable computations in 2D, will be reviewed in the following post.
