
Archive for the ‘Probability and Statistics’ Category

Yann Ollivier gave a talk on his interpretation of Ricci curvature, which sheds much light on this classical notion and allows its generalization, e.g., to discrete spaces.

He is interested in the rôle played by positive Ricci curvature in the concentration of measure phenomenon, discovered by Gromov in his generalization of Lévy’s theorem on 1-Lipschitz real functions on the unit N-sphere: if f:\mathbb S^N\to \mathbb R is a 1-Lipschitz function then, for all t\geq 0:

\nu(\{x\in\mathbb S^N: |f(x)-\nu(f)|\geq t\}) \leq 2\exp\left(-t^2/2D^2\right)

where \nu is the natural measure on the sphere and D=1/\sqrt{N-1} is called the “observable diameter”.
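A quick numerical illustration of this bound (my own sketch, not from the talk): sample uniform points on the sphere as normalized Gaussian vectors and take the 1-Lipschitz function f(x)=x_1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000          # sphere dimension: points live in R^{N+1}
samples = 20000
t = 0.1

# Uniform points on S^N: normalized standard Gaussian vectors.
g = rng.standard_normal((samples, N + 1))
points = g / np.linalg.norm(g, axis=1, keepdims=True)

# f(x) = x_1 is 1-Lipschitz (also for the geodesic metric, since
# geodesic distance dominates Euclidean distance).
f = points[:, 0]
empirical = np.mean(np.abs(f - f.mean()) >= t)

D = 1 / np.sqrt(N - 1)                    # observable diameter
bound = 2 * np.exp(-t**2 / (2 * D**2))    # concentration bound above
print(empirical, bound)
```

The empirical tail probability comes out well below the (non-sharp) Gaussian bound, as expected.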

He presented several striking applications (with coworkers) to some Markov chains, like the classical dynamics of the Ising model, as well as to the spectral gap of the Laplacian of a compact Riemannian manifold.

For a Riemannian manifold, the Ricci curvature Ric(v) along v\in T_xM is defined by:

\int_{T^1_xM} d(\exp_x(\epsilon w),\exp_y(\epsilon w')) \, dw = d(x,y) \left(1-Ric(v)\frac{\epsilon^2}{2N}+\mathcal O(\epsilon^3)\right)

where y=\exp_x(v), w' is the parallel transport of w from x to y, and dw is the normalized measure on the unit tangent sphere T^1_xM.

Thus positive Ricci curvature implies that “small balls are closer than their centers”.

This point of view generalizes to arbitrary metric Polish spaces X endowed with a family of probability measures B_x, x\in X (think of B_x as a small ball, or one step of a random walk, centered at x). For any pair of distinct points x,y\in X, the Ricci curvature Ric(x,y) is defined by

d(B_x,B_y)=d(x,y)(1-Ric(x,y))

where d(\cdot,\cdot) on the left-hand side is the L^1 Wasserstein distance:

d(B_x,B_y):=\inf_\xi \int_{X\times X} d(x',y') \, \xi(dx'\,dy') = \sup_f \left(B_x(f)-B_y(f)\right)

with \xi ranging over the couplings of B_x,B_y and f ranging over the 1-Lipschitz functions.
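In one dimension this infimum reduces to the L^1 distance between cumulative distribution functions, and scipy computes it directly; a minimal sketch (not from the talk):

```python
from scipy.stats import wasserstein_distance

# mu = (delta_0 + delta_2)/2, nu = delta_1: the optimal coupling moves
# each half of mu's mass a distance 1, so the Wasserstein L^1 distance is 1.
d = wasserstein_distance([0.0, 2.0], [1.0], u_weights=[0.5, 0.5], v_weights=[1.0])
print(d)  # 1.0
```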

A positive Ricci curvature space is a space such that \inf_{x\ne y}Ric(x,y)>0.

Example. \{0,1\}^N with the geodesic distance from the underlying graph has positive Ricci curvature.
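A small numerical check of this example (my own sketch, not from the talk): take B_x to be one step of the lazy simple random walk (stay put with probability 1/2, otherwise flip a uniformly chosen coordinate) and solve the transport linear program with scipy. Between neighbours one then finds Ric = 1/N.

```python
from scipy.optimize import linprog

def lazy_step(x, N):
    """Measure after one step of the lazy walk on {0,1}^N from x:
    stay put with probability 1/2, otherwise flip a uniform coordinate."""
    m = {x: 0.5}
    for i in range(N):
        y = x[:i] + (1 - x[i],) + x[i + 1:]
        m[y] = m.get(y, 0.0) + 0.5 / N
    return m

def wasserstein1(mu, nu):
    """W1 between finitely supported measures under Hamming distance,
    solved as the optimal-transport linear program."""
    xs, ys = list(mu), list(nu)
    n, m = len(xs), len(ys)
    cost = [sum(a != b for a, b in zip(p, q)) for p in xs for q in ys]
    A_eq, b_eq = [], []
    for i in range(n):                 # row marginals must equal mu
        row = [0.0] * (n * m)
        row[i * m:(i + 1) * m] = [1.0] * m
        A_eq.append(row); b_eq.append(mu[xs[i]])
    for j in range(m):                 # column marginals must equal nu
        col = [0.0] * (n * m)
        col[j::m] = [1.0] * n
        A_eq.append(col); b_eq.append(nu[ys[j]])
    return linprog(cost, A_eq=A_eq, b_eq=b_eq).fun

N = 4
x = (0,) * N
y = (1,) + (0,) * (N - 1)              # a neighbour of x, so d(x, y) = 1
kappa = 1 - wasserstein1(lazy_step(x, N), lazy_step(y, N))
print(kappa)  # 1/N = 0.25 for the lazy walk
```

Note that positivity depends on the choice of B_x: the non-lazy walk on the hypercube is bipartite and gives curvature 0 between neighbours.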

This applies to Markov chains with positive curvature, including the Ising model at sufficiently high temperature (higher than the critical temperature, maybe strictly higher). Some remarks of Dobrushin from the seventies may be rephrased in this geometric language.

It also allows estimating the spectral gap for some compact Riemannian manifolds. In particular, it gives a strengthening of Lichnerowicz’s theorem in the case of variable curvature.


The cumulative distribution function of a random vector X=(X_1,\dots,X_N) is: F(x_1,\dots,x_N):=\mathbb P(X_1\leq x_1,\dots,X_N\leq x_N).

The copula of X is a function C:[0,1]^N\to[0,1] such that: F(x_1,\dots,x_N)=C(F_1(x_1),\dots, F_N(x_N)) where F, resp. F_i, is the distribution function of X, resp. X_i. It is unique if each variable X_i is continuous (atomless law).

Theorem (Sklar). A function C:[0,1]^N\to[0,1] is the copula of some random vector with values in \mathbb R^N if and only if the following properties are satisfied for all i: (i) C(u_1,\dots,u_{i-1},0,u_{i+1},\dots,u_N)=0; (ii) C(1,\dots,1,u_i,1,\dots,1)=u_i; (iii) \sum_{t\in\{1,2\}^N} (-1)^{\sum_j t_j} C(u_1^{t_1},\dots,u_N^{t_N})\geq 0 whenever u_i^1\leq u_i^2 for all i.

Remark. A function C:[0,1]^N\to[0,1] is a copula iff it is the (restriction to [0,1]^N of the) distribution function of a random vector (U_1,\dots,U_N) where each U_i is uniform over [0,1].
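This remark gives a practical recipe (my own illustration, with an assumed correlation parameter rho): push a random vector through its marginal distribution functions to obtain a sample from its copula. For a bivariate Gaussian, C(1/2,1/2) is the orthant probability 1/4+\arcsin(\rho)/(2\pi), which the empirical copula should reproduce.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]
X = rng.multivariate_normal([0.0, 0.0], cov, size=50000)

# Probability integral transform: U_i = F_i(X_i) is uniform on [0,1],
# and (U_1, U_2) is distributed according to the copula of X.
U = norm.cdf(X)

# Empirical check of C(u1, u2) = P(U_1 <= u1, U_2 <= u2) at (1/2, 1/2).
C_emp = np.mean((U[:, 0] <= 0.5) & (U[:, 1] <= 0.5))
C_exact = 0.25 + np.arcsin(rho) / (2 * np.pi)   # Gaussian orthant probability
print(C_emp, C_exact)
```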

(X_1,\dots,X_N) has independent components iff it admits the copula C(u_1,\dots,u_N)=u_1\dots u_N.

For a uniform random variable U on [0,1], the vector (U,\dots,U) admits the copula C(u_1,\dots,u_N)=\min(u_1,\dots,u_N).

Fréchet bounds: Any copula C satisfies:

\left(\sum_i u_i-N+1\right)^+ \leq C(u_1,\dots,u_N) \leq \min(u_1,\dots,u_N)

The left hand side is itself a copula only for N=2.
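The failure for N=3 can be checked directly (my own sketch): the inclusion-exclusion sum of condition (iii) above, applied to the lower bound W(u)=(\sum_i u_i-N+1)^+ on the box [1/2,1]^3, comes out negative, so W assigns negative mass to that box and cannot be a copula.

```python
from itertools import product

def W(u):
    """Frechet lower bound (sum u_i - N + 1)^+ with N = len(u)."""
    return max(sum(u) - len(u) + 1, 0.0)

def box_mass(C, lo, hi):
    """Inclusion-exclusion sum over the corners of the box [lo, hi]:
    this 'C-volume' must be nonnegative for C to be a copula."""
    N = len(lo)
    total = 0.0
    for eps in product((0, 1), repeat=N):
        corner = tuple(hi[i] if e else lo[i] for i, e in enumerate(eps))
        total += (-1) ** (N - sum(eps)) * C(corner)
    return total

print(box_mass(W, (0.5,) * 3, (1.0,) * 3))  # -0.5: negative, not a copula for N=3
print(box_mass(W, (0.5,) * 2, (1.0,) * 2))  # 0.0: nonnegative, consistent with N=2
```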

The restriction of a copula to fewer variables is again a copula. There is no known general way to extend a copula to more variables.

See Wikipedia.
