
Annali di Matematica (2013) 192:447-473 DOI 10.1007/s10231-011-0232-z

Two stochastic models of a random walk in the U(n)-spherical duals of U(n + 1)

F. A. Grünbaum · I. Pacharoni · J. Tirao

Received: 6 September 2011 / Accepted: 2 November 2011 / Published online: 18 November 2011 © Fondazione Annali di Matematica Pura ed Applicata and Springer-Verlag 2011

Abstract The random walk to be considered takes place in the S-spherical dual of the group U(n + 1), for a fixed finite dimensional irreducible representation S of U(n). The transition matrix comes from the three-term recursion relation satisfied by a sequence of matrix valued orthogonal polynomials built up from the irreducible spherical functions of type S of SU(n + 1). One of the stochastic models is an urn model and the other is a Young diagram model.

Keywords Matrix valued spherical functions • Matrix orthogonal polynomials • Random walks • Urn model • Young diagram model

Mathematics Subject Classification (2000) 22E45 • 33C45 • 60G99 • 60J35

1 Introduction

Around 1770 D. Bernoulli studied a model for the exchange of heat between two bodies. This model can also be seen as a description of the diffusion of a pair of incompressible gases between two containers. This model was independently analyzed by S. Laplace around 1810, see the references in [2]. Another model of similar characteristics was introduced by P. and T. Ehrenfest in 1907 in connection with the controversies surrounding the work of L. Boltzmann in the kinetic theory of gases dealing with reversibility and convergence to equilibrium. Boltzmann had apparently deduced his H-theorem dictating convergence to equilibrium starting from the time reversible equations of Newton. For a nice account of this, see [8]. Both of these models are instances of discrete time Markov chains with fairly explicit tridiagonal one-step transition probability matrices which are obtained by considering carefully the underlying stochastic mechanism that connects the state of the system at two consecutive values of time.

F. A. Grünbaum
Department of Mathematics, University of California, Berkeley, CA 94705, USA
e-mail: grunbaum@math.berkeley.edu

I. Pacharoni · J. Tirao (B)
CIEM-FaMAF, Universidad Nacional de Córdoba, 5000 Córdoba, Argentina
e-mail: tirao@famaf.unc.edu.ar

I. Pacharoni
e-mail: pacharon@famaf.unc.edu.ar

The second model features two urns, I and II, that share a total of N balls. The state of the system at time n is the number of balls in urn I. Each ball has a different label from the set 1, 2, . . . , N. At time n, a number j in the set 1, 2, . . . , N is chosen with equal probabilities and the ball with this label is moved from the urn where it sits to the other urn. This gives the state of the system at time n + 1. Writing down the one-step transition probability matrix is now a matter of counting carefully.

While it had been possible to obtain interesting answers for these two models for quite some time, it is only much more recently that some very nice connections have been noticed between these models and some basic sets of discrete orthogonal polynomials, namely the Krawtchouk and the dual Hahn polynomials. Moreover, although there are many ways of arriving at these polynomials, it is relevant to mention here that they can be realized as the "spherical functions" for certain finite bihomogeneous spaces. A very good reference for this material is [16]. We stress the remarkable fact that these two models of old vintage and clear physical significance can be solved in terms of the simplest of all hypergeometric functions, namely 2 F1 and 3 F2.

As many readers certainly know, many of the classical special functions of mathematical physics, such as the Legendre, the Hermite and the Laguerre polynomials, could have been obtained for the first time as spherical functions for certain symmetric spaces. A good basic reference here is [19]. The way that things developed historically is, of course, completely different.

The interplay between important physical problems and certain tools that arise naturally in group representation theory constitutes the theme of this paper. The situation described here is the reverse of what has been discussed above for the Bernoulli-Laplace and the Ehrenfest models: we will go from group representation theory to some concrete models that might be of some physical interest. We will start from a matrix that is obtained from group representation theory and try to build a model that goes along with it. The models constructed here are certainly not the only possible ones. More natural ones might be lurking around.

In a series of papers including [3-6,10-14,17,18], one considers matrix valued spherical functions associated with a pair (G, K), arriving at sequences of matrix valued polynomials of one real variable satisfying a three-term recursion relation whose semi-infinite block tridiagonal matrix is stochastic, i.e. the entries are non-negative and the sum of the elements in any row is 1. This matrix depends on a number of free parameters that have a very definite group theoretic meaning. The important point is that the tools developed in the papers just mentioned allow one to give explicit expressions, in terms of some definite integrals, of all the entries of any power of the original matrix. This means that if one could think of a nice Markov chain with this matrix as its one-step transition probability matrix, one would have an explicit form for the entries of the n-step transition probability matrix. Many readers will recognize that this is exactly what Karlin and McGregor, see [9], proposed as a way of exploiting orthogonal polynomials and the role they play in the spectral analysis of certain finite or semi-infinite tridiagonal matrices. The method advocated in [9] starts with a so-called birth-and-death process whose one-step tridiagonal transition matrix is easily constructed from the given model, and one has to look for the corresponding spectral information: the eigenfunctions and the spectral measure. Here, we travel this road in the opposite direction in a more elaborate set-up.

Fig. 1 U(n + 1)(k), n = 1, k1 = 3

The relation between matrix valued orthogonal polynomials, block tridiagonal matrices, and Quasi-Birth and Death processes was first exploited independently in [1,7], as well as in later papers by these authors.

We will consider several random walks whose configuration spaces are subsets of U(n + 1)(k), the so-called k-spherical dual of U(n + 1), and whose one-step transition matrices come from the stochastic matrix that appears in [10,15], see also [14]. The dual of U(n + 1) is the set of all equivalence classes of finite dimensional irreducible representations of U(n + 1). These equivalence classes are parametrized by the (n + 1)-tuples of integers m = (m_1, . . . , m_{n+1}) subject to the conditions m_1 ≥ · · · ≥ m_{n+1}.

If k = (k_1, . . . , k_n) is an irreducible representation of U(n), the k-spherical dual of U(n + 1) is the subset U(n + 1)(k) of the dual of U(n + 1) consisting of the representations of U(n + 1) whose restriction to U(n) contains the representation k. Then it is well known, see [19], that U(n + 1)(k) corresponds to the set of all m's as above that satisfy the extra constraints

m_i ≥ k_i ≥ m_{i+1}, for all i = 1, . . . , n.   (1)

In other words, U(n + 1)(k) can be visualized as the subset of all points m of the integral lattice Z^{n+1} in the set

[k_1, ∞) × [k_2, k_1] × · · · × [k_n, k_{n−1}] × (−∞, k_n].

An example is given in the figure above (Fig. 1).
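The lattice description above is easy to check mechanically. The following Python sketch is ours, not part of the paper; it enumerates a finite window of U(n + 1)(k) for n = 1 and k = (3), the situation of Fig. 1. The function and variable names are our own.

```python
def in_spherical_dual(m, k):
    """Interlacing conditions (1): m_i >= k_i >= m_{i+1} for i = 1, ..., n."""
    n = len(k)
    return all(m[i] >= k[i] >= m[i + 1] for i in range(n))

# n = 1, k = (3): points (m1, m2) with m1 >= 3 >= m2, truncating the two
# unbounded directions of [k1, oo) x (-oo, k1] to a finite window.
k = (3,)
window = [(m1, m2) for m1 in range(3, 8) for m2 in range(-2, 4)]
points = [m for m in window if in_spherical_dual(m, k)]
print(len(points))  # every point of this 5 x 6 window satisfies the constraints
```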

We can now state more precisely the point of this paper: starting from the stochastic matrix M that appears in [10,15], we describe a random mechanism that gives rise to a Markov chain whose state space is the subset of U(n + 1)(k) of all m ∈ U(n + 1)(k) such that s_m = s_k and k_n ≥ 0 (s_m = m_1 + · · · + m_{n+1}, s_k = k_1 + · · · + k_n), and whose one-step transition matrix coincides with the one we started from. The construction in [3,12] deals with the case of (SU(3), U(2)), but in [10,13] this was extended to the case of (SU(n + 1), U(n)).

One step of the Markov evolution will consist of two substeps taken in succession. In the first substep, one of the values of m_i increases by one, subject to the constraints (1). In the second substep, one of the new values of our m_i's decreases by one, again subject to the same constraints. Thus, from the configuration m, one could for instance go to m − e_i + e_j, or one could stay put at m. We use the notation e_i for the vector with its ith component equal to 1 and all the others equal to 0. Any state has a total of at most n(n + 1) + 1 positions where it can move in one complete step of our process consisting of two simpler steps. It should be kept in mind that the two successive simpler steps can end up with our random walker in the initial state. We will analyze in detail the simpler substeps that constitute one full step of our process. This will take up most of the analysis in the next sections.
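The two-substep mechanism can be sketched as follows. This is our own minimal simulation: it uses uniform choices among the allowed moves as a placeholder, whereas the chain studied in this paper uses the group-theoretic probabilities of Sect. 2.

```python
import random

def interlaces(m, k):
    """The constraints (1): m_i >= k_i >= m_{i+1}."""
    return all(m[i] >= k[i] >= m[i + 1] for i in range(len(k)))

def one_step(m, k, rng):
    """First substep: some m_i increases by one; second substep: some m_i
    decreases by one; both subject to the constraints (1)."""
    def bump(c, i, d):
        return c[:i] + (c[i] + d,) + c[i + 1:]
    up = [i for i in range(len(m)) if interlaces(bump(m, i, +1), k)]
    mid = bump(m, rng.choice(up), +1)
    down = [j for j in range(len(m)) if interlaces(bump(mid, j, -1), k)]
    return bump(mid, rng.choice(down), -1)

rng = random.Random(0)
m, k = (5, 3, 1), (4, 2)          # n = 2: 5 >= 4 >= 3 >= 2 >= 1
for _ in range(100):
    m = one_step(m, k, rng)
assert interlaces(m, k) and sum(m) == 9   # s_m is preserved by a full step
```

Note that the second substep may undo the first, so the walker can end a full step where it started, in accordance with the text.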

We now describe the contents of the paper.

In Sect. 2, we collect the necessary material to state and explain a three-term recursion relation (with matrix coefficients) for a sequence of matrix valued orthogonal polynomials, built up from irreducible spherical functions of a fixed type associated with the pair (SU(n + 1), U(n)). This should help the reader make the connection between [10,12] and the present paper.

Fig. 2 m = (6, 4, 4, 3)

In Sect. 3, we construct a factorization of the stochastic matrix that defines the three-term recursion relation for the sequence of matrix valued orthogonal polynomials given in the previous section. This factorization into two stochastic matrices leads to the two substeps mentioned above.

Before starting the analysis of our general urn model in Sect. 5 for one of the substeps, we describe in detail in Sect. 4 an urn model for n = 2.

The definition of the stochastic matrix M alluded to above, as well as its factorization, makes sense for any m ∈ U(n + 1)(k).

To each configuration m_1 ≥ m_2 ≥ · · · ≥ m_n ≥ 0 of n integer numbers, we associate its Young diagram, a combinatorial object which has m_1 boxes in the first row, m_2 boxes in the second row, and so on down to the last row which has m_n boxes. For example, the Young diagram associated with the configuration 6 ≥ 4 ≥ 4 ≥ 3 is given in Fig. 2.
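For concreteness, here is a small rendering helper (ours, not the paper's) that prints the Young diagram of a configuration as rows of boxes:

```python
def young_diagram(m):
    """Rows of boxes: m1 boxes in the first row, m2 in the second, and so on."""
    return "\n".join("[]" * mi for mi in m if mi > 0)

print(young_diagram((6, 4, 4, 3)))   # the diagram of Fig. 2
```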

Young diagrams and their relatives the Young tableaux are very useful in representation theory. They provide a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties. In particular, Young diagrams are in one-to-one correspondence with the irreducible representations of the symmetric group over the complex numbers and the irreducible polynomial representations of the general linear groups. They were introduced by Alfred Young in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory and applications were further developed by many mathematicians and there are numerous and interesting applications, beyond representation theory, in combinatorics and algebraic geometry.

If we consider the subset of all m ∈ U(n + 1)(k) such that m_{n+1} ≥ 0, it is natural to represent such a state of our Markov chain by its Young diagram, see Sect. 6. Then in the last two sections, we describe a random mechanism based on Young diagrams that gives rise to a random walk in the set of all Young diagrams of 2n + 1 rows whose 2j-th row has k_j boxes, 1 ≤ j ≤ n, and whose transition matrix is M1, see (31).

2 Spherical functions of (SU(n + 1), U(n))

Let G be a locally compact unimodular group, and let K be a compact subgroup of G. Let K̂ denote the set of all equivalence classes of complex finite dimensional irreducible representations of K; for each S ∈ K̂, let ξ_S denote the character of S, d(S) the degree of S, i.e., the dimension of any representation in the class S, and χ_S = d(S)ξ_S. We choose the Haar measure dk on K normalized by ∫_K dk = 1. We shall denote by V a finite dimensional vector space over the field C of complex numbers and by End(V) the space of all linear transformations of V into V.

A spherical function Φ on G of type S ∈ K̂ is a continuous function on G with values in End(V) such that

i) Φ(e) = I (I = identity transformation).

ii) Φ(x)Φ(y) = ∫_K χ_S(k^{−1})Φ(xky) dk, for all x, y ∈ G.

If Φ : G → End(V) is a spherical function of type S, then Φ(kgk′) = Φ(k)Φ(g)Φ(k′), for all k, k′ ∈ K, g ∈ G, and k ↦ Φ(k) is a representation of K such that any irreducible subrepresentation belongs to S.

Spherical functions of type S arise in a natural way upon considering representations of G. If g ↦ U(g) is a continuous representation of G, say on a finite dimensional vector space E, then

P(S) = ∫_K χ_S(k^{−1})U(k) dk

is a projection of E onto P(S)E = E(S). The function Φ : G → End(E(S)) defined by

Φ(g)a = P(S)U(g)a, g ∈ G, a ∈ E(S)

is a spherical function of type S. In fact, if a ∈ E(S), we have

Φ(x)Φ(y)a = P(S)U(x)P(S)U(y)a = ∫_K χ_S(k^{−1}) P(S)U(x)U(k)U(y)a dk

= (∫_K χ_S(k^{−1})Φ(xky) dk) a.

If the representation g ↦ U(g) is irreducible, then the associated spherical function Φ is also irreducible. Conversely, any irreducible spherical function on a compact group G arises in this way from a finite dimensional irreducible representation of G.

The aim of this section is to collect the necessary material to state and explain a three-term recursion relation for a sequence of matrix valued orthogonal polynomials, built up from irreducible spherical functions of the same type associated with the pair (SU(n + 1), S(U(n) x U(1))).

The irreducible finite dimensional representations of SU(n + 1) are restrictions of irreducible representations of U(n + 1), which are parameterized by (n + 1)-tuples of integers

m = (m_1, m_2, . . . , m_{n+1})

such that m_1 ≥ m_2 ≥ · · · ≥ m_{n+1}.

Different representations of U(n + 1) can restrict to the same representation of G = SU(n + 1). In fact, the representations m and p of U(n + 1) restrict to the same representation of SU(n + 1) if and only if m_i = p_i + j for all i = 1, . . . , n + 1 and some j ∈ Z.

The closed subgroup K = S(U(n) × U(1)) of G is isomorphic to U(n); hence, its finite dimensional irreducible representations are parameterized by the n-tuples of integers

k = (k_1, k_2, . . . , k_n)

subject to the conditions k_1 ≥ k_2 ≥ · · · ≥ k_n.

Let k be an irreducible finite dimensional representation of U(n). Then k is a subrepresentation of m if and only if the coefficients k_i satisfy the interlacing property

m_i ≥ k_i ≥ m_{i+1}, for all i = 1, . . . , n.

Moreover, if k is a subrepresentation of m, it appears only once. (See [20]).

The representation space V_k of k is a subspace of the representation space V_m of m, and it is also K-stable. In fact, if A ∈ U(n), a = (det A)^{−1} and v ∈ V_k, we have

( A 0 ; 0 a ) · v = a^{s_m} ( a^{−1}A 0 ; 0 1 ) · v = a^{s_m − s_k} ( A 0 ; 0 1 ) · v,

where s_m = m_1 + · · · + m_{n+1} and s_k = k_1 + · · · + k_n. This means that the representation of K on V_k obtained from m by restriction is parameterized by

(k_1 + s_k − s_m, . . . , k_n + s_k − s_m).   (2)

Let Φ^{m,k} be the spherical function associated with the representation m of G and to the subrepresentation k of K. Then (2) says that the K-type of Φ^{m,k} is k + (s_k − s_m)(1, . . . , 1).

Proposition 2.1 The spherical functions Φ^{m,k} and Φ^{m′,k′} of the pair (G, K) are equivalent if and only if m′ = m + j (1, . . . , 1) and k′ = k + j (1, . . . , 1).

Proof The spherical functions Φ^{m,k} and Φ^{m′,k′} are equivalent if and only if m and m′ are equivalent and the K-types of both spherical functions are the same, see the discussion in p. 85 of [17]. We know that m ∼ m′ if and only if

m′ = m + j (1, . . . , 1) for some j ∈ Z.

Besides, the K-types are the same if and only if

k_i + s_k − s_m = k′_i + s_{k′} − s_{m′} for all i = 1, . . . , n.

Therefore, k′ = k + p(1, . . . , 1), and now it is easy to see that p = j. □

The standard representation of U(n + 1) on C^{n+1} is irreducible and its highest weight is (1, 0, . . . , 0). Similarly, the representation of U(n + 1) on the dual of C^{n+1} is irreducible and its highest weight is (0, . . . , 0, −1). Therefore, we have that

C^{n+1} = V_{(1,0,...,0)} and (C^{n+1})* = V_{(0,...,0,−1)}.

For any irreducible representation m of U(n + 1), the tensor products V_m ⊗ C^{n+1} and V_m ⊗ (C^{n+1})* decompose as direct sums of U(n + 1)-irreducible representations in the following way

V_m ⊗ C^{n+1} ≃ V_{m+e_1} ⊕ V_{m+e_2} ⊕ · · · ⊕ V_{m+e_{n+1}},   (3)

V_m ⊗ (C^{n+1})* ≃ V_{m−e_1} ⊕ V_{m−e_2} ⊕ · · · ⊕ V_{m−e_{n+1}},   (4)

where {e_1, . . . , e_{n+1}} is the canonical basis of C^{n+1}, see [20].

Remark The irreducible modules on the right-hand side of (3) and (4) whose parameters (m′_1, m′_2, . . . , m′_{n+1}) do not satisfy the conditions m′_1 ≥ m′_2 ≥ · · · ≥ m′_{n+1} have to be omitted.

Starting from (3) and (4), the following theorem is proved in [10].

Theorem 2.2 Let φ and ψ be, respectively, the one-dimensional spherical functions associated with the standard representation of G and its dual. Then

φ(g)Φ^{m,k}(g) = Σ_{i=1}^{n+1} a_i^2(m, k) Φ^{m+e_i,k}(g),

ψ(g)Φ^{m,k}(g) = Σ_{i=1}^{n+1} b_i^2(m, k) Φ^{m−e_i,k}(g).

The constants a_i(m, k) and b_i(m, k) are given by

a_i^2(m, k) = | ∏_{j=1}^{n} (k_j − m_i − j + i − 1) / ∏_{j≠i} (m_j − m_i − j + i) |,   (5)

b_i^2(m, k) = | ∏_{j=1}^{n} (k_j − m_i − j + i) / ∏_{j≠i} (m_j − m_i − j + i) |.

Moreover

Σ_{i=1}^{n+1} a_i^2(m, k) = Σ_{i=1}^{n+1} b_i^2(m, k) = 1.   (6)
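The coefficients in (5) and the identity (6) can be checked with exact rational arithmetic. The following Python sketch is ours (0-based indices, whereas the paper's i, j run from 1; the absolute values reproduce the explicit n = 2 formulas of Sect. 4):

```python
from fractions import Fraction
from math import prod

def a2(m, k, i):
    """a_i^2(m, k) of (5); the index i is 0-based here."""
    n = len(k)
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) - 1 for j in range(n))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(n + 1) if j != i)
    return abs(Fraction(num, den))

def b2(m, k, i):
    """b_i^2(m, k) of (5): the same products without the extra '- 1'."""
    n = len(k)
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) for j in range(n))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(n + 1) if j != i)
    return abs(Fraction(num, den))

m, k = (3, 2, 0), (2, 1)               # m1 >= k1 >= m2 >= k2 >= m3
assert [a2(m, k, i) for i in range(3)] == [Fraction(4, 5), 0, Fraction(1, 5)]
assert sum(a2(m, k, i) for i in range(3)) == 1   # identity (6)
assert sum(b2(m, k, i) for i in range(3)) == 1
```

Note how a_2^2 vanishes here because m_2 = k_1: the move m → m + e_2 would break the interlacing conditions (1).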

Our Lie group G has the following polar decomposition G = KAK, where the abelian subgroup A of G consists of all matrices of the form

a(θ) = ( cos θ 0 sin θ ; 0 I_{n−1} 0 ; −sin θ 0 cos θ ),   θ ∈ R.   (7)

(Here, I_{n−1} denotes the identity matrix of size n − 1).

Since an irreducible spherical function Φ of G of type S satisfies Φ(kgk′) = Φ(k)Φ(g)Φ(k′) for all k, k′ ∈ K and g ∈ G, and Φ(k) is an irreducible representation of K in the class S, it follows that Φ is determined by its restriction to A and its K-type. Hence, from now on, we shall consider its restriction to A.

Let M be the group consisting of all elements of the form

m = ( 1 0 0 ; 0 B 0 ; 0 0 1 ),   B ∈ U(n − 1).

Thus, M is isomorphic to U(n − 1) and its finite dimensional irreducible representations are parameterized by the (n − 1)-tuples of integers

t = (t_1, t_2, . . . , t_{n−1})

such that t_1 ≥ t_2 ≥ · · · ≥ t_{n−1}.

If a ∈ A, then Φ^{m,k}(a) commutes with Φ^{m,k}(m) for all m ∈ M. In fact, we have

Φ^{m,k}(a)Φ^{m,k}(m) = Φ^{m,k}(am) = Φ^{m,k}(ma) = Φ^{m,k}(m)Φ^{m,k}(a).

The representation of U(n) in V_k ⊂ V_m, k = (k_1, . . . , k_n), restricted to U(n − 1) decomposes as the following direct sum

V_k = ⊕_t V_t,   (8)

where the sum is over all the representations t = (t_1, . . . , t_{n−1}) ∈ M̂ such that the coefficients of t interlace the coefficients of k, that is k_i ≥ t_i ≥ k_{i+1}, for all i = 1, . . . , n − 1. Since each V_t ⊂ V_k appears only once, by Schur's Lemma, it follows that Φ^{m,k}(a)|_{V_t} = φ_t^{m,k}(a) Id|_{V_t}, where φ_t^{m,k}(a) ∈ C for all a ∈ A.

By using Proposition 2.1, given a spherical function Φ^{m,k} we can assume that s_k − s_m = 0. In such a case, the K-type of Φ^{m,k} is k, see (2). Now it is easy to see that if (m, k) is one such pair, then

m = m(w, r) = (w + k_1, r_1 + k_2, . . . , r_{n−1} + k_n, −(w + r_1 + · · · + r_{n−1})),   (9)

where 0 ≤ w, k_n ≥ −(w + r_1 + · · · + r_{n−1}) and 0 ≤ r_i ≤ k_i − k_{i+1} for i = 1, . . . , n − 1. Thus if we assume w ≥ max{0, −k_n} and 0 ≤ r_i ≤ k_i − k_{i+1} for i = 1, . . . , n − 1, all the conditions are satisfied.

We observe that the representations t of M appearing in the right-hand side of (8) are of the form t = r + k′, where k′ = (k_2, . . . , k_n) and r is in the following set

Ω = {r = (r_1, . . . , r_{n−1}) : 0 ≤ r_i ≤ k_i − k_{i+1}}.

In particular, the number of M-modules in the decomposition of V_k is

N = ∏_{i=1}^{n−1} (k_i − k_{i+1} + 1).
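Both the set Ω and the count N are straightforward to compute; here is a Python sketch (ours; names are our own):

```python
from itertools import product
from math import prod

def omega(k):
    """Omega = { r = (r_1, ..., r_{n-1}) : 0 <= r_i <= k_i - k_{i+1} }."""
    return list(product(*(range(k[i] - k[i + 1] + 1) for i in range(len(k) - 1))))

k = (5, 3, 3, 1)                                      # a sample K-type, n = 4
N = prod(k[i] - k[i + 1] + 1 for i in range(len(k) - 1))
assert len(omega(k)) == N == 3 * 1 * 3
```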

We will identify Φ^{m,k}(a) with the column vector (Φ_r^{m,k}(a))_{r∈Ω} of N complex valued functions indexed by Ω, where Φ_r^{m,k}(a) = φ_{r+k′}^{m,k}(a), a ∈ A.

From now on, we fix k ∈ K̂ and take m = m(w, r) as in (9) for all w ≥ max{0, −k_n} and r ∈ Ω. Also, in the open subset {a(θ) ∈ A : 0 < θ < π/2} of A, we introduce the coordinate t = cos²(θ) and define on the open interval (0, 1) the complex valued functions F_{r,s}(w, t) = Φ_s^{m(w,r),k}(a(θ)) and the corresponding matrix function

F(w, t) = (F_{r,s}(w, t))_{(r,s)∈Ω×Ω}.

For each w ≥ max{0, −k_n}, we also define the following matrices of type Ω × Ω

A_w = ((A_w)_{r,s}),   B_w = ((B_w)_{r,s}),   C_w = ((C_w)_{r,s}),   (10)

with entries

(A_w)_{r,s} = { a_{n+1}^2(m(w, r)) b_1^2(m(w, r) + e_{n+1})   if s = r,
                a_{j+1}^2(m(w, r)) b_1^2(m(w, r) + e_{j+1})   if s = r + e_j,
                0   otherwise,

(B_w)_{r,s} = { Σ_{1≤j≤n+1} a_j^2(m(w, r)) b_j^2(m(w, r) + e_j)   if s = r,
                a_{j+1}^2(m(w, r)) b_{n+1}^2(m(w, r) + e_{j+1})   if s = r + e_j,
                a_{n+1}^2(m(w, r)) b_{j+1}^2(m(w, r) + e_{n+1})   if s = r − e_j,
                a_{j+1}^2(m(w, r)) b_{i+1}^2(m(w, r) + e_{j+1})   if s = r + e_j − e_i,
                0   otherwise,

(C_w)_{r,s} = { a_1^2(m(w, r)) b_{n+1}^2(m(w, r) + e_1)   if s = r,
                a_1^2(m(w, r)) b_{j+1}^2(m(w, r) + e_1)   if s = r − e_j,
                0   otherwise,

where 1 ≤ i, j ≤ n − 1, and a_i^2(m(w, r)) = a_i^2(m, k), b_i^2(m(w, r) + e_j) = b_i^2(m + e_j, k) for 1 ≤ i, j ≤ n + 1, see (5).

Theorem 2.3 For each fixed K-type k = (k_1, . . . , k_n), for all integers w ≥ max{0, −k_n} and all 0 ≤ t ≤ 1, we have

t F(w, t) = A_w F(w − 1, t) + B_w F(w, t) + C_w F(w + 1, t).   (11)

Proof This result is a consequence of Theorem 2.2 and of the appropriate definitions of A_w, B_w, C_w given in (10), when we take g = a(θ).

We recall that φ(g) and ψ(g) are the one-dimensional spherical functions associated with the G-modules C^{n+1} and (C^{n+1})*, respectively. A direct computation gives

φ(a(θ)) = ⟨a(θ)e_{n+1}, e_{n+1}⟩ = cos θ,

ψ(a(θ)) = ⟨a(θ)λ_{n+1}, λ_{n+1}⟩ = cos θ.

Then φ(a(θ))ψ(a(θ)) = cos²(θ) = t. □

If g ∈ G = SU(n + 1), let A(g) denote the n × n left upper corner of g, and let A be the dense open subset of all g ∈ G such that A(g) is nonsingular. In [13], in order to determine all irreducible spherical functions of G of type k = (k_1, . . . , k_n), an auxiliary function Φ_k : A → End(V_k) is introduced. It is defined by Φ_k(g) = π(A(g)), where π stands for the unique holomorphic representation of GL(n, C) corresponding to the parameter k. It turns out that if k_n > 0, then Φ_k = Φ^{m,k} where m = (k_1, . . . , k_n, 0).

Then instead of looking at a general spherical function Φ^{w,r} = Φ^{m(w,r),k} of type k, we look at the function H^{w,r}(g) = Φ^{w,r}(g)Φ_k(g)^{−1}, which is well defined on A. As before, we construct the matrix function

H(w, t) = (H_{r,s}(w, t))_{(r,s)∈Ω×Ω},

where H_{r,s}(w, t) = H_s^{w,r}(a(θ)), t = cos²θ ∈ (0, 1).

Let Ψ(t) = (Ψ_{r,s}(t))_{(r,s)∈Ω×Ω} be the transpose of H(0, t), i.e. Ψ_{r,s}(t) = H_{s,r}(0, t). In [13], the following crucial theorem is proved.

Theorem 2.4 If k_n > 0, then H_{r,s}(w, t), H(w, t) and

P_w(t) = H(w, t)Ψ(t)^{−1}

are polynomial functions in the variable t whose degrees are

deg H_{r,s}(w, t) = w + Σ_{i=1}^{n−1} min{r_i, s_i},

deg H(w, t) = w + k_1 − k_n,   (12)

deg P_w(t) = w.

It is important to point out that {P_w}_{w≥0} is a sequence of matrix orthogonal polynomials with respect to a matrix weight function W = W(t) supported in the interval (0, 1) and given in [13]. From (11), it easily follows that {P_w}_{w≥0} satisfies the following three-term recursion relation

t P_w(t) = A_w P_{w−1}(t) + B_w P_w(t) + C_w P_{w+1}(t).   (13)

The above three-term recursion relations, which hold for all w ≥ 0, can be written in the following way

t (P_0, P_1, P_2, P_3, . . .)^t = M (P_0, P_1, P_2, P_3, . . .)^t,   where

M = ( B_0 C_0 0
      A_1 B_1 C_1 0
      0   A_2 B_2 C_2
      0   A_3 B_3 . . .
      . . . ).   (14)

Now we observe that the semi-infinite matrix M on the right-hand side is a stochastic matrix, that is, all the entries are nonnegative and the sum of the elements in any row is one. In fact, the elements in the r-row of the w-block are either zero or (A_w)_{r,s}, (B_w)_{r,s}, (C_w)_{r,s}, which are given in (10). Their sum is

Σ_{s∈Ω} (A_w)_{r,s} + (B_w)_{r,s} + (C_w)_{r,s} = a_{n+1}^2(m) b_1^2(m + e_{n+1}) + Σ_{j=2}^{n} a_j^2(m) b_1^2(m + e_j)

+ Σ_{j=1}^{n+1} a_j^2(m) b_j^2(m + e_j) + Σ_{j=2}^{n} a_j^2(m) b_{n+1}^2(m + e_j)

+ a_{n+1}^2(m) Σ_{j=2}^{n} b_j^2(m + e_{n+1})

+ Σ_{2≤i≠j≤n} a_j^2(m) b_i^2(m + e_j) + a_1^2(m) b_{n+1}^2(m + e_1)

+ a_1^2(m) Σ_{j=2}^{n} b_j^2(m + e_1),

where we replaced m(w, r) by m. The right-hand side can be rewritten to obtain

Σ_{s∈Ω} (A_w)_{r,s} + (B_w)_{r,s} + (C_w)_{r,s} = a_{n+1}^2(m) Σ_{j=1}^{n+1} b_j^2(m + e_{n+1}) + Σ_{i=2}^{n} a_i^2(m) Σ_{j=1}^{n+1} b_j^2(m + e_i)

+ a_1^2(m) Σ_{j=1}^{n+1} b_j^2(m + e_1) = Σ_{i=1}^{n+1} a_i^2(m) Σ_{j=1}^{n+1} b_j^2(m + e_i).

Now by using (6) the assertion

Σ_{s∈Ω} (A_w)_{r,s} + (B_w)_{r,s} + (C_w)_{r,s} = 1

follows, proving that the semi-infinite matrix M is stochastic.
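The row-sum computation above can be replayed numerically: one full row of M collects an up-move m → m + e_i weighted by a_i^2(m, k), followed by all down-moves weighted by b_j^2(m + e_i, k). A self-contained Python check (ours; a_i^2, b_i^2 implemented from (5) with absolute values and 0-based indices):

```python
from fractions import Fraction
from math import prod

def a2(m, k, i):
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) - 1 for j in range(len(k)))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(len(m)) if j != i)
    return abs(Fraction(num, den))

def b2(m, k, i):
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) for j in range(len(k)))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(len(m)) if j != i)
    return abs(Fraction(num, den))

def row_sum(m, k):
    """Total mass of one row of M: up by e_i (weight a_i^2), then down by any
    e_j (weight b_j^2 at the intermediate state m + e_i)."""
    total = Fraction(0)
    for i in range(len(m)):
        ai = a2(m, k, i)
        if ai == 0:
            continue                    # the move m -> m + e_i is forbidden
        mi = tuple(x + (1 if j == i else 0) for j, x in enumerate(m))
        total += ai * sum(b2(mi, k, j) for j in range(len(m)))
    return total

assert row_sum((3, 1, -1), (2, 1)) == 1
assert row_sum((4, 2, -3), (2, 1)) == 1
```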

3 The substeps of the random walk

In what follows, we will construct a factorization of the stochastic matrix M appearing in (14) into the product of two stochastic matrices of the form

M = ( Y_0 X_0 0
      0   Y_1 X_1 0
      0   0   Y_2 X_2 0
      0   0   0   Y_3 X_3 . . .
      . . . )
    ( S_0 0
      R_1 S_1 0
      0   R_2 S_2 0
      0   0   R_3 S_3 . . .
      . . . ).   (15)

While the random process given by the matrix M leaves invariant the set P introduced below, see (28), this is not true for the substeps going along with this factorization. This section deals with this complication in great detail.

The multiplication formulas given in Theorem 2.2 restricted to g = a(θ) give

cos(θ)Φ^{m,k}(a(θ)) = Σ_{j=1}^{n+1} a_j^2(m, k)Φ^{m+e_j,k}(a(θ)),   (16)

cos(θ)Φ^{m,k}(a(θ)) = Σ_{j=1}^{n+1} b_j^2(m, k)Φ^{m−e_j,k}(a(θ)).

We recall that we fixed k with k_n > 0 and we took m = m(w, r) as in (9). Also, making the change of variables t = cos²(θ), we defined F_{r,s}(w, t) = Φ_s^{m(w,r),k}(a(θ)). Now we make the following important observation

m(w, r) ± e_j = { m(w ± 1, r) ± e_{n+1}   if j = 1,
                 m(w, r ± e_{j−1}) ± e_{n+1}   if j = 2, . . . , n,
                 m(w, r) ± e_{n+1}   if j = n + 1.   (17)

Introduce the following scalar functions

F^+_{r,s}(w, t) = Φ_s^{m(w,r)+e_{n+1},k}(a(θ)),

and the matrix function

F^+(w, t) = (F^+_{r,s}(w, t))_{(r,s)∈Ω×Ω}.

Then the first identity in (16) becomes

√t F_{r,s}(w, t) = a_1^2(m(w, r)) F^+_{r,s}(w + 1, t) + Σ_{j=1}^{n−1} a_{j+1}^2(m(w, r)) F^+_{r+e_j,s}(w, t)

+ a_{n+1}^2(m(w, r)) F^+_{r,s}(w, t).   (18)

For each w ≥ 0, we define the following matrices of type Ω × Ω

X_w = ((X_w)_{r,s}),   Y_w = ((Y_w)_{r,s}),   (19)

(X_w)_{r,s} = { a_1^2(m(w, r))   if s = r,
                0   otherwise,

(Y_w)_{r,s} = { a_{n+1}^2(m(w, r))   if s = r,
                a_{j+1}^2(m(w, r))   if s = r + e_j,
                0   otherwise.

Now the set of scalar identities (18) with (r, s) ∈ Ω × Ω can be written as a matrix identity in the following more convenient way

√t F(w, t) = X_w F^+(w + 1, t) + Y_w F^+(w, t).   (20)

For each w ≥ 0, we define the following matrices of type Ω × Ω

R_w = ((R_w)_{r,s}),   S_w = ((S_w)_{r,s}),   (21)

(R_w)_{r,s} = { b_1^2(m(w, r) + e_{n+1})   if s = r,
                0   otherwise,

(S_w)_{r,s} = { b_{n+1}^2(m(w, r) + e_{n+1})   if s = r,
                b_{j+1}^2(m(w, r) + e_{n+1})   if s = r − e_j,
                0   otherwise.

If we multiply (20) by √t and use the second multiplication formula given in (16), we obtain

t F(w, t) = X_w(R_{w+1} F(w, t) + S_{w+1} F(w + 1, t)) + Y_w(R_w F(w − 1, t) + S_w F(w, t))

= (X_w R_{w+1} + Y_w S_w)F(w, t) + X_w S_{w+1} F(w + 1, t) + Y_w R_w F(w − 1, t),   (22)

since we claim that

√t F^+(w, t) = R_w F(w − 1, t) + S_w F(w, t).   (23)

Indeed, we have

√t F^+_{r,s}(w, t) = √t Φ_s^{m(w,r)+e_{n+1},k}(a(θ))

= Σ_{j=1}^{n+1} b_j^2(m(w, r) + e_{n+1})Φ_s^{m(w,r)+e_{n+1}−e_j,k}(a(θ))

= b_1^2(m(w, r) + e_{n+1})Φ_s^{m(w−1,r),k}(a(θ))

+ Σ_{j=2}^{n} b_j^2(m(w, r) + e_{n+1})Φ_s^{m(w,r−e_{j−1}),k}(a(θ))

+ b_{n+1}^2(m(w, r) + e_{n+1})Φ_s^{m(w,r),k}(a(θ)),

where we used (17). On the other hand,

(R_w F(w − 1, t))_{r,s} = Σ_{q∈Ω} (R_w)_{r,q} F_{q,s}(w − 1, t) = b_1^2(m(w, r) + e_{n+1})F_{r,s}(w − 1, t),

(S_w F(w, t))_{r,s} = Σ_{q∈Ω} (S_w)_{r,q} F_{q,s}(w, t)

= b_{n+1}^2(m(w, r) + e_{n+1})F_{r,s}(w, t) + Σ_{j=1}^{n−1} b_{j+1}^2(m(w, r) + e_{n+1})F_{r−e_j,s}(w, t).

Then (23) follows easily.

Finally, if we compare (22) with (11) in Theorem 2.3, we obtain

A_w = Y_w R_w,   B_w = X_w R_{w+1} + Y_w S_w,   C_w = X_w S_{w+1},

which is equivalent to the factorization (15).
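For n = 2 these identities can be verified entry by entry with exact arithmetic. The Python sketch below is ours (names and 0-based index conventions are our own, with a_i^2, b_i^2 taken from (5) with absolute values); it builds the blocks for k = (2, 1), where Ω = {(0), (1)}, and checks A_w = Y_w R_w, B_w = X_w R_{w+1} + Y_w S_w, C_w = X_w S_{w+1} for a few values of w.

```python
from fractions import Fraction
from math import prod

k = (2, 1)                     # fixed K-type with k_n > 0, n = 2
OM = [(0,), (1,)]              # Omega: 0 <= r_1 <= k_1 - k_2 = 1

def a2(m, i):
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) - 1 for j in range(2))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(3) if j != i)
    return abs(Fraction(num, den))

def b2(m, i):
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) for j in range(2))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(3) if j != i)
    return abs(Fraction(num, den))

def mw(w, r):                  # m(w, r) of (9)
    return (w + k[0], r[0] + k[1], -(w + r[0]))

def up(m, i):                  # m + e_i (0-based i)
    return tuple(x + (1 if j == i else 0) for j, x in enumerate(m))

def X(w):                      # blocks of (19)
    return [[a2(mw(w, r), 0) if s == r else 0 for s in OM] for r in OM]

def Y(w):
    return [[a2(mw(w, r), 2) if s == r else
             a2(mw(w, r), 1) if s == (r[0] + 1,) else 0
             for s in OM] for r in OM]

def R(w):                      # blocks of (21)
    return [[b2(up(mw(w, r), 2), 0) if s == r else 0 for s in OM] for r in OM]

def S(w):
    return [[b2(up(mw(w, r), 2), 2) if s == r else
             b2(up(mw(w, r), 2), 1) if s == (r[0] - 1,) else 0
             for s in OM] for r in OM]

def A(w):                      # blocks of (10)
    return [[a2(mw(w, r), 2) * b2(up(mw(w, r), 2), 0) if s == r else
             a2(mw(w, r), 1) * b2(up(mw(w, r), 1), 0) if s == (r[0] + 1,) else 0
             for s in OM] for r in OM]

def B(w):
    out = []
    for r in OM:
        m = mw(w, r)
        row = []
        for s in OM:
            if s == r:
                row.append(sum(a2(m, j) * b2(up(m, j), j) for j in range(3)))
            elif s == (r[0] + 1,):
                row.append(a2(m, 1) * b2(up(m, 1), 2))
            elif s == (r[0] - 1,):
                row.append(a2(m, 2) * b2(up(m, 2), 1))
            else:
                row.append(0)
        out.append(row)
    return out

def C(w):
    return [[a2(mw(w, r), 0) * b2(up(mw(w, r), 0), 2) if s == r else
             a2(mw(w, r), 0) * b2(up(mw(w, r), 0), 1) if s == (r[0] - 1,) else 0
             for s in OM] for r in OM]

def mul(P, Q):
    return [[sum(P[i][t] * Q[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

for w in (1, 2, 3):
    assert A(w) == mul(Y(w), R(w))
    assert B(w) == add(mul(X(w), R(w + 1)), mul(Y(w), S(w)))
    assert C(w) == mul(X(w), S(w + 1))
```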

We end by checking that both matrices on the right-hand side of (15) are stochastic:

Σ_{s∈Ω} (Y_w)_{r,s} + Σ_{s∈Ω} (X_w)_{r,s} = a_{n+1}^2(m(w, r)) + Σ_{1≤j≤n−1} a_{j+1}^2(m(w, r)) + a_1^2(m(w, r)) = 1,

Σ_{s∈Ω} (R_w)_{r,s} + Σ_{s∈Ω} (S_w)_{r,s} = b_1^2(m(w, r) + e_{n+1}) + b_{n+1}^2(m(w, r) + e_{n+1})

+ Σ_{1≤j≤n−1} b_{j+1}^2(m(w, r) + e_{n+1}) = 1,

where we used that Σ_{i=1}^{n+1} a_i^2(m, k) = Σ_{i=1}^{n+1} b_i^2(m, k) = 1, see (6).

Now we want to consider the random walks associated with the probability matrices appearing in (15),

M = ( B_0 C_0 0
      A_1 B_1 C_1 0
      0   A_2 B_2 C_2 0
      . . . ) = M_1 M_2,

M_1 = ( Y_0 X_0 0
        0   Y_1 X_1 0
        0   0   Y_2 X_2 0
        . . . ),   M_2 = ( S_0 0
                           R_1 S_1 0
                           0   R_2 S_2 0
                           . . . ).

Let F_w and F^+_w denote, respectively, the polynomial functions F_w = F(w, t) and F^+_w = F^+(w, t). Then (23) can be written as follows

√t (F^+_0, F^+_1, F^+_2, . . .)^t = ( S_0 0
                                      R_1 S_1 0
                                      0   R_2 S_2 . . .
                                      . . . ) (F_0, F_1, F_2, . . .)^t = M_2 (F_0, F_1, F_2, . . .)^t.   (25)

Similarly, (20) gives

√t (F_0, F_1, F_2, . . .)^t = ( Y_0 X_0 0
                                0   Y_1 X_1 . . .
                                0   0   Y_2 . . .
                                . . . ) (F^+_0, F^+_1, F^+_2, . . .)^t = M_1 (F^+_0, F^+_1, F^+_2, . . .)^t.   (26)

We can now rewrite (22) in matrix form,

t (F_0, F_1, F_2, . . .)^t = √t M_1 (F^+_0, F^+_1, F^+_2, . . .)^t = M_1 M_2 (F_0, F_1, F_2, . . .)^t = M (F_0, F_1, F_2, . . .)^t.   (27)

The state space of the random walks W, W_1, W_2 associated, respectively, to the stochastic matrices M, M_1, M_2 is the set N_{≥0} × Ω, and W is equal to the composition W_1 ∘ W_2.

We recall that the map (w, r) ↦ m(w, r) defined in (9) is an injection of N_{≥0} × Ω into the k-spherical dual U(n + 1)(k) of U(n + 1), and its image is

P = {m ∈ U(n + 1)(k) : s_m = s_k},   (28)

where s_m = m_1 + · · · + m_{n+1}, s_k = k_1 + · · · + k_n.

Let us now consider the random walk W_1 associated with the stochastic matrix M_1. Below we display the entries of M_1 at the different sites of its (w, r)-row:

a_{n+1}^2(m(w, r))   if the m(w, s)-site = m(w, r),
a_{j+1}^2(m(w, r))   if the m(w, s)-site = m(w, r + e_j),
a_1^2(m(w, r))       if the m(w, s)-site = m(w + 1, r),
0                    at the other sites.

The appearance of the plus sign in the right-hand side of (26) makes it natural to consider instead the random walk W_1^+ obtained from W_1 by applying a shift by e_{n+1}. Thus, if the system is at state m(w, r) at time t, then at time t + 1 it can move in the following ways

m(w, r) → m(w, r) + e_{n+1},   with probability a_{n+1}^2(m(w, r)),
m(w, r) → m(w, r) + e_{j+1},   with probability a_{j+1}^2(m(w, r)),
m(w, r) → m(w, r) + e_1,       with probability a_1^2(m(w, r)),
m(w, r) → other states,        with probability 0,

because m(w, r + e_j) + e_{n+1} = m(w, r) + e_{j+1} for 1 ≤ j ≤ n − 1, and m(w + 1, r) + e_{n+1} = m(w, r) + e_1. This is in accordance with the following formula, derived by looking at the ((w, r), s)-entry of (26),

cos(θ)Φ^{m(w,r),k}(a(θ)) = Σ_{j=1}^{n+1} a_j^2(m(w, r))Φ^{m(w,r)+e_j,k}(a(θ)).

Now it is worth observing that W_1^+ does not leave invariant the subset P, but it extends to a random walk W̃_1 in U(n + 1)(k) defined by

W̃_1 : m ↦ m + e_j, with probability a_j^2(m, k),   (29)

for 1 ≤ j ≤ n + 1.
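A simulation sketch of this walk on U(3)(k) (the code and names are ours): each step adds some e_i with probability a_i^2(m, k), and moves that would break the interlacing conditions automatically receive probability zero, so the walk never leaves U(3)(k).

```python
import random
from fractions import Fraction
from math import prod

def a2(m, k, i):
    """a_i^2(m, k) of (5), 0-based index i."""
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) - 1 for j in range(len(k)))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(len(m)) if j != i)
    return abs(Fraction(num, den))

def w1_step(m, k, rng):
    weights = [float(a2(m, k, i)) for i in range(len(m))]
    i = rng.choices(range(len(m)), weights=weights)[0]
    return tuple(x + (1 if j == i else 0) for j, x in enumerate(m))

rng = random.Random(1)
m, k = (3, 1, -1), (2, 1)
for _ in range(50):
    m = w1_step(m, k, rng)
    assert all(m[i] >= k[i] >= m[i + 1] for i in range(2))
assert sum(m) == 3 + 50      # each step raises s_m by exactly one
```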

We proceed similarly with the random walk W_2 associated to the stochastic matrix M_2. Below we display the entries of M_2 at the different sites of its (w, r)-row:

b_{n+1}^2(m(w, r) + e_{n+1})   if the m(w, s)-site = m(w, r),
b_{j+1}^2(m(w, r) + e_{n+1})   if the m(w, s)-site = m(w, r − e_j),
b_1^2(m(w, r) + e_{n+1})       if the m(w, s)-site = m(w − 1, r),
0                              at the other sites.

The appearance of the plus sign in the left-hand side of (25) makes it natural to consider instead the random walk W_2^− obtained from W_2 by applying a shift by −e_{n+1}. Thus, if the system is at state m(w, r) at time t, then at time t + 1 it can move in the following ways

m(w, r) → m(w, r) − e_{n+1},   with probability b_{n+1}^2(m(w, r) + e_{n+1}),
m(w, r) → m(w, r) − e_{j+1},   with probability b_{j+1}^2(m(w, r) + e_{n+1}),
m(w, r) → m(w, r) − e_1,       with probability b_1^2(m(w, r) + e_{n+1}),
m(w, r) → other states,        with probability 0,

because m(w, r − e_j) − e_{n+1} = m(w, r) − e_{j+1} for 1 ≤ j ≤ n − 1, and m(w − 1, r) − e_{n+1} = m(w, r) − e_1. This is in accordance with the following formula, derived by looking at the ((w, r), s)-entry of (25),

cos(θ)Φ^{m(w,r)+e_{n+1},k}(a(θ)) = Σ_{j=1}^{n+1} b_j^2(m(w, r) + e_{n+1})Φ^{m(w,r)+e_{n+1}−e_j,k}(a(θ)).

Then W_2^− does not leave invariant the subset P, but it extends to a random walk W̃_2 in U(n + 1)(k) defined by

W̃_2 : m ↦ m − e_j, with probability b_j^2(m + e_{n+1}, k),

for 1 ≤ j ≤ n + 1.
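The same kind of sketch for this walk, using the stated weights b_j^2(m + e_{n+1}, k) (code and names are ours): by (6) the weights sum to one, and moves that would break the interlacing conditions get weight zero, so in this test case the walk stays in U(3)(k) while lowering s_m by one at each step.

```python
import random
from fractions import Fraction
from math import prod

def b2(m, k, i):
    """b_i^2(m, k) of (5), 0-based index i."""
    num = prod(k[j] - m[i] - (j + 1) + (i + 1) for j in range(len(k)))
    den = prod(m[j] - m[i] - (j + 1) + (i + 1) for j in range(len(m)) if j != i)
    return abs(Fraction(num, den))

def w2_step(m, k, rng):
    shifted = m[:-1] + (m[-1] + 1,)              # m + e_{n+1}
    weights = [b2(shifted, k, j) for j in range(len(m))]
    assert sum(weights) == 1                     # identity (6)
    j = rng.choices(range(len(m)), weights=[float(x) for x in weights])[0]
    return tuple(x - (1 if i == j else 0) for i, x in enumerate(m))

rng = random.Random(2)
m, k = (3, 1, -1), (2, 1)
for _ in range(30):
    m = w2_step(m, k, rng)
    assert all(m[i] >= k[i] >= m[i + 1] for i in range(2))
assert sum(m) == 3 - 30      # each step lowers s_m by exactly one
```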

The transition matrices of W̃_1 and W̃_2 are, respectively, the following block bidiagonal matrices

M̃_1 = ( Ỹ_0 X̃_0 0
         0    Ỹ_1 X̃_1 0
         0    0    Ỹ_2 X̃_2 0
         . . . ),   M̃_2 = ( S̃_0 0
                             R̃_1 S̃_1 0
                             0    R̃_2 S̃_2 0
                             . . . ),

(X̃_w)_{m,n} = { a_1^2(m)   if n = m,
                 0          otherwise,

(Ỹ_w)_{m,n} = { a_{n+1}^2(m)   if n = m,
                 a_{j+1}^2(m)   if n = m + e_j,
                 0              otherwise,

(R̃_w)_{m,n} = { b_1^2(m + e_{n+1})   if n = m,
                 0                    otherwise,

(S̃_w)_{m,n} = { b_{n+1}^2(m + e_{n+1})   if n = m,
                 b_{j+1}^2(m + e_{n+1})   if n = m − e_j,
                 0                        otherwise,

where m, n ∈ U(n + 1)(k) are such that w = m_1 − k_1 = n_1 − k_1, and 1 ≤ j ≤ n − 1.

Moreover, the stochastic matrix M̃ corresponding to the composition W̃ = W̃_1 ∘ W̃_2 is equal to M̃_1 M̃_2, and it is given by

M̃ = ( B̃_0 C̃_0 0
       Ã_1 B̃_1 C̃_1 0
       0    Ã_2 B̃_2 C̃_2 0
       . . . ),

(Ã_w)_{m,n} = { a_{n+1}^2(m) b_1^2(m + e_{n+1})   if n = m,
                a_{j+1}^2(m) b_1^2(m + e_{j+1})   if n = m + e_j,
                0                                 otherwise,

(B̃_w)_{m,n} = { Σ_{1≤j≤n+1} a_j^2(m) b_j^2(m + e_j)   if n = m,
                 a_{j+1}^2(m) b_{n+1}^2(m + e_{j+1})   if n = m + e_j,
                 a_{n+1}^2(m) b_{j+1}^2(m + e_{n+1})   if n = m − e_j,
                 a_{j+1}^2(m) b_{i+1}^2(m + e_{j+1})   if n = m + e_j − e_i,
                 0                                     otherwise,

(C̃_w)_{m,n} = { a_1^2(m) b_{n+1}^2(m + e_1)   if n = m,
                a_1^2(m) b_{j+1}^2(m + e_1)   if n = m − e_j,
                0                             otherwise,

where m, n ∈ U(n + 1)(k) are such that w = m_1 − k_1 = n_1 − k_1, and 1 ≤ i, j ≤ n − 1. The coefficients a_i^2(m), b_i^2(m) for 1 ≤ i ≤ n + 1 are those defined in (5).

If we identify the set of pairs (w, r) with the subset P, defined in (28), by (w, r) ↦ m(w, r), then clearly W = W̃|P, because M becomes a submatrix of M̃. Therefore

W1 ∘ W2 = W = W̃|P = (W̃1 ∘ W̃2)|P.

To conclude, the analysis of the random walk W associated with the stochastic matrix M is simplified by looking at the decomposition W = (W̃1 ∘ W̃2)|P instead of considering W = W1 ∘ W2.

4 An urn model for U(3)

We now give a concrete probabilistic mechanism that goes along with the random walk W1 constructed in Sect. 3 by group theoretic means, see (29). An entirely similar construction going with W2 can be considered for the other substep of our process.

This section is included for the benefit of the reader. It describes in detail, for the simple case of n = 2 going along with the pair (U(3), U(2)), a construction that will be given in general in Sect. 5.

A configuration, or state of our system, is now a triple of integers m = (m1, m2, m3) subject to the constraints m1 ≥ k1 ≥ m2 ≥ k2 ≥ m3, with two fixed integers k1 ≥ k2, see (1). We describe a stochastic mechanism whereby one of the three values mi is increased by one with the following probabilities, see (5):

a_1^2(m, k) = (m1 − k1 + 1)(m1 − k2 + 2) / ((m1 − m2 + 1)(m1 − m3 + 2)),

a_2^2(m, k) = (k1 − m2)(m2 − k2 + 1) / ((m1 − m2 + 1)(m2 − m3 + 1)),

a_3^2(m, k) = (k1 − m3 + 1)(k2 − m3) / ((m1 − m3 + 2)(m2 − m3 + 1)).
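These three expressions sum to one, so they do define a one-step transition mechanism. The following minimal sketch (hypothetical helper names a1, a2, a3; exact rational arithmetic) checks this for a sample admissible configuration:

```python
from fractions import Fraction as F

# The three transition probabilities for n = 2 (state m = (m1, m2, m3),
# fixed k = (k1, k2)), transcribed from the formulas above.
def a1(m, k):
    return F((m[0] - k[0] + 1) * (m[0] - k[1] + 2),
             (m[0] - m[1] + 1) * (m[0] - m[2] + 2))

def a2(m, k):
    return F((k[0] - m[1]) * (m[1] - k[1] + 1),
             (m[0] - m[1] + 1) * (m[1] - m[2] + 1))

def a3(m, k):
    return F((k[0] - m[2] + 1) * (k[1] - m[2]),
             (m[0] - m[2] + 2) * (m[1] - m[2] + 1))

# A valid configuration: m1 >= k1 >= m2 >= k2 >= m3.
m, k = (8, 5, 1), (6, 3)
assert a1(m, k) + a2(m, k) + a3(m, k) == 1
```

The configuration (8, 5, 1) with k = (6, 3) is the one shown in Fig. 3.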

In the general scheme to be considered later, this case corresponds to the value n = 2, and thus we start with two urns B1, B2. In urn Bj, j = 1, 2, place mj − kj + 1 balls of color cj and kj − mj+1 balls of color dj. These four colors are all different. Notice that we could have no balls of colors d1 or d2, and that the total number of balls in urn Bj is mj − mj+1 + 1.

It will be useful to consider the following ordered set of urns

B1, B2, B1 U B2.

In view of the notation to be introduced in the general case, we denote these urns as

B1,1, B2,2, B1,2.

We will introduce later on an order among certain collections of urns that will yield, in this particular case,

B1,1 < B2,2 < B1,2.

Now perform a total of three consecutive experiments. Each experiment consists of drawing one ball at random (i.e., with the uniform distribution) from the corresponding urn in the ordered set of urns above, recording the outcome as a letter in a word, and continuing to the next experiment, making sure to return the ball that has been drawn to its original urn after the experiment has been performed.

The first experiment consists of picking one ball from urn B1,1 = B1. This can give a ball of color c1 or d1. Record the outcome c1 or d1 as the first letter in a word of three letters, and return the ball to its original urn, B1,1.

The second experiment consists of picking one ball from urn B2,2 = B2. This can result in a ball of color either c2 or d2. Record the result as the second letter in a word that will have a total of three letters (the colors of the balls chosen in experiments 1, 2, 3), and return the ball to its original urn, B2,2.

The last experiment consists of picking one ball from the union of the urns B1,1 and B2,2, that is, urn B1,2. The color of the ball in question, c1, d1, c2 or d2, is the last letter in our word. This last ball, drawn from B1,2 = B1 ∪ B2, is then returned to the urn B1 or B2 where it came from.

There is a total of sixteen (= 2 x 2 x 4) possible words that can arise in this fashion from an alphabet of four letters. These words constitute the set of all possible outcomes of the experiment made up of these three successive and properly ordered ones.

Since we return the chosen ball at the end of each one of these experiments to its original urn, we have that the state of the system has not yet changed. This is about to happen now.

We need a rule to decide which of the three values m1, m2, m3 will be increased by one unit as the result of our experiment. To this end, we break up the set of sixteen words into three disjoint and exhaustive sets. These sets will be denoted by S1,3, S2,3 and S3,3, and the sample space S3 of cardinality 16 is given by

S3 = S1,3 ∪ S2,3 ∪ S3,3.

Each set Sj,3, consisting of words with three letters, will be obtained by a "growth process" starting from the sets we would have if we had considered the previous case, namely n = 1, when we have only one urn and we were dealing with U(2). In that case, the sets are made up of words of one letter, either c1 or d1. To make the connection with the general case, we will denote these sets in the case of one urn by S1,2 and S2,2, and the sample space by S2 = S1,2 ∪ S2,2. Explicitly, S1,2 = {c1}, S2,2 = {d1}.

Let us come back to the case n = 2. The class S1,3 is formed by including all three-letter words that start as those of S1,2 and whose remaining two letters are such that the last one is not d2, that is, it is either c1, d1 or c2. Thus,

S1,3 = {(c1, c2, c1), (c1, c2, d1), (c1, c2, c2), (c1, d2, c1), (c1, d2, d1), (c1, d2, c2)}.

The class S2,3 is formed by including all three-letter words that start as those of S2,2 and whose remaining two letters are such that the first one is not d2. Explicitly, S2,3 is

S2,3 = {(d1, c2, c1), (d1, c2, d1), (d1, c2, c2), (d1, c2, d2)}.

It should be noticed that the meaning of the requirement "not d2" is quite different when it applies to the second urn B2,2, as above, or to the third urn B1,2, as in the previous case.

Finally, S3,3 is obtained by taking the union of all three-letter words that start as in S1,2 and have d2 as their last letter, together with all words that start as in S2,2 and have d2 as the second letter. It should be noticed that S3,3 is obtained by going over all the classes already built, S1,3 and S2,3, and replacing the condition "not d2" by d2. The class S3,3 is thus made up of two sets of words, namely

S3,3 = {(c1, c2, d2), (c1, d2, d2)} ∪ {(d1, d2, c1), (d1, d2, d1), (d1, d2, c2), (d1, d2, d2)}.

It takes almost no effort to see that all these 6 + 4 + 6 = 16 words have been classified into three disjoint and exhaustive classes.
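Since each draw is made with replacement, the probability of a word is simply the product of the three draw probabilities, so the class totals can be checked by brute-force enumeration. A sketch (hypothetical names; the classification rules are the ones just described, and the urn contents are those of a sample configuration):

```python
from fractions import Fraction as F
from itertools import product

# Urn contents for the configuration m = (8, 5, 1), k = (6, 3):
# B1 holds m1-k1+1 balls c1 and k1-m2 balls d1; B2 holds m2-k2+1 balls c2
# and k2-m3 balls d2; B12 is the union urn B_{1,2}.
m, k = (8, 5, 1), (6, 3)
B1 = {'c1': m[0] - k[0] + 1, 'd1': k[0] - m[1]}
B2 = {'c2': m[1] - k[1] + 1, 'd2': k[1] - m[2]}
B12 = {**B1, **B2}

def probs(urn):
    total = sum(urn.values())
    return {color: F(count, total) for color, count in urn.items()}

p1, p2, p12 = probs(B1), probs(B2), probs(B12)
words = {(x, y, z): p1[x] * p2[y] * p12[z] for x, y, z in product(p1, p2, p12)}
assert len(words) == 16 and sum(words.values()) == 1

# S1,3: starts like S1,2 (with c1) and does not end in d2;
# S2,3: starts like S2,2 (with d1) and its second letter is not d2;
# S3,3: everything else.
S13 = sum(p for w, p in words.items() if w[0] == 'c1' and w[2] != 'd2')
S23 = sum(p for w, p in words.items() if w[0] == 'd1' and w[1] != 'd2')
S33 = 1 - S13 - S23
```

For this configuration the totals come out as 7/12, 3/20 and 4/15, which are exactly a_1^2(m, k), a_2^2(m, k) and a_3^2(m, k) computed from the formulas at the beginning of this section.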

Now we compute the total probability of getting a result that belongs to each class. For the first class S1,3, we have

(m1 − k1 + 1)(m1 − k2 + 2) / ((m1 − m2 + 1)(m1 − m3 + 2)) = a_1^2(m, k).

For the second class S2,3, we have that the probability is

(k1 − m2)(m2 − k2 + 1) / ((m1 − m2 + 1)(m2 − m3 + 1)) = a_2^2(m, k).

Finally, the total probability of the third class S3,3 is

(m1 − k1 + 1)(k2 − m3) / ((m1 − m2 + 1)(m1 − m3 + 2)) + (k1 − m2)(k2 − m3) / ((m1 − m2 + 1)(m2 − m3 + 1))
= (k1 − m3 + 1)(k2 − m3) / ((m1 − m3 + 2)(m2 − m3 + 1)) = a_3^2(m, k).

We are ready to give a rule for changing the state of the system in one unit of time. A result belonging to the subset Sj,3, j = 1, 2, 3, will lead to a transition to a new state m + ej, where mj is increased by one. In terms of balls, this will be achieved by removing one ball of color dj−1 from each urn containing balls of color dj−1, and adding one ball of color cj from the bath to each urn containing balls of color cj. When j = 1 we do no removal.
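The whole mechanism — three draws with replacement, classification of the word, increase of one coordinate — can be condensed into a one-step transition function. A sketch (hypothetical helper step; for simplicity the ball bookkeeping is rebuilt from the state on each call rather than updated incrementally as described above):

```python
import random

def step(m, k, rng):
    """One transition of the n = 2 urn walk: returns m with one entry increased."""
    m1, m2, m3 = m
    k1, k2 = k
    B1 = ['c1'] * (m1 - k1 + 1) + ['d1'] * (k1 - m2)
    B2 = ['c2'] * (m2 - k2 + 1) + ['d2'] * (k2 - m3)
    word = (rng.choice(B1), rng.choice(B2), rng.choice(B1 + B2))
    if word[0] == 'c1' and word[2] != 'd2':      # class S1,3: increase m1
        j = 0
    elif word[0] == 'd1' and word[1] != 'd2':    # class S2,3: increase m2
        j = 1
    else:                                        # class S3,3: increase m3
        j = 2
    return tuple(mi + (i == j) for i, mi in enumerate(m))

rng = random.Random(0)
m, k = (8, 5, 1), (6, 3)
for _ in range(1000):
    m = step(m, k, rng)
    # The constraints m1 >= k1 >= m2 >= k2 >= m3 are never violated, because a
    # forbidden move has probability zero (the relevant urn holds no such ball).
    assert m[0] >= k[0] >= m[1] >= k[1] >= m[2]
```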

5 An urn model for every U(n + 1)

In this section, we describe a random mechanism that gives rise to a Markov chain whose one-step transition matrix is the block bidiagonal matrix

M1 =
[ Y0  X0  0           ]
[ 0   Y1  X1  0       ]
[     0   Y2  X2  0   ]
[         0   Y3  X3  ... ]

appearing in the factorization (15), where the matrices Xi, Yi are defined in (19).

A configuration is a set of n + 1 values of the integers mi, 1 ≤ i ≤ n + 1, subject to the constraints m1 ≥ k1 ≥ m2 ≥ ··· ≥ mn ≥ kn ≥ mn+1, where the integers ki remain unchanged throughout time. We will construct a stochastic process whereby in one unit of time one of the mj is increased by one with probability given by

a_j^2(m, k) = Π_{i=1}^{n} (ki − mj − i + j − 1) / Π_{1≤i≤n+1, i≠j} (mi − mj − i + j).    (32)
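A direct transcription of this formula (hypothetical helper a, 1-based indices, exact rational arithmetic) confirms that the n + 1 probabilities sum to one and that for n = 2 they reduce to the three fractions of Sect. 4:

```python
from fractions import Fraction as F
from math import prod

def a(j, m, k):
    """a_j^2(m, k) for len(m) = n + 1, len(k) = n; j runs from 1 to n + 1."""
    n = len(k)
    num = prod(k[i - 1] - m[j - 1] - i + j - 1 for i in range(1, n + 1))
    den = prod(m[i - 1] - m[j - 1] - i + j for i in range(1, n + 2) if i != j)
    return F(num, den)

# n = 4: the five probabilities sum to one.
m, k = (9, 7, 5, 3, 1), (8, 6, 4, 2)
assert sum(a(j, m, k) for j in range(1, 6)) == 1

# n = 2: agreement with the explicit fractions of Sect. 4.
m, k = (8, 5, 1), (6, 3)
assert a(1, m, k) == F(7, 12)
assert a(2, m, k) == F(3, 20)
assert a(3, m, k) == F(4, 15)
```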

Consider n urns B1, ..., Bn. In urn Bj place mj − kj + 1 balls of color cj and kj − mj+1 balls of color dj. We assume that the colors cj, dj are all different. It should be noticed that in urn Bj there may be no ball of color dj, and that the total number of balls in Bj is mj − mj+1 + 1. Consider the following ordered set of urns

B1, B2, B1 ∪ B2, B3, B2 ∪ B3, B1 ∪ B2 ∪ B3, ..., Bn, Bn−1 ∪ Bn, ..., B1 ∪ ··· ∪ Bn.

The union of urns is an urn whose content is the union of the sets of balls in each urn in the union. Observe that the total number of urns under consideration is n(n + 1)/2. Let

Bk,j = Bk ∪ Bk+1 ∪ ··· ∪ Bj, 1 ≤ k ≤ j ≤ n.

Clearly Bj,j = Bj, and the set of all urns

{Bk,j : 1 ≤ k ≤ j ≤ n}

is ordered lexicographically according to: (k, j) < (r, s) if j < s or if j = s and r < k.

We will perform a total of n(n + 1)/2 consecutive experiments. Each experiment consists of drawing one ball at random (i.e., with the uniform distribution) from the corresponding urn in the ordered set of urns, recording the outcome as a letter in a word, and continuing to the next experiment, making sure to return the ball to its original urn after the experiment has been performed. One should think of a complete experiment as consisting of these n(n + 1)/2 individual experiments. The transition from the present state of the system to the next one takes place after the complete experiment is carried out.

The first experiment consists of picking one ball from urn B1,1; this can give a ball of color c1 or d1. The result is recorded and the ball is put back in urn B1,1. The second experiment consists of picking one ball from urn B2,2; this can result in either a ball of color c2 or d2.

Record the result as the second letter in a word that will have a total of n(n + 1)/2 letters. Put the ball back in urn B2,2. Keep on going by taking successively at random a ball from an urn Bk,j and adding the letter corresponding to its color to the right of the word obtained in the previous step. The process finishes once a ball of the last urn B1,n is picked and a final word of n(n + 1)/2 letters is obtained.

The alphabet is the set {cj, dj : 1 ≤ j ≤ n} of 2n letters. Then the sample space Sn+1 consists of all words w of n(n + 1)/2 letters that can be written with such an alphabet, with the restriction that the letters allowed in the place (k, j) correspond to the color of some ball in urn Bk,j. The cardinality of the sample space is

|Sn+1| = Π_{1≤k≤j≤n} 2(j − k + 1).
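The product is straightforward to evaluate; a short sketch (hypothetical name sample_space_size) reproduces the cardinalities 16, 768 and 294912 that appear in the examples below for n = 2, 3, 4:

```python
from math import prod

def sample_space_size(n):
    """|S_{n+1}| = product over 1 <= k <= j <= n of 2(j - k + 1)."""
    return prod(2 * (j - k + 1) for j in range(1, n + 1) for k in range(1, j + 1))

assert sample_space_size(2) == 16
assert sample_space_size(3) == 768
assert sample_space_size(4) == 294912
```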

Now by induction on n ≥ 1, we define a partition of Sn+1 into n + 1 disjoint subsets

Sn+1 = ∪_{1≤j≤n+1} Sj,n+1.

For the benefit of the reader, the construction will be spelled out in detail for small values of n after we describe it in the general case and prove Proposition 5.2. We start with S2 = S1,2 ∪ S2,2 where

S1,2 = {d̄1}, S2,2 = {d1}, with d̄1 = c1,

and |S1,2| = |S2,2| = 1, |S2| = 2.

We make the following convention: the symbol d̄j in the (k, j)-place of a word stands for any color of a ball in urn Bk,j different from dj, and the letter x in the (k, j)-place of a word stands for any possible color of a ball in urn Bk,j. If n ≥ 2 we set

S1,n+1 = {w1,n+1 = w1,n x ··· x d̄n ∈ Sn+1 : w1,n ∈ S1,n}.

Observe that the number of letters in the word w1,n+1 to the right of the word w1,n is n. Similarly we define

S2,n+1 = {w2,n+1 = w2,n x ··· x d̄n x ∈ Sn+1 : w2,n ∈ S2,n}.

More generally, for 1 ≤ j ≤ n, we let

Sj,n+1 = {wj,n+1 = wj,n x ··· x d̄n x ··· x ∈ Sn+1 : wj,n ∈ Sj,n},

where the number of x's to the right of d̄n is j − 1. The definition of Sn+1,n+1 is more interesting, namely

Sn+i,n+i ={wn+i,n+i = wi,nx ■ ■ ■ xdn e Sn+i : wi,n e Si,n}

U {wn+i,n+i = w2,nx ■ ■ ■ xdnx e Sn+i : w2,n e S2,n} U ■ ■ ■ U {wn+i,n+i — wn,ndnx • • • x e Sn+i : wn,n e Sn,n}.

Proposition 5.1 Let n ≥ 2. Then for 1 ≤ j ≤ n we have

|Sj,n+1| = |Sj,n| (2(n − j) + 1) Π_{1≤k≤n, k≠j} 2(n − k + 1),

|Sn+1,n+1| = Σ_{1≤j≤n} |Sj,n| Π_{1≤k≤n, k≠j} 2(n − k + 1).
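Iterating these two counting formulas, starting from |S1,2| = |S2,2| = 1, reproduces all the cardinalities listed in the examples below. A sketch (hypothetical name class_sizes):

```python
from math import prod

def class_sizes(n):
    """Cardinalities (|S_{1,n+1}|, ..., |S_{n+1,n+1}|) via Proposition 5.1."""
    sizes = [1, 1]                          # |S_{1,2}| = |S_{2,2}| = 1
    for m in range(2, n + 1):               # grow from S_m to S_{m+1}
        # Product over the unconstrained new places (k, m), k != j.
        free = lambda j: prod(2 * (m - k + 1) for k in range(1, m + 1) if k != j)
        new = [sizes[j - 1] * (2 * (m - j) + 1) * free(j) for j in range(1, m + 1)]
        new.append(sum(sizes[j - 1] * free(j) for j in range(1, m + 1)))
        sizes = new
    return sizes

assert class_sizes(2) == [6, 4, 6]
assert class_sizes(3) == [240, 144, 144, 240]
assert class_sizes(4) == [80640, 46080, 41472, 46080, 80640]
```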

Proposition 5.2 {Sj,n+1 : 1 ≤ j ≤ n + 1} is a partition of the sample space Sn+1.

Proof The proof is by induction on n ≥ 1. For n = 1, we have

S2 = {d̄1, d1}, S1,2 = {d̄1}, S2,2 = {d1}.

Thus, the statement is true for n = 1. Now assume that Sn = ∪_{j=1}^{n} Sj,n is a partition of Sn for n ≥ 1. If w ∈ Sn+1, then w = wj,n x ··· x where wj,n ∈ Sj,n for a unique j. The letter in the j-place of the last n letters is either dn or of the form d̄n. In the first case w ∈ Sn+1,n+1, and in the second case w ∈ Sj,n+1. Thus, Sn+1 = ∪_{j=1}^{n+1} Sj,n+1. At the same time, we saw that w ∈ Sj,n+1 for a unique 1 ≤ j ≤ n + 1. This completes the proof. □

The construction above is now made explicit for small values of n.

1) n = 2.

S1,3 = {d̄1 x d̄2}, S2,3 = {d1 d̄2 x}, S3,3 = {d̄1 x d2} ∪ {d1 d2 x},
|S1,3| = 6, |S2,3| = 4, |S3,3| = 6, |S3| = 16.

2) n = 3.

S1,4 = {d̄1 x d̄2 x x d̄3}, S2,4 = {d1 d̄2 x x d̄3 x},
S3,4 = {d̄1 x d2 d̄3 x x} ∪ {d1 d2 x d̄3 x x},
S4,4 = {d̄1 x d̄2 x x d3} ∪ {d1 d̄2 x x d3 x} ∪ {d̄1 x d2 d3 x x} ∪ {d1 d2 x d3 x x},
|S1,4| = 240, |S2,4| = 144, |S3,4| = 144, |S4,4| = 240, |S4| = 768.

3) n = 4.

S1,5 = {d̄1 x d̄2 x x d̄3 x x x d̄4}, S2,5 = {d1 d̄2 x x d̄3 x x x d̄4 x},
S3,5 = {d̄1 x d2 d̄3 x x x d̄4 x x} ∪ {d1 d2 x d̄3 x x x d̄4 x x},
S4,5 = {d̄1 x d̄2 x x d3 d̄4 x x x} ∪ {d1 d̄2 x x d3 x d̄4 x x x}
∪ {d̄1 x d2 d3 x x d̄4 x x x} ∪ {d1 d2 x d3 x x d̄4 x x x},
S5,5 = {d̄1 x d̄2 x x d̄3 x x x d4} ∪ {d1 d̄2 x x d̄3 x x x d4 x}
∪ {d̄1 x d2 d̄3 x x x d4 x x} ∪ {d1 d2 x d̄3 x x x d4 x x}
∪ {d̄1 x d̄2 x x d3 d4 x x x} ∪ {d1 d̄2 x x d3 x d4 x x x}
∪ {d̄1 x d2 d3 x x d4 x x x} ∪ {d1 d2 x d3 x x d4 x x x},
|S1,5| = 80640, |S2,5| = 46080, |S3,5| = 41472, |S4,5| = 46080, |S5,5| = 80640, |S5| = 294912.

Theorem 5.3 The probability to obtain a word w ∈ Sj,n+1 is a_j^2(m, k) for all 1 ≤ j ≤ n + 1.

Proof Given (m, k), let m' = (m1, ..., mn) and k' = (k1, ..., kn−1). Then from (32), we get

a_j^2(m, k) = a_j^2(m', k') · (mj − kn + n − j + 1) / (mj − mn+1 + n − j + 1),

for all 1 ≤ j ≤ n. This result allows us to prove the theorem by induction on n ≥ 1. When n = 1, we have only one urn B1 with m1 − k1 + 1 balls of color c1 and k1 − m2 balls of color d1. Thus, the probability to obtain a word in S1,2 is

(m1 − k1 + 1) / (m1 − m2 + 1) = a_1^2(m, k),

where m = (m1, m2) and k = (k1). Similarly, the probability to obtain a word in S2,2 is

(k1 − m2) / (m1 − m2 + 1) = a_2^2(m, k).

Thus, the theorem holds for n = 1. Now assume that the theorem is true for n ≥ 1. If 1 ≤ j ≤ n, we have

Sj,n+1 = {wj,n+1 = wj,n x ··· x d̄n x ··· x ∈ Sn+1 : wj,n ∈ Sj,n},

where the number of x's to the right of d̄n is j − 1. Thus, the probability to obtain a word w ∈ Sj,n+1 is equal to a_j^2(m', k') times the probability to obtain the symbol d̄n from the urn Bj,n. Now we recall the composition of urn Bj,n. By definition

Bj,n = Bj ∪ Bj+1 ∪ ··· ∪ Bn,

the total number of balls is |Bj,n| = mj − mn+1 + n − j + 1, and the number of balls of color dn is kn − mn+1. Therefore, the probability to obtain the symbol d̄n from urn Bj,n is

(mj − kn + n − j + 1) / (mj − mn+1 + n − j + 1).

Hence, the probability to obtain a word w ∈ Sj,n+1 is

a_j^2(m', k') · (mj − kn + n − j + 1) / (mj − mn+1 + n − j + 1) = a_j^2(m, k),

which establishes the theorem for all 1 ≤ j ≤ n. Since Σ_{1≤j≤n+1} a_j^2(m, k) = 1 (see (6)) and Sn+1 = ∪_{1≤j≤n+1} Sj,n+1 is a partition of Sn+1, it follows that the statement of the theorem is also true for j = n + 1. □
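The one-step reduction (m, k) → (m', k') used at the start of the proof can be confirmed numerically from the closed formula (32) (hypothetical helper a transcribing (32); exact rational arithmetic):

```python
from fractions import Fraction as F
from math import prod

def a(j, m, k):
    """a_j^2(m, k) as in (32); len(m) = n + 1, len(k) = n, j is 1-based."""
    n = len(k)
    num = prod(k[i - 1] - m[j - 1] - i + j - 1 for i in range(1, n + 1))
    den = prod(m[i - 1] - m[j - 1] - i + j for i in range(1, n + 2) if i != j)
    return F(num, den)

# Check a_j^2(m, k) = a_j^2(m', k') (mj - kn + n - j + 1)/(mj - mn+1 + n - j + 1)
# for 1 <= j <= n, where m' and k' drop the last entries of m and k.
m, k = (9, 7, 5, 3, 1), (8, 6, 4, 2)
n = len(k)
mp, kp = m[:-1], k[:-1]
for j in range(1, n + 1):
    ratio = F(m[j - 1] - k[n - 1] + n - j + 1,
              m[j - 1] - m[n] + n - j + 1)
    assert a(j, m, k) == a(j, mp, kp) * ratio
```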

Since we return the chosen ball at the end of each individual experiment to its original urn, we have that the state of the system has not yet changed. This is about to happen now.

The outcome of a complete experiment produces a word that belongs to one of the subsets Sj,n+1 in the partition of the sample space Sn+1. Depending on which subset turns up, we take a different action, thus obtaining a random walk in the space of configurations m = (m1, ..., mn+1) which satisfy the constraints m1 ≥ k1 ≥ ··· ≥ mn ≥ kn ≥ mn+1 imposed by the fixed n-tuple k = (k1, ..., kn). This simple process will give for each configuration m a total of at most n + 1 possible nearest neighbors to which we can jump in one transition.

A result belonging to the subset Sj,n+1, j = 1, ..., n + 1, will lead to a transition to a new state m + ej, where mj is increased by one. In terms of balls, this will be achieved by removing one ball of color dj−1 from each urn containing balls of color dj−1, and adding one ball of color cj from the bath to each urn containing balls of color cj.

Fig. 3 m = (8, 5, 1), k = (6, 3)

It should be noticed that all these transitions keep the values of k1, ..., kn unchanged, and any transition that would violate the constraints does not occur because the corresponding probability a_j^2(m, k) vanishes.

6 A Young diagram model for U(3)

To each configuration m1 ≥ k1 ≥ m2 ≥ ··· ≥ mn ≥ kn ≥ mn+1 ≥ 0, we associate its Young diagram, which has m1 boxes in the first row, k1 boxes in the second row, and so on down to the last row, which has mn+1 boxes (Fig. 3).

We will construct a stochastic process whereby in one unit of time, one of the mi is increased by one with probability a_i^2(m, k), see (5). As in Sect. 5, this will require running some auxiliary experiments.

We start with the case n = 1. We perform the following experiment to decide if we will increase m1 or m2: we choose to insert a box among one of the m1 − k1 last boxes of the first row or to delete a box from the k1 − m2 last boxes of the second row. An insertion can occur either to the left or to the right of one of the m1 − k1 last boxes. We observe that there are m1 − k1 + 1 possibilities of an insertion and k1 − m2 possibilities of a deletion. All these are assigned the same probability.

As an output of the experiment, we get either a diagram with m1 + 1 boxes in the first row, or a diagram with k1 − 1 boxes in the second row. Here, we are implicitly assuming that k1 > m2. If k1 were equal to m2, we would get no Young diagram. Thus, the sample space S of our auxiliary experiment consists of two (or one) Young diagrams which are obtained from the original one by adding one box to its first row or deleting one from its second row. Let S1 be the subset of S consisting of the diagram with one more box in the first row, and let S2 be the subset of S consisting of the diagram with one less box in the second row (or the empty set). Then the probability to obtain a diagram in S1 after the experiment is performed is

(m1 − k1 + 1) / (m1 − m2 + 1) = a_1^2(m, k).

Similarly, the probability to obtain a diagram in S2 is

(k1 − m2) / (m1 − m2 + 1) = a_2^2(m, k),

as we wished. In the first case, we go from the state (m, k) to (m + e1, k), and in the second case, we go from the state (m, k) to (m + e2, k).

Now let us assume that n = 2. In this case, we will perform three consecutive auxiliary experiments. The first experiment consists of inserting a box among one of the m1 − k1 last boxes of the first row or of deleting a box from the k1 − m2 last boxes of the second row.

Fig. 4 D + e1: m = (9, 5, 1), k = (6, 3)

Fig. 5 D − e2: m = (8, 5, 1), k = (5, 3)

The second experiment consists of inserting a box among one of the m2 − k2 last boxes of the third row or of deleting a box from the k2 − m3 last boxes of the fourth row. Finally, the third experiment consists of inserting or deleting a box in one of the first four rows of the diagram, as we did in the previous experiments; odd rows go along with insertion and even rows with deletion. If k1 > m2 and k2 > m3, the complete experiment gives rise to a triple (D1, D2, D3) of Young diagrams: D1 is obtained from the original one by adding one box to its first row or by deleting one box from the second row, D2 is obtained from the original one by adding one box to its third row or by deleting one box from the fourth row, and D3 is obtained by adding one box to the first or to the third row of the original diagram or by deleting one box from the second or the fourth row.

In what follows, we use the following notation: D denotes the Young diagram corresponding to the original configuration (m, k), and D' = D ± ej denotes, respectively, the diagram obtained from D by adding or deleting one box in the j-th row of D, j = 1, 2, 3, 4. Observe that the sample space consists of all triples of Young diagrams (D1, D2, D3) with D1 = D + e1 or D − e2, D2 = D + e3 or D − e4, and D3 = D + e1, D − e2, D + e3 or D − e4 (Figs. 4, 5).

Thus, our sample space S3 has generically 2 × 2 × 4 = 16 elements. The cardinality of S3 can be smaller: for example, if k1 = m2 and k2 > m3, then |S3| = 6.

Let us partition the sample space S3 into the following three classes.

S1,3 = {(D1, D2, D3) : D1 = D + e1; D2 = D + e3, D − e4; D3 = D + e1, D + e3, D − e2},
S2,3 = {(D1, D2, D3) : D1 = D − e2; D2 = D + e3; D3 = D + e1, D + e3, D − e2, D − e4},
S3,3 = {(D1, D2, D3) : D1 = D + e1; D2 = D + e3, D − e4; D3 = D − e4}
∪ {(D1, D2, D3) : D1 = D − e2; D2 = D − e4; D3 = D + e1, D − e2, D + e3, D − e4}.

We have |S1,3| = 6, |S2,3| = 4 and |S3,3| = 2 + 4 = 6. By simple inspection, we see that S3 is the disjoint union of S1,3, S2,3 and S3,3.

Then the probability to obtain a diagram in S1,3 after a complete experiment is performed is

(m1 − k1 + 1)(m1 − k2 + 2) / ((m1 − m2 + 1)(m1 − m3 + 2)) = a_1^2(m, k).

Similarly, the probability to obtain a diagram in S2,3 is

(k1 − m2)(m2 − k2 + 1) / ((m1 − m2 + 1)(m2 − m3 + 1)) = a_2^2(m, k).

Finally, the probability to obtain a diagram in S3,3 is

(m1 − k1 + 1)(k2 − m3) / ((m1 − m2 + 1)(m1 − m3 + 2)) + (k1 − m2)(k2 − m3) / ((m1 − m2 + 1)(m2 − m3 + 1))
= (k1 − m3 + 1)(k2 − m3) / ((m1 − m3 + 2)(m2 − m3 + 1)) = a_3^2(m, k),

as desired.

If k1 = m2 and k2 > m3, then |S1,3| = 4, S2,3 = ∅ and |S3,3| = 2. The probability to obtain a diagram in S1,3 is

(m1 − k2 + 2) / (m1 − m3 + 2) = a_1^2(m, k).

The probability to obtain a diagram in S2,3 is 0 = a_2^2(m, k), and the probability to obtain a diagram in S3,3 is

(k2 − m3) / (m1 − m3 + 2) = a_3^2(m, k),

as expected.

Now the state of our random walk is modified in one unit of time as follows: if the outcome of the complete experiment above belongs to Sj,3, then we go from (m, k) to (m + ej, k), j = 1, 2, 3. In terms of diagrams we move from D to D + e2j—1, j = 1, 2, 3.

7 A Young diagram model for every U(n + 1)

Given a Young diagram D corresponding to the original configuration (m, k), D' = D ± ej denotes, respectively, the diagram obtained from D by adding or deleting one box in the j-th row of D, j = 1, ..., 2n + 1. The stochastic process we are going to construct will have a transition mechanism determined by first performing a sequence of auxiliary experiments Ek,j, to be described now. We start by considering the following set of consecutive pairs of rows of the diagram D,

{(1, 2), (3, 4), ..., (2n − 1, 2n)}.

The experiment Ek,j, 1 ≤ k ≤ j ≤ n, consists of inserting at random a box in an odd row i among the last mi − ki boxes of such a row, or deleting at random a box in an even row i from the last ki − mi+1 boxes of such a row. The row i is also chosen at random in the set of consecutive rows

{2k − 1, 2k, ..., 2j}.

The sequence of experiments is obtained by ordering them by the lexicographic order Ek,j < Er,s if j < s, or j = s and r < k. Thus, our sequence is the following one:

E1,1, E2,2, E1,2, E3,3, E2,3, E1,3, ..., En,n, En−1,n, ..., E1,n.
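For illustration, this ordering is easy to generate mechanically (hypothetical helper experiment_order; note that within a fixed second index j the larger first index comes first):

```python
def experiment_order(n):
    """The experiments E_{k,j}, 1 <= k <= j <= n, in the order of the text:
    E_{k,j} < E_{r,s} iff j < s, or j = s and r < k."""
    return sorted(((k, j) for j in range(1, n + 1) for k in range(1, j + 1)),
                  key=lambda kj: (kj[1], -kj[0]))

assert experiment_order(2) == [(1, 1), (2, 2), (1, 2)]
assert experiment_order(3) == [(1, 1), (2, 2), (1, 2), (3, 3), (2, 3), (1, 3)]
```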

The symbol D ± ēi in the place corresponding to the experiment Ek,j of an n(n + 1)/2-tuple of diagrams will stand for any possible outcome of Ek,j except the diagram D ± ei, respectively, while an X in such a place stands for any outcome of Ek,j. For example, in the case n = 2 considered before, see (33), we can write

S1,3 = {(D − ē2, X, D − ē4)},
S2,3 = {(D − e2, D − ē4, X)},
S3,3 = {(D − ē2, X, D − e4)} ∪ {(D − e2, D − e4, X)}.

Now we have a convenient notation to define inductively, for n ≥ 2, a growth process similar to the one introduced in Sect. 5, to break up the outcomes of the sample space Sn+1 into sets Sj,n+1 (j = 1, ..., n + 1) starting from the partition of Sn into sets Sj,n (j = 1, ..., n). Let Dj,n denote any (n − 1)n/2-tuple in the set Sj,n; then we set

S1,n+1 = {D1,n+1 = (D1,n, X, ..., X, D − ē2n) ∈ Sn+1 : D1,n ∈ S1,n}.

It is to be observed that the number of diagrams in the n(n + 1)/2-tuple D1,n+1 to the right of the (n − 1)n/2-tuple D1,n is n. Similarly we define

S2,n+1 = {D2,n + 1 = (D2,n, X, ••• , X, D — ^2n, X) e Sn+1 : D2,n e S2,n}.

More generally for 1 < j < n, we let

Sj,n+1 = {Dj,n+1 = (Dj,n, X, ••• , X, D — ^2n, X, ••• , X) e Sn+1 : Dj,n e Sj,n}

where the number of X's to the right of D — ^2n is j — 1.

The definition of Sn+1,n+1 is (as in Sect. 5) more interesting, namely

Sn+1,n+1 = {Dn+1,n+1 = (D1,n, X, ..., X, D − e2n) ∈ Sn+1 : D1,n ∈ S1,n}
∪ {Dn+1,n+1 = (D2,n, X, ..., X, D − e2n, X) ∈ Sn+1 : D2,n ∈ S2,n}
∪ ··· ∪ {Dn+1,n+1 = (Dn,n, D − e2n, X, ..., X) ∈ Sn+1 : Dn,n ∈ Sn,n}.

Now by induction on n ≥ 2, it is easy to prove that {Sj,n+1 : 1 ≤ j ≤ n + 1} is a partition of Sn+1. Also by induction on n ≥ 2 it is possible, as we did to establish Theorem 5.3, to prove the following main result.

Theorem 7.1 The probability to obtain an n(n + 1)/2-tuple of diagrams Dj,n+1 ∈ Sj,n+1 is a_j^2(m, k) (see (32)) for all 1 ≤ j ≤ n + 1.

The outcome of a complete experiment produces an n(n + 1)/2-tuple of Young diagrams that belongs to one of the partition subsets Sj,n+1 of the sample space Sn+1. Depending on which subset turns up, we take a different action, thus obtaining a random walk in the space of configurations m = (m1, ..., mn+1) which satisfy the constraints

m1 ≥ k1 ≥ ··· ≥ mn ≥ kn ≥ mn+1 ≥ 0,

imposed by the fixed n-tuple k = (k1, ..., kn). This simple process will give for each configuration m a total of at most n + 1 possible nearest neighbors to which we can jump in one transition.

A result belonging to the subset Sj,n+1, j = 1,...,n + 1 will lead to a transition to a new state m + ej, where mj is increased by one.

It should be noticed that all these transitions keep the values of k1, ..., kn unchanged, and any transition that would violate the constraints does not occur because the corresponding probability a_j^2(m, k) vanishes.

Acknowledgments F. A. Grünbaum was supported in part by the Applied Math. Sciences subprogram of the Office of Energy Research, USDOE, under Contract DE-AC03-76SF00098. I. Pacharoni and J. Tirao were partially supported by CONICET, PIP 112-200801-01533.

References

1. Dette, H., Reuther, B., Studden, W., Zygmunt, M.: Matrix measures and random walks with a block tridiagonal transition matrix. SIAM J. Matrix Anal. Appl. 29(1), 117-142 (2006)

2. Feller, W.: An Introduction to Probability Theory and its Applications. 3rd edn. Wiley, New York (1967)

3. Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued spherical functions associated to the complex projective plane. J. Funct. Anal. 188, 350-441 (2002)

4. Grünbaum, F.A., Pacharoni, I., Tirao, J.: A matrix valued solution to Bochner's problem. J. Phys. A Math. Gen. 34, 10647-10656 (2001)

5. Grünbaum, F.A., Pacharoni, I., Tirao, J.: Spherical functions associated to the three dimensional hyperbolic space. Int. J. Math. 13(7), 727-784 (2002)

6. Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued orthogonal polynomials of the Jacobi type: the role of group representation theory. Ann. Inst. Fourier Grenoble 55(6), 2051-2068 (2005)

7. Grünbaum, F.A.: Random walks and orthogonal polynomials: some challenges. In: Pinsky, M., Birnir, B. (eds.) Probability, Geometry and Integrable systems. MSRI Publication vol. 55, pp. 241-260. See also arXiv math PR/0703375 (2007)

8. Kac, M.: Random walk and the theory of Brownian motion. Am. Math. Mon. 54, 369-391 (1947)

9. Karlin, S., McGregor, J.: Random walks. Illinois J. Math. 3, 66-81 (1959)

10. Pacharoni, I.: Three term recursion relation for spherical functions associated to the complex projective space. Preprint (2010)

11. Pacharoni, I., Tirao, J.: Matrix valued orthogonal polynomials arising from the complex projective space. Constr. Approx. 25, 177-192 (2007)

12. Pacharoni, I., Tirao, J.: Three term recursion relation for spherical functions associated to the complex projective plane. Math. Phys. Anal. Geom. 7, 193-221 (2004)

13. Pacharoni, I., Tirao, J.: Matrix valued spherical functions associated to the complex projective space. Preprint (2010)

14. Pacharoni, I., Tirao, J.: Three term recursion relation for spherical functions associated to the complex hyperbolic plane. J. Lie Theory 17, 791-828 (2007)

15. Pacharoni, I., Tirao, J.: One step spherical functions of the pair (SU(n + 1), U(n)). Progress in Mathematics, Birkhäuser Verlag (2011, to appear)

16. Stanton, D.: Orthogonal polynomials and Chevalley groups. In: Askey, R., Koornwinder, T., Schempp, W. (eds.) Special Functions: Group Theoretical Aspects and Applications, D. Reidel Pub. Co., Dordrecht (1984)

17. Tirao, J.: Spherical functions. Rev. Unión Mat. Argent. 28, 75-98 (1977)

18. Tirao, J.A.: The matrix valued hypergeometric equation. Proc. Nat. Acad. Sci. USA 100(14), 8138-8141 (2003)

19. Vilenkin, N.J.: Special functions and the theory of group representations. Translations of Mathematical Monographs, vol. 22. American Mathematical Society, Providence (1968)

20. Vilenkin, N.J., Klimyk, A.U.: Representation of Lie Groups and Special Functions, Vol. 3. Kluwer Academic Publishers, Dordrecht (1992)