Matrix product and sum rule for Macdonald polynomials

We present a new, explicit sum formula for symmetric Macdonald polynomials $P_\lambda$ and show that they can be written as a trace over a product of (infinite dimensional) matrices. These matrices satisfy the Zamolodchikov--Faddeev (ZF) algebra. We construct solutions of the ZF algebra from a rank-reduced version of the Yang--Baxter algebra. As a corollary, we find that the normalization of the stationary measure of the multi-species asymmetric exclusion process is a Macdonald polynomial with all variables set equal to one.


Introduction
Symmetric Macdonald polynomials [16,17] are a family of multivariable orthogonal polynomials indexed by partitions, whose coefficients depend rationally on two parameters $q$ and $t$. In the case $q = t$ they degenerate to the celebrated Schur polynomials, which are central in the representation theory of both the general linear and symmetric groups. Let $m_\lambda$ denote the monomial symmetric polynomial indexed by a partition $\lambda$, i.e. the symmetric polynomial defined as the sum of all monomials $x^\mu = x_1^{\mu_1} \cdots x_n^{\mu_n}$, where $\mu$ ranges over all distinct permutations of $\lambda = (\lambda_1, \dots, \lambda_n)$. The Macdonald polynomials are defined as follows:

Definition 1 Let $\langle \cdot, \cdot \rangle$ denote the Macdonald inner product on power sum symmetric functions ([17], Chapter VI, Equation (1.5)), and let $<$ denote the dominance order on partitions ([17], Chapter I, Section 1). The Macdonald polynomial $P_\lambda(x_1, \dots, x_n; q, t)$ is the unique homogeneous symmetric polynomial in $(x_1, \dots, x_n)$ which satisfies
$$\langle P_\lambda, P_\mu \rangle = 0, \qquad \lambda \neq \mu,$$
$$P_\lambda(x_1, \dots, x_n; q, t) = m_\lambda(x_1, \dots, x_n) + \sum_{\mu < \lambda} c_{\lambda\mu}(q, t)\, m_\mu(x_1, \dots, x_n),$$
i.e. the coefficients $c_{\lambda\mu}(q, t)$ of the lower-order terms are completely determined by the orthogonality conditions.
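For concreteness, the definition of $m_\lambda$ can be spelled out directly; a minimal sketch in Python (evaluating at a numeric point; the helper name `m` is our own):

```python
from itertools import permutations

def m(lam, xs):
    """Monomial symmetric polynomial m_lambda evaluated at the point xs:
    the sum of x^mu over all distinct permutations mu of lambda."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))   # pad with zero parts
    total = 0
    for mu in set(permutations(lam)):          # distinct rearrangements only
        term = 1
        for x, e in zip(xs, mu):
            term *= x**e
        total += term
    return total

# m_(2,1)(2, 3) = 2^2*3 + 2*3^2 = 30
assert m((2, 1), [2, 3]) == 30
```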
Up to normalization, Macdonald polynomials can alternatively be defined as the unique eigenfunctions of certain linear difference operators acting on the space of all symmetric polynomials [17]. They can also be expressed combinatorially as multivariable generating functions [8,9,21], or via symmetrization of non-symmetric Macdonald polynomials that are computed from Yang-Baxter graphs [14,15].
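For $n = 2$ the eigenfunction characterization can be tested symbolically. The following sketch (using sympy, with Macdonald's first $q$-difference operator $D_1$ written out for two variables) recovers the coefficient of $m_{(2,2)}$ in $P_{(3,1)}$; the script is only an illustration of the eigenvalue property, not part of the constructions of this note:

```python
import sympy as sp

# Recover P_{(3,1)}(x1, x2; q, t) = m_(3,1) + c * m_(2,2) from the eigenvalue
# equation D1 P = (q^3 t + q) P, where the eigenvalue is sum_i q^{lambda_i} t^{n-i}.
x1, x2, q, t, c = sp.symbols('x1 x2 q t c')

m31 = x1**3*x2 + x1*x2**3          # monomial symmetric polynomial m_(3,1)
m22 = x1**2*x2**2                  # m_(2,2)
P = m31 + c*m22

def D1(f):
    # D1 = sum_i prod_{j != i} (t x_i - x_j)/(x_i - x_j) * (x_i -> q x_i)
    A1 = (t*x1 - x2)/(x1 - x2)
    A2 = (t*x2 - x1)/(x2 - x1)
    return A1*f.subs(x1, q*x1) + A2*f.subs(x2, q*x2)

eigenvalue = q**3*t + q
sol = sp.solve(sp.simplify(D1(P) - eigenvalue*P), c)[0]

# The classical expansion coefficient for P_(3,1) in two variables:
assert sp.simplify(sol - (1 + q)*(1 - t)/(1 - q*t)) == 0
```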
The purpose of this article is to report on an explicit matrix product formula for Macdonald polynomials [2] inspired by recent results on the multi-species asymmetric exclusion process, and to provide an explicit sum rule for calculating Macdonald polynomials resulting from the matrix product formula [6].
In the following we need the polynomial representation of the Hecke algebra of type $A_{n-1}$, with generators $T_i$ given by
$$T_i = t - \frac{t x_i - x_{i+1}}{x_i - x_{i+1}}\,(1 - s_i), \qquad 1 \leq i \leq n-1, \qquad (1)$$
where $s_i$ is the transposition operator with action $s_i f(\dots, x_i, x_{i+1}, \dots) = f(\dots, x_{i+1}, x_i, \dots)$ on functions in $(x_1, \dots, x_n)$. It can be verified that the operators (1) indeed give a faithful representation of the Hecke algebra:
$$(T_i - t)(T_i + 1) = 0, \qquad T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}, \qquad T_i T_j = T_j T_i \quad (|i - j| > 1).$$
In view of the relations for the generators, we can define $T_\sigma$ unambiguously as any product of simple transpositions $T_i$ which gives the permutation $\sigma$.
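These relations can be verified symbolically; a minimal sketch using sympy in $n = 3$ variables, assuming the Demazure-Lusztig form $T_i = t - \frac{t x_i - x_{i+1}}{x_i - x_{i+1}}(1 - s_i)$ for the generators (the test polynomial is an arbitrary choice):

```python
import sympy as sp

# Symbolic check of the Hecke relations for the operators T_i, n = 3 variables.
x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
xs = [x1, x2, x3]

def s(i, f):
    """Transposition s_i: swap x_i and x_{i+1} (1-indexed)."""
    a, b = xs[i - 1], xs[i]
    return f.subs({a: b, b: a}, simultaneous=True)

def T(i, f):
    """T_i = t - (t x_i - x_{i+1})/(x_i - x_{i+1}) * (1 - s_i)."""
    a, b = xs[i - 1], xs[i]
    return sp.cancel(t*f - (t*a - b)/(a - b)*(f - s(i, f)))

f = x1**2 * x2 + x3   # arbitrary test polynomial

# quadratic relation: (T_i - t)(T_i + 1) = 0, i.e. T_i^2 = (t - 1) T_i + t
assert sp.simplify(T(1, T(1, f)) - (t - 1)*T(1, f) - t*f) == 0

# braid relation: T_1 T_2 T_1 = T_2 T_1 T_2
assert sp.simplify(T(1, T(2, T(1, f))) - T(2, T(1, T(2, f)))) == 0
```

Note that $T_i$ acts as multiplication by $t$ on any function symmetric in $x_i, x_{i+1}$, since $(1 - s_i)$ then annihilates it.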
As explained in [2], the polynomials $\{f_\mu\}$ are related to the non-symmetric Macdonald polynomials [3,4,19] via an invertible triangular change of basis, and hence form a basis for the ring of polynomials in $n$ variables. Specializations of these polynomials at $q = x_1 = \dots = x_n = 1$ give stationary particle configuration probabilities of the multi-species asymmetric exclusion process on a ring.
Summing over all $\{f_{\sigma \cdot \lambda}\}_{\sigma \in S_n}$ results in a symmetric Macdonald polynomial [18]; this is the content of Lemma 1. Suppose now that we have (semi-infinite) matrices $A_0(x), A_1(x), \dots, A_r(x)$ and $S$ satisfying the exchange relations (2)-(4) for all $0 \leq i < j \leq r$. The main result of [2] is a matrix product formula for the polynomials $f_{\sigma \cdot \lambda}$:

Theorem 1 There is an explicit representation of $A_0(x), A_1(x), \dots, A_r(x)$ and $S$ with $r = \lambda_1$ satisfying (2), (3) and (4) such that $f_\mu$ can be written as a matrix product, i.e.
$$f_\mu(x_1, \dots, x_n) = \frac{1}{\Omega_\lambda}\, \mathrm{Tr}\big[ A_{\mu_1}(x_1) \cdots A_{\mu_n}(x_n)\, S \big],$$
where $\mu$ is a permutation of $\lambda$ and $\Omega_\lambda$ is a normalization factor which only depends on the partition $\lambda$.

Corollary 1
It follows from Lemma 1 that the symmetric Macdonald polynomial $P_\lambda$ can be expressed as a sum over matrix product formulas. The specialization of $P_\lambda$ at $q = 1$ and $x_i = 1$ ($i = 1, \dots, n$) is the normalization of the stationary state of a multi-species asymmetric exclusion process on a ring.
As a consequence of Theorem 1 we derive an explicit sum formula for Macdonald polynomials [6]. To formulate this result we need some notation. Let $\lambda$ be a partition whose largest part is $\lambda_1 = r$. For all $0 \leq k \leq r$, we define a partition $\lambda[k]$ by replacing all parts in $\lambda$ of size $\leq k$ with $0$. For example, for $\lambda = (3, 3, 2, 1, 1, 0)$ we have
$$\lambda[0] = (3, 3, 2, 1, 1, 0), \quad \lambda[1] = (3, 3, 2, 0, 0, 0), \quad \lambda[2] = (3, 3, 0, 0, 0, 0), \quad \lambda[3] = (0, 0, 0, 0, 0, 0).$$
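The truncation $\lambda \mapsto \lambda[k]$ is a one-line operation; a sketch (the function name `truncate` is ours):

```python
def truncate(lam, k):
    """lambda[k]: replace all parts of lam of size <= k with 0."""
    return tuple(p if p > k else 0 for p in lam)

lam = (3, 3, 2, 1, 1, 0)
assert truncate(lam, 0) == (3, 3, 2, 1, 1, 0)
assert truncate(lam, 1) == (3, 3, 2, 0, 0, 0)
assert truncate(lam, 2) == (3, 3, 0, 0, 0, 0)
assert truncate(lam, 3) == (0, 0, 0, 0, 0, 0)
```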
Theorem 2 ([6]) The Macdonald polynomial $P_\lambda$ can be written in the form (5), as a multiple sum over the partitions $\lambda[k]$, with coefficients (i) that satisfy $C_i(\lambda, \mu) = 0$ if $0 < \lambda_k < \mu_k$ for any $k$, and that are otherwise given by an explicit factorized expression [6].

We point out that the formula given in (5) has many structural features in common with the work of Kirillov and Noumi [11,12]. In these papers the authors construct families of raising operators, which act on Macdonald polynomials by adding columns to the indexing Young diagram. In [11] the raising operators have a form analogous to the Macdonald q-difference operators, while in [12] they are constructed in terms of generators of the affine Hecke algebra. In both papers the Macdonald polynomial is obtained by the successive action of such raising operators on the initial state 1. It would be very interesting to find a precise connection between the results of [11,12] and our formula (5), if one exists.
Two variable example. Let us demonstrate (5) in the case $\lambda = (3, 1)$. The formula consists of nested sums, which we compute in turn, starting with the rightmost. Combining the rightmost sum with the middle one, we use the facts that $C_1((3,1), (3,0)) = 1$ and $C_1((3,1), (0,3)) = 0$. Combining everything and passing to the leftmost sum then yields $P_{(3,1)}(x_1, x_2; q, t)$.

(i) We will use three notations for the coefficients $C_i(\lambda, \mu)$ interchangeably.

Yang-Baxter and Zamolodchikov-Faddeev algebras
In the remainder of this note we sketch the proof of Theorem 1. It is relatively simple to show that if the functions $f_\mu$ defined in Definition 2, with $\mu$ a permutation of a partition $\lambda$, can be written as a matrix product as in Theorem 1, then the matrices must satisfy the exchange relations (2)-(4). These can be conveniently rewritten, as follows.
As before, let $r = \lambda_1$ be the largest part of $\lambda$; we call $r$ the rank. It was shown in [22] for $r = 1$, and for general $r$ in [5] (see also [2]), that (2)-(4) are equivalent to the Zamolodchikov-Faddeev (ZF) algebra [24,7]
$$\check{R}(x, y)\, \big( A(x) \otimes A(y) \big) = A(y) \otimes A(x), \qquad (6)$$
where $\check{R}$ is a twisted version of the $U_{t^{1/2}}(sl_{r+1})$ $R$-matrix and $A = A^{(r)}(x)$ is an $(r+1)$-dimensional operator-valued column vector. Let furthermore $E^{(ij)}$ denote the elementary $(r+1) \times (r+1)$ matrix with a single non-zero entry $1$ at position $(i, j)$. Then equation (4) can be rewritten as a twist relation between $S$ and $A(x)$, where the rank $r$ is again implicit, i.e. $A = A^{(r)}(x)$ and $S = S^{(r)}$; the $R$-matrix is given explicitly in [2].

Example of a rank 1 solution to the ZF algebra. We give a simple explicit example for the case $r = 1$.

Using the explicit entries of the $R$-matrix, equation (6) for $A(x) = \begin{pmatrix} 1 \\ x \end{pmatrix}$ and $r = 1$ becomes a set of scalar identities which are easily verified directly.

General rank
For general rank $r$, solutions to (6) are more difficult to find, but can be recovered from the Yang-Baxter algebra
$$\check{R}(x, y)\, \big( L(x) \otimes L(y) \big) = \big( L(y) \otimes L(x) \big)\, \check{R}(x, y), \qquad (8)$$
where $L(x) = L^{(r)}(x)$ is an $(r+1) \times (r+1)$ operator-valued matrix. The algebra (8) is well studied and many solutions for $L(x)$ are known. For the application to Macdonald polynomials, the elements of $L(x)$ are given in terms of generators $\{k, \phi, \phi^{\dagger}\}$ of the t-boson algebra
$$\phi\, \phi^{\dagger} = 1 - t\, k, \qquad \phi^{\dagger} \phi = 1 - k, \qquad k\, \phi^{\dagger} = t\, \phi^{\dagger} k, \qquad \phi\, k = t\, k\, \phi. \qquad (9)$$
We can construct solutions of (6) by rank-reducing the Yang-Baxter algebra (8) in the following way. Assume a solution of the modified Yang-Baxter algebra
$$\check{R}^{(r)}(x, y)\, \big( \hat{L}(x) \otimes \hat{L}(y) \big) = \big( \hat{L}(y) \otimes \hat{L}(x) \big)\, \check{R}^{(r-1)}(x, y), \qquad (10)$$
in terms of an $(r+1) \times r$ operator-valued matrix $\hat{L}(x) = \hat{L}^{(r)}(x)$, where the rank of the $R$-matrix on the right hand side is one lower than that on the left hand side. Assume also an operator $s = s^{(r)}$ satisfying the corresponding reduced version of the twist relation. Then the product $\hat{L}^{(r)}(x)\, \hat{L}^{(r-1)}(x) \cdots \hat{L}^{(1)}(x)$ gives a solution to (6), provided that the operator entries of $\hat{L}^{(a)}(x)$ commute with those of $\hat{L}^{(b)}(y)$ for all $a \neq b$. The usual way to ensure this commutativity is to demand that the entries of $\hat{L}^{(a)}$ act on some vector space $V_a$ while those of $\hat{L}^{(b)}$ act on a different vector space $V_b$, and indeed we shall adopt this approach.
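The t-boson relations can be checked numerically on truncated semi-infinite matrices; a sketch assuming the standard Fock representation $\phi^{\dagger}|m\rangle = |m+1\rangle$, $\phi|m\rangle = (1-t^m)|m-1\rangle$, $k|m\rangle = t^m|m\rangle$ (the relation for $\phi\,\phi^{\dagger}$ only holds away from the truncation boundary):

```python
import numpy as np

# Truncated (N x N) matrices for the t-boson generators in the Fock basis.
N, t = 8, 0.37
k    = np.diag([t**m for m in range(N)])
phid = np.diag(np.ones(N - 1), -1)                    # phi^dagger (raising)
phi  = np.diag([1 - t**m for m in range(1, N)], 1)    # phi (lowering)

I = np.eye(N)
B = np.s_[:N - 1, :N - 1]   # interior block, away from the truncation boundary

assert np.allclose((phi @ phid)[B], (I - t*k)[B])   # phi phi^+ = 1 - t k
assert np.allclose(phid @ phi, I - k)               # phi^+ phi = 1 - k
assert np.allclose(k @ phid, t*(phid @ k))          # k phi^+   = t phi^+ k
assert np.allclose(phi @ k, t*(k @ phi))            # phi k     = t k phi
```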
Rank 1 example continued. The solution to the Yang-Baxter algebra (8) for rank $r = 1$ is a $2 \times 2$ matrix $L^{(1)}(x)$ whose entries are built from operators $\phi$, $\phi^{\dagger}$ and $k$ satisfying the t-boson relations (9). We note that trivialising the t-boson by sending $\phi^{\dagger}, \phi \to 1$ and $k \to 0$ reduces the rank, and thus produces the solution $A^{(1)}(x) = \hat{L}^{(1)}(x)$ given in (7).

Rank 2 solution to the ZF algebra. The rank 2 case gives rise to operator-valued solutions for $A^{(2)}(x)$. The associated rank 2 solution to the Yang-Baxter algebra is a $3 \times 3$ matrix $L^{(2)}(x)$ whose entries involve $\{\phi_1, \phi^{\dagger}_1, k_1\}$ and $\{\phi_2, \phi^{\dagger}_2, k_2\}$, two commuting copies of the t-boson algebra (9). The map $\phi^{\dagger}_1, \phi_1 \to 1$ and $k_1 \to 0$ reduces the rank of $L^{(2)}(x)$ by one; the indices of the t-bosons are redundant in the resulting matrix, since we no longer need to distinguish between the two copies of the algebra.
Indeed, we find that (10) is satisfied. We thus construct a solution of the ZF algebra as the product of the reduced matrices, which for $x = 1$ is the matrix product solution for the stationary state of the two-species ASEP.
General solution. For completeness we include the general solution, which does not look illuminating in algebraic form. A natural and intuitive combinatorial description of this solution is given in [2], but due to lack of space we are not able to present it here. Assume $\lambda \subseteq r^n$ and introduce the following family of $(r - s + 2) \times (r - s + 1)$ operator-valued matrices $L^{(s)}(x)$, $1 \leq s \leq r$. We index rows by $i \in \{0, s, \dots, r\}$ and columns by $j \in \{0, s + 1, \dots, r\}$ (ii), and take the entries of $L^{(s)}(x)$ as given in [2]. This general rank solution for $x = 1$ in terms of t-bosons was recently obtained in [20,1]. A generalisation of these results that includes a spectral parameter was found earlier in [10], and independently in the case of super-algebras in [23]. We also introduce a twist operator $S$, for which we perform the reparametrization $q = t^u$. The operators $\{k, \phi, \phi^{\dagger}\}$ are generators of the t-boson algebra (9), with subscripts used to denote commuting copies of the algebra. Note that all operators in $L^{(s)}(x)$ implicitly also carry an index $(s)$, as we adopt the convention that operators in $L^{(s)}(x)$ and $L^{(s')}(x)$ (as well as $S^{(s)}$ and $S^{(s')}$) commute for $s \neq s'$.

A polynomial example
We look at an explicit example for rank 2, taking $\delta = (0, 0, 1, 1, 2, 2)$. In this case $f_\delta$ corresponds to the nonsymmetric Macdonald polynomial $E_\delta$ [2], which, using the notation $q = t^u$, is given by an explicit polynomial expression (12). We now verify the matrix product form for this explicit solution. From (11) we can read off the matrices $A_i(x)$, and using (4) we note that $S$ should satisfy
$$S\, \phi^{\dagger} = q\, \phi^{\dagger} S, \qquad S\, \phi = q^{-1} \phi\, S.$$
An explicit representation for the t-bosons in terms of semi-infinite matrices is given by
$$\phi^{\dagger} |m\rangle = |m + 1\rangle, \qquad \phi\, |m\rangle = (1 - t^m)\, |m - 1\rangle, \qquad k\, |m\rangle = t^m |m\rangle, \qquad m \geq 0, \qquad (13)$$
and $S$ has the form $S = k^u = \mathrm{diag}\{1, t^u, t^{2u}, \dots\} = \mathrm{diag}\{1, q, q^2, \dots\}$.

(ii) This unusual indexing of the entries of the matrix is the most convenient for our purposes, since we ultimately want to identify these matrix elements with the partitions $\lambda[s]$ introduced in Section 2.
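The exchange relation between $S$ and the creation operator can be confirmed on truncated matrices; a small numeric sketch, assuming $S = \mathrm{diag}\{1, q, q^2, \dots\}$ and $\phi^{\dagger}$ with ones on the subdiagonal:

```python
import numpy as np

# Check S phi^+ = q phi^+ S for S = diag(1, q, q^2, ...) and phi^+ the
# semi-infinite raising matrix, truncated to N x N.
N, q = 7, 0.45
S = np.diag([q**m for m in range(N)])
phid = np.diag(np.ones(N - 1), -1)

assert np.allclose(S @ phid, q * (phid @ S))   # exact, even after truncation
```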
Up to a normalization, the nonsymmetric Macdonald polynomial $E_\delta$ is now represented in matrix product form as a trace, where all other terms, involving unequal powers of $\phi$ and $\phi^{\dagger}$, have zero trace. Normalising with $\Omega_{\delta^{+}} = \Omega_{(2,2,1,1,0,0)} = \mathrm{Tr}(k^2 S)$ we finally obtain an expression which can be shown to equal (12) using the explicit representation (13).
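Since $k$ and $S$ are both diagonal, the normalization $\mathrm{Tr}(k^2 S)$ is a geometric series, $\sum_{m \geq 0} t^{2m} q^m = 1/(1 - q t^2)$ for $|q t^2| < 1$; a quick numeric check:

```python
# Tr(k^2 S) for k = diag(t^m), S = diag(q^m): a geometric series in q t^2.
t, q, N = 0.3, 0.2, 200
trace = sum(t**(2*m) * q**m for m in range(N))   # truncated trace
assert abs(trace - 1.0 / (1.0 - q * t**2)) < 1e-12
```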

Conclusion
We have derived a new explicit formula for symmetric Macdonald polynomials using the matrix product formalism. Our main result is the matrix product formula of Theorem 1 for the distinguished basis $\{f_{\sigma \cdot \lambda}\}_{\sigma \in S_n}$ of the space of homogeneous polynomials of degree $|\lambda|$. In the specialisation $q = 1$ and $x_1 = x_2 = \dots = x_n = 1$ these give the (unnormalized) stationary probabilities for the multi-species asymmetric exclusion process. The limits $t \to 0$ (or $t \to \infty$) give the stationary probabilities for the totally asymmetric exclusion process; a recent derivation of the matrix product formula in that case was given in [13] using the tetrahedron equation for three-dimensional integrability. While there are several similarities with our approach, the resulting expressions for the matrices in terms of bosonic operators in [13] are different from those exhibited here. By Lemma 1, the matrix product formula for the polynomials $f_{\sigma \cdot \lambda}$ leads to the corresponding result for Macdonald polynomials. We mention that the normalization factor $\Omega_\lambda$ in Theorem 1 can be calculated explicitly in terms of $r$, the largest part of $\lambda$. As a nontrivial corollary we obtain a new summation formula for Macdonald polynomials, presented in Theorem 2, and we have given explicit examples of several of our constructions.