Completely monotone sequences

Wildon's Weblog 2020-12-20

A sequence \lambda_0, \lambda_1, \lambda_2, \ldots is monotone decreasing if \lambda_0 \ge \lambda_1 \ge \lambda_2 \ge \ldots, or equivalently, if \lambda_j - \lambda_{j+1} \ge 0 for all j \in \mathbb{N}_0. We will decide once and for all to deal with decreasing sequences only, and say that such a sequence is completely monotone if all its iterated differences, including the zeroth differences, are non-negative. That is, \lambda_j \ge 0, \lambda_j - \lambda_{j+1} \ge 0, \lambda_j - 2\lambda_{j+1} + \lambda_{j+2} \ge 0, \lambda_j - 3\lambda_{j+1} + 3\lambda_{j+2} - \lambda_{j+3} \ge 0 for all j \in \mathbb{N}_0, and so on. Equivalently,

\displaystyle \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_{j+i} \ge 0

for all j, k \in \mathbb{N}_0. One family of examples I stumbled across, in what seemed like a completely unrelated context of random walks defined on posets with involution, is the two-parameter family

\displaystyle \lambda^{(a,b)}_j = \frac{\binom{a+b}{a}(a+b+1)}{\binom{a+b+j}{b}(a+b+j+1)}.

For instance, \lambda^{(0,0)}_j = \frac{1}{j+1} for each j \in \mathbb{N}_0. For a direct proof that this sequence is completely monotone, we use the \beta-integral \int_0^1 x^a(1-x)^b \mathrm{d}x = \frac{1}{\binom{a+b}{a}(a+b+1)} to write

\displaystyle\begin{aligned}\sum_{i=0}^k& (-1)^i \binom{k}{i} \frac{\lambda^{(a,b)}_{j+i}}{\binom{a+b}{a}(a+b+1)} \\ &= \sum_{i=0}^k (-1)^i \binom{k}{i} \int_0^1 x^{a+j+i}(1-x)^b \mathrm{d}x \\ &= \int_0^1 x^{a+j}(1-x)^{b+k} \mathrm{d}x \\ &= \frac{1}{\binom{a+b+j+k}{b+k}(a+b+j+k+1)} \\ &= \frac{\lambda^{(a,b+k)}_j}{\binom{a+b+k}{a}(a+b+k+1)}.\end{aligned}
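Both the non-negativity of the iterated differences and the final identity are easy to check numerically in exact rational arithmetic. The short script below is a sanity check, not part of the proof; the function names are my own.

```python
from fractions import Fraction
from math import comb

def lam(a, b, j):
    # lambda^{(a,b)}_j = binom(a+b,a)(a+b+1) / (binom(a+b+j,b)(a+b+j+1))
    return Fraction(comb(a + b, a) * (a + b + 1),
                    comb(a + b + j, b) * (a + b + j + 1))

def iterated_difference(seq, k, j):
    # Delta^k lambda_j = sum_{i=0}^k (-1)^i binom(k,i) lambda_{j+i}
    return sum((-1) ** i * comb(k, i) * seq[j + i] for i in range(k + 1))

for a in range(3):
    for b in range(3):
        seq = [lam(a, b, j) for j in range(20)]
        scale = Fraction(comb(a + b, a) * (a + b + 1))
        for k in range(10):
            for j in range(10):
                d = iterated_difference(seq, k, j)
                # every iterated difference is non-negative ...
                assert d >= 0
                # ... and satisfies the identity derived above
                rhs = lam(a, b + k, j) / (comb(a + b + k, a) * (a + b + k + 1))
                assert d / scale == rhs
```

Running this for small a and b confirms, for instance, that the harmonic-type sequence \lambda^{(0,0)}_j = 1/(j+1) is completely monotone.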

In the same context, I came across some multivariable polynomials that I conjecture are always non-negative when evaluated at a completely monotone sequence. These polynomials include

\displaystyle \begin{aligned}f_1(x_0,x_1) &= x_0-x_1 \\ f_2(x_0,x_1,x_2) &= x_0^2 - 2x_0x_1 + 2x_0x_2 - x_1x_2 \\ f_3(x_0,x_1,x_2,x_3) &= x_0^3 - 3 x_0^2 x_1 + 5 x_0^2 x_2 - 3 x_0 x_1 x_2\\ & \quad - 3 x_0^2 x_3 + 5 x_0 x_1 x_3 - 3 x_0 x_2 x_3 + x_1 x_2 x_3. \end{aligned}

Non-negativity is obvious for f_1, since f_1 is itself a difference. For f_2 it follows from the identity

\displaystyle f_2(x_0,x_1,x_2) = x_0(x_0-2x_1+x_2) + (x_0-x_1)x_2.
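This identity, and the resulting non-negativity, can be spot-checked numerically. In the sketch below (names mine) I sample completely monotone triples by choosing the three top differences \lambda_2, \lambda_1 - \lambda_2 and \lambda_0 - 2\lambda_1 + \lambda_2 freely; all lower differences are then automatically non-negative.

```python
import random

def f2(x0, x1, x2):
    return x0 * x0 - 2 * x0 * x1 + 2 * x0 * x2 - x1 * x2

random.seed(1)
for _ in range(10_000):
    # build a completely monotone triple from freely chosen differences:
    # lambda_2 >= 0, lambda_1 - lambda_2 >= 0, lambda_0 - 2 lambda_1 + lambda_2 >= 0
    lam2, d1, d2 = (random.uniform(0, 1) for _ in range(3))
    lam1 = lam2 + d1
    lam0 = 2 * lam1 - lam2 + d2
    # the identity f_2 = x_0(x_0 - 2x_1 + x_2) + (x_0 - x_1)x_2 ...
    assert abs(f2(lam0, lam1, lam2)
               - (lam0 * (lam0 - 2 * lam1 + lam2) + (lam0 - lam1) * lam2)) < 1e-9
    # ... makes non-negativity evident (small slack for floating point)
    assert f2(lam0, lam1, lam2) >= -1e-12
```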

Despite much struggling, I was unable to find a similar expression for f_3 as a sum of products of positive differences. The main purpose of this post is to show that such expressions exist for linear functions, but not in general. I therefore may well have been on a wild-goose chase, hardly for the first time.

Linear polynomials

Fix n \in \mathbb{N}. Let \mathcal{C} \subseteq \mathbb{R}^n be the cone of all completely monotone sequences, as defined by the inequalities at the top of this post. Let \mathcal{D} \subseteq \mathbb{R}^n be the cone spanned by the coefficients in these inequalities.

To make these cones more explicit, the following notation is helpful. Let \Delta^k \lambda_j = \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_{j+i}. Then \Delta^{k-1} \lambda_j - \Delta^{k-1} \lambda_{j+1} = \Delta^{k} \lambda_j, and so \Delta^k \lambda_{n-1-k} + \Delta^{k-1} \lambda_{n-k} = \Delta^{k-1}\lambda_{n-1-k}. It follows inductively that \mathcal{D} is the set of non-negative linear combinations of the vectors v^{(k)} defined by

\displaystyle v^{(k)}_{n-1-k+i} = (-1)^i\binom{k}{i} \quad\text{for } 0 \le i \le k,

with all other entries zero; thus v^{(k)} is the coefficient vector of the inequality \Delta^k \lambda_{n-1-k} \ge 0. For instance, if n=5 then v^{(0)}, v^{(1)}, v^{(2)}, v^{(3)}, v^{(4)} are the rows of the 5 \times 5 matrix below:

\left( \begin{matrix} \cdot & \cdot & \cdot & \cdot & 1 \\ \cdot & \cdot & \cdot & 1 &  -1 \\ \cdot & \cdot & 1 & -2 & 1 \\ \cdot & 1 & -3 & 3 & -1 \\ 1 & -4 & 6 & -4 & 1 \end{matrix} \right)
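As a sanity check, a few lines of code reproduce this matrix directly from the inequalities \Delta^k \lambda_{n-1-k} \ge 0 (the helper name is mine).

```python
from math import comb

def v(k, n):
    # coefficient vector of the inequality Delta^k lambda_{n-1-k} >= 0:
    # (-1)^i binom(k,i) in position n-1-k+i for 0 <= i <= k, zeros elsewhere
    row = [0] * n
    for i in range(k + 1):
        row[n - 1 - k + i] = (-1) ** i * comb(k, i)
    return row

assert [v(k, 5) for k in range(5)] == [
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, -1],
    [0, 0, 1, -2, 1],
    [0, 1, -3, 3, -1],
    [1, -4, 6, -4, 1],
]
```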

Suppose that \sum_{i=0}^{n-1} a_i \lambda_i \ge 0 for all completely monotone sequences. That is, a \cdot \lambda \ge 0 for all \lambda \in \mathcal{C}; we write this as a \cdot \mathcal{C} \ge 0. By Farkas’ Lemma applied to the cone \mathcal{D}, either

  • a \in \mathcal{D}, or
  • there exists \lambda \in \mathbb{R}^n such that \lambda \cdot \mathcal{D} \ge 0 and a \cdot \lambda < 0.

In the second case, since \lambda \cdot \mathcal{D} \ge 0, the sequence \lambda is completely monotone. But then a \cdot \lambda < 0 contradicts the hypothesis on a. Therefore a \in \mathcal{D}. So we have a simple necessary and sufficient condition for a linear polynomial to be non-negative on all completely monotone sequences: this is the case if and only if its coefficient vector is a non-negative linear combination of the vectors v^{(0)}, v^{(1)}, \ldots, v^{(n-1)}.
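This criterion is effective: after reversing the order of the columns, the matrix with rows v^{(0)}, \ldots, v^{(n-1)} is unitriangular, so the expansion of a coefficient vector a in the v^{(k)} can be found by forward substitution, and a is non-negative on all completely monotone sequences if and only if every coefficient in the expansion is non-negative. A sketch (names mine):

```python
from fractions import Fraction
from math import comb

def v_entry(k, j, n):
    # entry j of v^{(k)}: the coefficient of lambda_j in Delta^k lambda_{n-1-k}
    i = j - (n - 1 - k)
    return (-1) ** i * comb(k, i) if 0 <= i <= k else 0

def cone_coefficients(a):
    # Solve a = sum_k c_k v^{(k)} by forward substitution: column j is first
    # touched by v^{(n-1-j)}, whose entry there is 1.  Then a lies in the cone
    # D if and only if every c_k is non-negative.
    n = len(a)
    c = [Fraction(0)] * n
    for j in range(n):
        k = n - 1 - j
        c[k] = Fraction(a[j]) - sum(c[m] * v_entry(m, j, n) for m in range(k + 1, n))
    return c

# x_0 - x_2 = 2(x_1 - x_2) + (x_0 - 2x_1 + x_2), so it is non-negative ...
assert cone_coefficients([1, 0, -1]) == [0, 2, 1]
# ... whereas x_1 - x_0 is not
assert min(cone_coefficients([-1, 1, 0])) < 0
```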

Quadratic example

I had hoped that something similar might work for quadratic and higher degree polynomials, based on analogously defined cones coming from the coefficients of products of the differences. Here is a counterexample to this hope.

Take n=3 and order monomials lexicographically x_0^2, x_0x_1, x_0x_2, x_1^2, x_1x_2, x_2^2. The homogeneous quadratics in the difference polynomials x_0, x_1,x_2, x_0-x_1,x_1-x_2,x_0-2x_1+x_2 span the same subspace as the rows of the matrix below.

\left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 1 & 0 & -2 & 1 \\ 0 & 1 & -1 & -2 & 3 & -1 \\ 1 & -4 & 2 & 4 & -4 & 1 \end{matrix}\right)

For instance, the penultimate row has the coefficients for (\lambda_0-2\lambda_1+\lambda_2)(\lambda_1-\lambda_2). Define g(x_0,x_1,x_2) = (x_0-x_1)(x_0-x_1-x_2)+x_1^2.

Claim. g(\lambda_0, \lambda_1, \lambda_2) \ge 0 for all completely monotone sequences (\lambda_0,\lambda_1,\lambda_2).

Proof. Since the polynomial is homogeneous, we may assume that \lambda_0 = 1; complete monotonicity then gives 0 \le \lambda_2 \le \lambda_1 \le 1. If \lambda_1 = 1 then g(1,1,\lambda_2) = 1 > 0, so we may also assume \lambda_1 < 1. Since g is linear in x_2, when evaluated at 1,\lambda_1,\lambda_2, it is negative if and only if

\displaystyle \lambda_2 > \frac{(1-\lambda_1)^2+\lambda_1^2}{1-\lambda_1} = \frac{1-2\lambda_1 + 2\lambda_1^2}{1-\lambda_1}.

This inequality implies that

\displaystyle \lambda_1 - \lambda_2 < \frac{-1 + 3\lambda_1 - 3\lambda_1^2}{1-\lambda_1},

but the quadratic 1 - 3y + 3y^2 = 3(y-\frac{1}{2})^2 + \frac{1}{4} is always at least \frac{1}{4}. Hence \lambda_1 - \lambda_2 < -\frac{1}{4(1-\lambda_1)} \le -\frac{1}{4}, contradicting complete monotonicity. \qquad\Box
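As a numerical sanity check on the claim (an experiment, not a proof), one can evaluate g at a large random sample of completely monotone triples, generated by choosing the top differences freely:

```python
import random

def g(x0, x1, x2):
    return (x0 - x1) * (x0 - x1 - x2) + x1 * x1

random.seed(0)
for _ in range(100_000):
    # completely monotone triple from freely chosen non-negative differences
    lam2, d1, d2 = (random.uniform(0, 1) for _ in range(3))
    lam1 = lam2 + d1
    lam0 = 2 * lam1 - lam2 + d2
    # non-negative up to floating-point slack
    assert g(lam0, lam1, lam2) >= -1e-12
```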

Claim. g is not a positive linear combination of homogeneous quadratics in the differences x_0,x_1,x_2, x_0-x_1,x_1-x_2, x_0-2x_1+x_2.

Proof. It is equivalent to show that the coefficient vector of g, namely (1, -2, -1, 2, 1, 0), is not in the cone of positive linear combinations of the rows of the 6 \times 6 matrix above. Using the computer algebra system Magma, one can compute the dual cone, which turns out to be the set of positive linear combinations of the rows of the matrix below.

\left(\begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 4 & 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 1 & 0 & 0 & 0 \\ 4 & 2 & 0 & 1 & 0 & 0 \\ 4& 3 & 2 & 2 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{matrix}\right)

In particular, the dual cone contains (2,1,1,0,0,0), and since (1, -2, -1, 2, 1, 0) \cdot (2,1,1,0,0,0) = -1, the vector (1,-2,-1,2,1,0) is not in the dual of the dual cone, which is the original cone, as required.\qquad\Box
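The two dot-product facts in this step can be replayed without Magma (this checks only the stated pairings, not the dual-cone computation itself):

```python
generators = [
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, -1],
    [0, 0, 0, 1, -2, 1],
    [0, 0, 1, 0, -2, 1],
    [0, 1, -1, -2, 3, -1],
    [1, -4, 2, 4, -4, 1],
]
g_coeffs = [1, -2, -1, 2, 1, 0]
w = [2, 1, 1, 0, 0, 0]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# w pairs non-negatively with every generator, so w is in the dual cone ...
assert all(dot(w, row) >= 0 for row in generators)
# ... yet pairs negatively with the coefficients of g
assert dot(w, g_coeffs) == -1
```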

I suspect that there is no easy way (or maybe even no algorithmic way?) to decide whether a homogeneous polynomial is non-negative at every completely monotone sequence, but would be very happy to be proved wrong on this.