# 1 - An Introductory Illustration


In this post we complement the Introduction to the blog by detailing how the techniques we will cover apply to a concrete problem from lattice path enumeration. Lattice paths form a cornerstone of combinatorics, with a rich history going back centuries¹, and their study provides a nice laboratory to develop (and exhibit) tools in algorithmic combinatorics. The point of this post is to give a high-level view and show off the power of the methods, rather than explain all necessary details. The techniques discussed here will be described in much greater detail in later posts.

Consider walks on the integer lattice $\Z^2$ which begin at the origin and take North $(0,1)$, South $(0,-1)$, East $(1,0)$, and West $(-1,0)$ steps. To be precise, a walk of length $n>0$ is a sequence of $n$ of these steps (by convention we say there is a single walk of length zero). Because there are four choices for each step, there are $4^n$ distinct walks of length $n$.

In order to study a more interesting combinatorial object we now add the restriction that the walks must always stay in the quadrant $\N^2$. Walks in orthants (such as a two-dimensional quadrant) are important in queuing theory as they model jobs/people entering and being processed out of queues (and the number of jobs/people in a line can never be negative).

How many such walks are there of length $n$? Equivalently (since we know there are $4^n$ walks with no restriction on where they can go) what is the probability that a walk of length $n$ never leaves the first quadrant? The following animation shows 100 random walks of length 300. Walks go transparent as they leave the first quadrant, and only 1 of the walks stays in the first quadrant for all 300 steps.

Is the ratio $1/100$ representative of the probability that a walk of length 300 stays in the first quadrant? What about for walks of length 1000, or generally for walks of length $n$? The point of this post is to investigate such questions.
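Before doing any exact counting, we can estimate this probability with a quick simulation. The following plain-Python sketch (the function and variable names are ours, not part of this post's Sage code) samples 20000 random walks of length 300 and reports the fraction that never leave the quadrant:

```python
import random

def stays_in_quadrant(n, rng):
    """Take n uniform N/S/E/W steps from the origin; report whether the
    walk stays in the first quadrant the whole time."""
    x = y = 0
    for _ in range(n):
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x += dx
        y += dy
        if x < 0 or y < 0:
            return False
    return True

rng = random.Random(2024)  # fixed seed so the experiment is reproducible
trials = 20000
hits = sum(stays_in_quadrant(300, rng) for _ in range(trials))
print(hits / trials)  # a small number, roughly 0.004
```

With only 100 samples (as in the animation) the observed ratio fluctuates a lot; with 20000 samples the estimate is already fairly stable.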

### Playing Around With the Counting Sequence

We start by letting our computer do the heavy lifting, using the recursive nature of a walk to easily calculate the number of walks of a fixed length. First, we define a Sage function to count the number of walks of length $n\in\N$ with fixed endpoint $(i,j)\in\N^2$.

@CachedFunction
def WalksIJ(i,j,n):
    if i<0 or j<0:
        return 0
    elif n==0 and i==0 and j==0:
        return 1
    elif n==0:
        return 0
    else:
        return WalksIJ(i-1,j,n-1) + WalksIJ(i+1,j,n-1) + WalksIJ(i,j-1,n-1) + WalksIJ(i,j+1,n-1)


Note that the decorator @CachedFunction tells Sage to cache the results of evaluating the function, so that future evaluations with the same arguments can be looked up instead of recomputed. It is now simple to count all walks of a given length.

def Walks(n):
    return sum([WalksIJ(i,j,n) for i in range(n+1) for j in range(n+1)])


Let $w_n$ denote the number of walks of length $n$, under our restrictions. We use this function to generate and print out the terms $w_0,w_1,\dots,w_{10}$.

LST = [Walks(k) for k in range(11)]
print(LST)

> [1, 2, 6, 18, 60, 200, 700, 2450, 8820, 31752, 116424]
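Readers without Sage at hand can reproduce this computation in plain Python, where functools.lru_cache plays the role of @CachedFunction (the snake_case names below are our own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def walks_ij(i, j, n):
    """Number of quadrant walks of length n from the origin to (i, j)."""
    if i < 0 or j < 0:
        return 0
    if n == 0:
        return 1 if i == j == 0 else 0
    return (walks_ij(i - 1, j, n - 1) + walks_ij(i + 1, j, n - 1)
            + walks_ij(i, j - 1, n - 1) + walks_ij(i, j + 1, n - 1))

def walks(n):
    return sum(walks_ij(i, j, n) for i in range(n + 1) for j in range(n + 1))

print([walks(k) for k in range(11)])
# → [1, 2, 6, 18, 60, 200, 700, 2450, 8820, 31752, 116424]
```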


Using these initial terms, Sage can even look up the sequence directly in the Online Encyclopedia of Integer Sequences.

OE = oeis(LST)[0]
print(OE)

>  "A005566: Number of walks of length n on square lattice, starting at origin, staying in first quadrant."


This verifies our function is working correctly (at least for walks of small size).

It takes about 20 seconds (using a server on the lowest paid tier on CoCalc) to generate $w_{100}$ using our Walks function. The computation for $w_{300}$ (which would allow us to find the probability that a walk of length 300 stays in the first quadrant) does not return after five minutes².

Although this approach can be tweaked for some speedup, it has many fundamental limitations. To find terms of large index, we need a new strategy.

### Finding a Linear Recurrence

The key to our more refined approach is to determine a linear recurrence satisfied by the sequence $w_n$. This recurrence can then be used to compute a large number of terms.

We start by asking Sage to guess a recurrence for the sequence. This requires the ore_algebra package.

# Import the commands of the ore_algebra package
from ore_algebra import *

# Create the shift ore_algebra with rational coefficients
Ind.<n> = PolynomialRing(QQ)
Shift.<Sn> = OreAlgebra(Ind)

# Guess a recurrence satisfied by the sequence (need about 30 terms to find the recurrence)
LST = [Walks(k) for k in range(30)]
rec = guess(LST,Shift)
print(rec)

> (-n^2 - 7*n - 12)*Sn^2 + (8*n + 20)*Sn + 16*n^2 + 48*n + 32


The ore_algebra package represents a linear recurrence as a linear combination of terms of the form $n^pS^q$. Such expressions act on sequences by defining $(n^pS^q) \cdot a_n = n^pa_{n+q}$ and extending linearly. An expression $\mathcal{L}$ represents all sequences $a_n$ such that $\mathcal{L} \cdot a_n = 0$, and a single sequence is determined by specifying a sufficient number of initial terms.
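As a quick illustration of this notation (a toy example of our own, not from the walk problem): the factorial sequence $a_n = n!$ satisfies $a_{n+1} = (n+1)a_n$, so it is annihilated by the operator $$\mathcal{L} = S - n - 1, \qquad \mathcal{L}\cdot n! = (n+1)! - (n+1)\cdot n! = 0,$$ and within the solution space of $\mathcal{L}$ the single initial term $a_0=1$ pins down $a_n=n!$.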

In our example, Sage has guessed that the sequence satisfies the recurrence

$$(n^2 + 7n + 12)w_{n+2} = (8n + 20)w_{n+1} + (16n^2 + 48n + 32)w_n, \label{eq:rec}$$

which uniquely determines the sequence after specifying the terms $w_0=1$ and $w_1=2$. Of course, any finite sequence can be made to fit infinitely many linear recurrences, but Sage is smart enough to fail unless certain checks are passed which give confidence that the recurrence is actually satisfied by the sequence.

Fitting a sequence into a recurrence is essentially a linear algebra problem. If one fixes the coefficient degree and order of the linear recurrence (i.e., the highest powers of $n$ and $S$ that appear) then, given enough terms, the corresponding linear system will be (perhaps massively) overdetermined, making a ‘spurious’ solution unlikely. Put another way, one can guess a recurrence using some number of terms (say $w_0,\dots,w_{100}$) and then check that additional terms (say $w_{101},\dots,w_{200}$) also satisfy it. More advanced checks are also possible, for instance examining the $p$-curvature of the operator modulo various primes $p$, the size of the integer coefficients appearing in the recurrence, and so on.

Such checks make it extremely unlikely that Sage guesses a recurrence not actually satisfied by $w_n$. In later posts we will see computational methods of rigorously proving that this recurrence is indeed satisfied for this example, combining the kernel method for formal power series manipulation and the computer algebra technique of creative telescoping.
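This guess-and-check loop is easy to sketch in plain Python: generate terms independently by stepping a table of endpoint counts forward, then confirm the guessed recurrence holds at every available index (the helper names are our own):

```python
def quadrant_counts(N):
    """Return [w_0, ..., w_N] by stepping a table of endpoint counts forward."""
    table = {(0, 0): 1}  # counts of quadrant walks of the current length, by endpoint
    w = [1]
    for _ in range(N):
        new = {}
        for (i, j), c in table.items():
            for di, dj in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
                x, y = i + di, j + dj
                if x >= 0 and y >= 0:  # prune steps that leave the quadrant
                    new[(x, y)] = new.get((x, y), 0) + c
        table = new
        w.append(sum(table.values()))
    return w

w = quadrant_counts(40)
# (n^2 + 7n + 12) w_{n+2} = (8n + 20) w_{n+1} + (16n^2 + 48n + 32) w_n
ok = all((n*n + 7*n + 12) * w[n+2] == (8*n + 20) * w[n+1] + (16*n*n + 48*n + 32) * w[n]
         for n in range(39))
print(ok)  # → True
```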

For now, assuming this recurrence is correct and running

LongLST = rec.to_list(LST,301)
print(LongLST[300])

> 17523545967408829999363642806657506800280885492603311381994686521023183995818494526343136704823884594622982400835490350268164727785035412382829989246361352225213220805387866514176


gives $w_{300}$ in roughly 0.015 seconds (it is approximately $1.75\times10^{178}$), meaning the probability that a walk of length 300 stays in the first quadrant is

(LongLST[300]/4^(300)).n()

> 0.00422303415339021


or roughly $1/250$.
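Assuming the guessed recurrence, the unrolling itself needs nothing more than exact integer arithmetic. Here is a plain-Python sketch (the names are our own), starting from $w_0=1$ and $w_1=2$:

```python
def quadrant_walk_counts(N):
    """Return [w_0, ..., w_N] by unrolling the guessed recurrence."""
    w = [1, 2]
    for n in range(N - 1):
        # (n^2 + 7n + 12) w_{n+2} = (8n + 20) w_{n+1} + (16n^2 + 48n + 32) w_n
        num = (8*n + 20) * w[n + 1] + (16*n*n + 48*n + 32) * w[n]
        den = n*n + 7*n + 12
        assert num % den == 0  # the division is exact since w_{n+2} is an integer
        w.append(num // den)
    return w

w = quadrant_walk_counts(300)
print(w[300] / 4**300)  # roughly 0.0042, i.e. about 1/250
```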

### A Stab at Asymptotics

In addition to computing large index terms in the sequence, the linear recurrence relation can also be used to characterize the asymptotic behaviour of the sequence. First, we note that the solutions of the linear recurrence $\eqref{eq:rec}$ form a complex vector space (because the recurrence is linear, the sum of two solutions is still a solution, and multiplying every term of a solution by a fixed complex constant gives another solution). Because the first two terms of a solution to $\eqref{eq:rec}$ uniquely specify the solution, this vector space has dimension two.

In particular, since our sequence $w_n$ is a solution of $\eqref{eq:rec}$ then given any two sequences $\Psi_1(n)$ and $\Psi_2(n)$ satisfying $\eqref{eq:rec}$ which are linearly independent (meaning one cannot be obtained from the other by multiplying all terms by a fixed complex number) there exist $\lambda_1,\lambda_2\in\C$ such that $$w_n = \lambda_1\Psi_1(n) + \lambda_2\Psi_2(n).$$

The trick to characterizing asymptotics of $w_n$ is to use a technique to determine a basis of solutions whose asymptotic expansions can be computed; this is often referred to as the Birkhoff-Trjitzinsky method, and will be detailed in a future post (for now see [WZ85] for further information).

The following code performs the necessary computation for this example.

# Compute the first two asymptotic terms of a basis of the recurrence
rec.generalized_series_solutions(2)

> [4^n*n^(-1)*(1 - 3/2*n^(-1) + O(n^(-2))), (-4)^n*n^(-3)*(1 - 9/2*n^(-1) + O(n^(-2)))]


Sage has found a basis for the solution space of $\eqref{eq:rec}$ which consists of two sequences with asymptotic expansions \begin{align} \Psi_1(n) &= \frac{4^n}{n}\left(1 - \frac{3}{2n} + O\left(\frac{1}{n^2}\right) \right) \notag \\[+2mm] \Psi_2(n) &= \frac{(-4)^n}{n^3}\left(1 - \frac{9}{2n} + O\left(\frac{1}{n^2}\right) \right) \notag \end{align} as $n\rightarrow\infty$ (recall $O(n^{-2})$ denotes a term $a_n$ such that $|a_n| \leq C/n^2$ for some fixed constant $C$ and all sufficiently large $n$). Since $$w_n = \lambda_1\Psi_1(n) + \lambda_2\Psi_2(n),$$ we have determined the asymptotic behaviour $w_n \sim \lambda_1 4^n/n$, assuming the constant $\lambda_1$ is non-zero.

The constants $\lambda_1$ and $\lambda_2$ are known as the connection coefficients for $w_n$ and the basis $\Psi_1$ and $\Psi_2$. Heuristically, running the code

LongLST = rec.to_list(LST,10001)
print((LongLST[100]/(4^100/100)).n())
print((LongLST[1000]/(4^1000/1000)).n())
print((LongLST[10000]/(4^10000/10000)).n())

> 1.25446885783361
> 1.27133302123899
> 1.27304859221955


suggests that $w_n \sim \lambda_1 4^n/n$ for a constant $\lambda_1=1.27\dots$.

### A Turn to Differential Equations

In order to be more precise about the connection coefficient $\lambda_1$, we turn from linear recurrences to linear differential equations. Recall that the generating function of $w_n$ is the function³ with power series $$W(z) = \sum_{n \geq 0} w_nz^n = 1 + 2z + 6z^2 + 18z^3 + \cdots \label{eq:GF}$$ at the origin. Note that $w_n \leq 4^n$ for all $n$ (as there are only 4 possible steps), so the root test implies this power series converges when $|z|<1/4$.

We will see in later posts that a sequence satisfies a linear recurrence with polynomial coefficients (in the index $n$) if and only if the generating function is D-finite (satisfies a linear differential equation with polynomial coefficients). Furthermore, one can automatically convert between recurrences for the sequence and differential equations for the generating function.
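The dictionary behind this conversion is worth sketching. If $W(z)=\sum_{n\geq0}w_nz^n$, then $$z^aW^{(b)}(z) = \sum_{n} (n+b-a)(n+b-a-1)\cdots(n+1-a)\,w_{n+b-a}\,z^{n},$$ so each term $z^a\frac{d^b}{dz^b}$ of a differential equation for $W$ translates into a term with polynomial coefficients and shift $S^{b-a}$ acting on $w_n$. Extracting the coefficient of $z^n$ on both sides of the differential equation then yields a linear recurrence (after normalizing the shifts), and the translation can be run in reverse.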

One can also guess differential equations directly using ore_algebra.

# Create the differential algebra to encode linear differential equation
Pols.<z> = PolynomialRing(QQ)
Diff.<Dz> = OreAlgebra(Pols)

# Guess an annihilating linear differential equation for generating function
diffWalk = guess(LST,Diff)
print(diffWalk)

> (16*z^4 - z^2)*Dz^3 + (128*z^3 + 8*z^2 - 6*z)*Dz^2 + (224*z^2 + 28*z - 6)*Dz + 64*z + 12


This output means Sage guesses $W(z)$ satisfies the differential equation \begin{align} 0 = z^2(16z^2 - 1)W^{(3)}(z) &+ z(128z^2 + 8z - 6)W^{(2)}(z) \notag \\ &+ (224z^2 + 28z - 6)W'(z) \label{eq:diff} \\ &+ (64z + 12)W(z) \notag \end{align}

Instead of guessing both a recurrence and a differential equation, we can convert directly between a differential equation for a function and recurrence for its series coefficients using the to_S and to_D commands.

# Converting from the differential equation to a recurrence
# gives rec (up to a left multiple)
print(Diff(diffWalk).to_S(Shift))
print((n + 2)*rec)

> (-n^3 - 9*n^2 - 26*n - 24)*Sn^2 + (8*n^2 + 36*n + 40)*Sn + 16*n^3 + 80*n^2 + 128*n + 64
> (-n^3 - 9*n^2 - 26*n - 24)*Sn^2 + (8*n^2 + 36*n + 40)*Sn + 16*n^3 + 80*n^2 + 128*n + 64


Note that the recurrence returned by the to_S command is not exactly the same as the recurrence previously found (it is the previous recurrence multiplied by $n+2$, which does not change its set of solutions).

### Asymptotics up to a Constant

The most powerful techniques for determining asymptotics of a sequence use tools from complex analysis on its generating function. Roughly speaking, information about the generating function at each of its singularities (points where the function behaves badly) can be translated into a contribution to asymptotic behaviour. Skipping over some technicalities, the function $W(z)$ admits a convergent power series expansion around most points in the complex plane. Near singularities, however, $W(z)$ has series expansions which are not power series, as they may involve terms with negative or non-integer exponents, or terms with logarithms. Asymptotics of $w_n$ can be determined by finding the coefficients in these singular expansions of $W(z)$ and adding up the asymptotic contributions of the singularities.
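A prototypical instance, and exactly the one needed below: from the expansion $$\log\frac{1}{1-4z} = \sum_{n\geq1}\frac{4^n}{n}\,z^n,$$ a function behaving like $c\cdot\log(1-4z)$ near a singularity at $z=1/4$ contributes coefficients growing like $-c\,4^n/n$.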

Studying solutions of (linear) differential equations is a classic topic. In particular, the theory implies that any singularity $\omega$ of a solution of $\eqref{eq:diff}$ must be a root of the leading polynomial $z^2(16z^2-1)$, meaning $\omega\in\{0,\pm1/4\}$. Since the generating function $W(z)$ has a convergent power series expansion $\eqref{eq:GF}$ at the origin, it only admits (at most) $\pm1/4$ as singularities. To compute the singular expansion of $W$ at these points we use structural results on the solution space of a linear differential equation.

Just as the solutions to the linear recurrence $\eqref{eq:rec}$ form a complex vector space of dimension two, the solutions of $\eqref{eq:diff}$ form a complex vector space (again, adding two solutions or multiplying a solution by a complex constant gives another solution), this time of dimension three since the differential equation has order three. For any fixed point $\omega$ in the complex plane, expansions of a basis of solutions near $\omega$ can be computed using a method tracing back (at least, in this case) to Frobenius.

For instance, the Sage evaluation

# Get expansions at 0 of a basis of solutions for the differential equation
print(diffWalk.local_basis_expansions(0, order = 4))

> [z^(-2) - 4*z^(-1)*log(z) - 8*log(z) - 16*z*log(z) - 8*z, z^(-1), 1 + 2*z]


implies there is a basis of solutions whose expansions at the origin begin \begin{align} A_1(z) &= \left(z^{-2}-8z+\cdots\right) - \log(z)\left(4z^{-1}+8+16z+\cdots\right) \notag \\[+2mm] A_2(z) &= z^{-1} + \cdots \notag \\[+2mm] A_3(z) &= 1 + 2z + \cdots \notag \end{align} Note $A_1(z)$ and $A_2(z)$ are singular at the origin. Since $W$ lies in the span of the $A_j$, has a convergent power series at the origin, and its first series coefficient matches that of $A_3$, in fact $W(z)=A_3(z)$.

The evaluation

# Get expansions at 1/4 of a basis of solutions for the differential equation
print(diffWalk.local_basis_expansions(1/4, order = 3))

> [log(z - 1/4) - 6*(z - 1/4)*log(z - 1/4) + 31*(z - 1/4)^2*log(z - 1/4), 1 - 14*(z - 1/4)^2, (z - 1/4) - 15/2*(z - 1/4)^2]


implies there is a basis of solutions whose expansions at $1/4$ begin \begin{align} B_1(z) &= \log(z-1/4)\left(1-6(z-1/4)+\cdots\right) \notag \\[+2mm] B_2(z) &= 1 - 14(z - 1/4)^2 + \cdots \notag \\[+2mm] B_3(z) &= (z - 1/4) - (15/2)(z - 1/4)^2 + \cdots \notag \end{align}

We want to compute the singular expansion of $W(z)$ around $z=1/4$. Because we can represent $W$ in the basis $A_1,A_2,A_3$ at the origin, and have expansions of the basis $B_1,B_2,B_3$ at $z=1/4$, to determine the singular expansion around $z=1/4$ it is enough to find the change of basis matrix from the $A_j$ to the $B_j$. This can be done numerically with rigorous error bounds using a process known as numeric analytic continuation (for now refer to [M19] for more information).

Skipping some details, the evaluation

# Represent W in terms of the A_j basis
ini = [0,0,1]

# Find numeric coefficients of W in the B_j basis
bas = diffWalk.numerical_transition_matrix([0,1/4]) * vector(ini)
print(bas)

> ([-1.2732395447351627 +/- 2.54e-17] + [+/- 2.65e-22]*I, [-2.3906971441245563 +/- 1.99e-17] + [4.0000000000000000 +/- 2.78e-17]*I, [12.890661954217663 +/- 2.55e-16] + [-24.000000000000000 +/- 1.12e-16]*I)


shows that $$W(z) = \rho_1 B_1(z) + \rho_2 B_2(z) + \rho_3 B_3(z)$$ where, certified up to 16 decimal places, \begin{align} \rho_1 &\approx -1.2732395447351627 \\[+2mm] \rho_2 &\approx -2.3906971441245563 + 4.0i \notag \\[+2mm] \rho_3 &\approx 12.890661954217663 - 24.0i \notag \end{align} This Sage implementation is extremely efficient, and can rigorously compute the constants up to 1000 decimal places in approximately two seconds.

The singular expansions of the $B_j$ at $z=1/4$ imply that the singular expansion of $W$ at $z=1/4$ begins $$W(z) = \rho_1 \log(1-4z) + \cdots,$$ where the missing series terms do not affect our asymptotic arguments (we are also ignoring some technicalities, like the region around $z=1/4$ where this is valid). In particular, the point $z=1/4$ is a singularity of $W(z)$.

Because the $n$th power series coefficient of $\log(1-4z)$ at the origin is $-4^n/n$, the theory implies this singularity contributes $-\rho_1 4^n/n$ to the dominant asymptotics of $w_n$. Repeating the analysis at the point $z=-1/4$ shows that it is also a singularity, but its asymptotic contribution has magnitude $O(4^n/n^2)$ and is thus negligible.

Putting everything together, we have shown (modulo proving the guessed recurrence for $w_n$ / differential equation for $W$) that $$w_n = (1.2732395\dots)\frac{4^n}{n}\left(1+O\left(\frac{1}{n}\right)\right),$$ where the leading constant can be computed explicitly to thousands of decimal places. The initial digits of the leading constant match our heuristic guess for the connection coefficient $\lambda_1$ above.

### Remarks

In our arguments we used the recurrence $\eqref{eq:rec}$ and differential equation $\eqref{eq:diff}$ as data structures for the sequence $w_n$. These data structures allow us to implicitly work with $w_n$ and computationally obtain the information we want. We saw how to convert between the data structures, and that they are each good for obtaining different information (for instance, the recurrence is used to generate terms but rigorously approximating the connection coefficients is done using the differential equation). As we will later see, the asymptotic growth of $w_n$ implies $W(z)$ is not algebraic, and thus cannot be encoded as the root of a bivariate polynomial.

Unfortunately, determining the connection coefficients rigorously for the solutions of linear recurrence and differential equations is a large open problem in enumerative combinatorics (known as the connection problem). In later posts, we will see how to determine asymptotics for this example exactly using a multivariate representation⁴ $$W(z) = \Delta\left(\frac{(1+x)(1+y)}{1-txy(1/x+x+1/y+y)}\right)$$ and complex analysis in several variables. We will see that $$w_n \sim \frac{4}{\pi}\cdot\frac{4^n}{n},$$ so the probability that a walk of length $n$ stays in the first quadrant behaves like $4/(\pi n)$ as $n\to\infty$. This rational diagonal representation can also be used to rigorously derive our guessed differential equation and recurrence using the theory of creative telescoping.
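As a quick sanity check, the constant $4/\pi$ indeed matches the digits of the connection coefficient certified by numerical_transition_matrix above:

```python
import math

rho1 = -1.2732395447351627  # certified value computed above
print(abs(4 / math.pi + rho1) < 1e-15)  # → True
```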

Note that the computational approach taken above is extremely general, applying to any integer sequence satisfying a linear recurrence with polynomial coefficients such that enough terms can be computed to guess the recurrence (and be confident in the guess). For this particular lattice path example the OEIS entry for $w_n$ lists an explicit formula,

OE.formulas()[0]

> a(n) = binomial(n, floor(n/2))*binomial(n+1, floor((n+1)/2))


so that near the start of our analysis we could conjecture the explicit formula $$w_n = {n \choose \lfloor n/2\rfloor} {n+1 \choose \lfloor (n+1)/2 \rfloor}.$$ As we will later see, such a conjecture can be proven automatically given a recurrence satisfied by $w_n$. Of course, it’s not reasonable to assume a general problem has a conjectured explicit formula available on the internet, so the computations described above are usually necessary.
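The conjectured closed form is easy to test against the initial terms of the sequence using Python's math.comb:

```python
from math import comb

def w_closed(n):
    # w_n = C(n, floor(n/2)) * C(n+1, floor((n+1)/2))
    return comb(n, n // 2) * comb(n + 1, (n + 1) // 2)

print([w_closed(n) for n in range(11)])
# → [1, 2, 6, 18, 60, 200, 700, 2450, 8820, 31752, 116424]
```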

### An Exercise for the Reader

Modify the Sage code corresponding to this post to enumerate walks on the steps North, South, East, and West which always stay in the upper half-plane. Perform as much of the above analysis as you can.

### References

[H10] K. Humphreys. A history and a survey of lattice path enumeration. J. Statist. Plann. Inference Vol 140(8), 2237–2254, 2010.

[M19] M. Mezzarobba. Truncation bounds for differentially finite series. Ann. H. Lebesgue Vol 2, 99–148, 2019.

[WZ85] J. Wimp and D. Zeilberger. Resurrecting the asymptotics of linear recurrences. J. Math. Anal. Appl. Vol 111(1), 162–176, 1985.

1. See Humphreys [H10] for a nice historical survey of lattice path enumeration.

2. In fact, CoCalc (wisely) kills the kernel after eight minutes as our function uses too much memory (over the 5GB limit on the lowest paid tier).

3. Technically, specifying the series of $W(z)$ at the origin only defines a germ of an analytic function, as there are many analytic functions with a given power series converging on a neighbourhood of the origin. This distinction does not matter for the arguments presented here.

4. The diagonal operator $\Delta$ means the sequence $w_n$ is obtained by expanding the multivariate rational function as a power series in $x,y,$ and $t$, then taking the terms where all exponents have the same power (the constant term, coefficient of $xyt$, coefficient of $x^2y^2t^2$, etc.).