← cgad.ski 2023-05-21

The Central Binomial Coefficient and Laplace's Method

The central binomial coefficient is the sequence down the "center" of Pascal's triangle, namely $\binom{2n}{n}$. These numbers show up, for example, in a random walk on the integers: if we start off at $0$ and make $2n$ independent increments of $+1$ or $-1$, giving each choice even odds, then we will find ourselves back at the origin with probability $2^{-2n} \binom{2n}{n}$. Some basic Markov process reasoning tells us that a random walk like this one can either be recurrent or dissipative: trajectories will either visit each position infinitely many times, or escape to infinity, spending only a finite number of steps at each position. Now, let $\{ X_0, X_1, \dots \}$ be our trajectory and $C$ be the number of times that $X_n = 0$. Since

$$E(C) = \sum_{n = 0}^\infty P(X_n = 0) = \sum_{n = 0}^\infty P(X_{2n} = 0) = \sum_{n = 0}^{\infty} 2^{-2n} \binom{2n}{n},$$

the question of whether the random walk we have described is recurrent comes down to whether the central binomial coefficient grows quickly enough to make this series diverge. Does it? We can also consider the random walk on $\Z^d$, moving at each step by an increment in the set $\{ -1, 1 \}^d$, in which case the expected number of visits to the origin is

$$E(C) = \sum_{n = 0}^{\infty} 2^{-2nd} \binom{2n}{n}^d.$$

When are these processes recurrent? To answer either question, we need to figure out something about the asymptotic growth of the central binomial coefficient.
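To make the setup concrete, here's a quick sanity check in Python (a sketch, not part of the argument): enumerate every walk of length $2n$ by brute force and compare against $2^{-2n} \binom{2n}{n}$.

```python
from itertools import product
from math import comb

def return_probability(n):
    # Enumerate all 2^(2n) walks with steps +1/-1 and count those that
    # end back at the origin after 2n steps.
    walks = list(product((-1, 1), repeat=2 * n))
    return sum(sum(w) == 0 for w in walks) / len(walks)

# matches P(X_{2n} = 0) = 2^(-2n) binom(2n, n) exactly (both are dyadic rationals)
for n in range(1, 6):
    assert return_probability(n) == comb(2 * n, n) / 2 ** (2 * n)
```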

A basic "statistical intuition" tells us that $\binom{2n}{n}$ may be on the order of $2^{2n}/\sqrt{n}$. To see why, choose $0 < \epsilon < 1$ and put $F(n, \epsilon) = P(\lvert X_n \rvert < \epsilon)$ so that

$$F(2n, \epsilon) = P(X_{2n} = 0) = 2^{-2n} \binom{2n}{n}.$$

Meanwhile, the central limit theorem tells us that $F(n, \sqrt{n} \epsilon)$ converges to $P(\lvert Z \rvert \le \epsilon)$ for large $n$, where $Z$ is normally distributed with the variance of an increment—in our case, $V(Z) = 1$. Since the density of $Z$ at $0$ is $1/\sqrt{2 \pi}$, we may expect that

$$F(n, \epsilon) \approx P\left( \lvert Z \rvert \le \frac{\epsilon}{\sqrt{n}} \right) \sim \frac{2 \epsilon}{\sqrt{2 \pi n}}.$$

Unfortunately, it seems quite difficult to get anything rigorous from this idea. In fact, the closer I look at the approximation above, the more I despair: not only does the expression $2 \epsilon/\sqrt{2 \pi n}$ depend on $\epsilon$ while $F(n, \epsilon)$ does not for $\epsilon < 1$, but in fact $F(n, \epsilon)$ cannot be asymptotic to $C/\sqrt{n}$ for any constant $C$ for the simple reason that $F(n, \epsilon) = 0$ for odd $n$.

Although the central limit theorem is useless for making rigorous conclusions here—as far as I can tell—it "miraculously" gives us the right order of growth. In fact, it is true that

$$\binom{2n}{n} \sim \frac{2^{2n}}{\sqrt{\pi n}}.$$

Plugging this approximation into the sums above gives the classic result that the random walk on $\Z^d$ is recurrent for $d \le 2$ but dissipative for all larger $d$. Moreover, an asymptotic series can be computed for the relative error, giving (for example) the remarkably accurate approximation

$$\binom{2n}{n} = \frac{2^{2n}}{\sqrt{\pi n}} \left( 1 - \frac{1}{8} n^{-1} + \frac{1}{128} n^{-2} + \frac{5}{1024} n^{-3} + o(n^{-3}) \right),$$

which gives a maximum relative error of around $10^{-8}$. (You may play the "guess the sequence" game with those coefficients at your own risk.)
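Before deriving this expansion, we can at least check its accuracy numerically. The following sketch compares the truncated series against exact binomial coefficients; the tolerances reflect the $O(n^{-4})$ size of the first omitted term.

```python
from fractions import Fraction
from math import comb, pi, sqrt

def relative_error(n):
    # exact value of 2^(-2n) * binom(2n, n), kept as a rational until the end
    exact = Fraction(comb(2 * n, n), 2 ** (2 * n))
    # truncated asymptotic series, divided through by 2^(2n)
    approx = (1 - 1 / (8 * n) + 1 / (128 * n**2) + 5 / (1024 * n**3)) / sqrt(pi * n)
    return abs(float(exact) / approx - 1)

# the error shrinks like n^-4, so a few terms already give excellent accuracy
assert relative_error(10) < 1e-7
assert relative_error(50) < 1e-9
```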

These estimates can be derived from Stirling's approximation for $n!$, but in this post I want to focus on a simple idea from which various asymptotic series—including the Stirling series—can be derived independently.

Laplace's Method

Our asymptotic expansion for the central binomial coefficient will rely on approximating a certain exponential integral. As a warmup, let's consider the problem of approximating a Laplace transform-type integral

$$I(x) = \int_0^\infty f(t) e^{-x t} \, dt$$

for large $x$. Assuming that $f(t)$ does not grow too quickly for large $t$, our integral will vanish exponentially over the interval $[\delta, +\infty)$, so the behavior of $I(x)$ comes down to the behavior of $f$ near $0$. If $f$ is also bounded near $0$, then $I(x)$ tends to $0$ overall since

$$\int_0^\infty e^{-xt} \, dt = \frac{1}{x}.$$

Cancelling out this term, $x I(x)$ can be understood as an expectation of $f$ over an exponential distribution with mean $1/x$. Indeed, where $Y$ is exponentially distributed with density $e^{-t}$,

$$x I(x) = \int_0^\infty f(t) x e^{-x t} \, dt = \int_0^\infty f(x^{-1} t) e^{-t} \, dt = E(f(x^{-1} Y)).$$

When $f(t) = t^k$, this expectation is easy to compute:

$$E(x^{-k} Y^k) = x^{-k} E(Y^k) = x^{-k} k!.$$

If $f$ admits a Taylor series to order $n$ with remainder $r_n(t) = o(t^n)$,

$$\begin{align*} I(x) & = \frac{1}{x} \left( \sum_{k = 0}^n E\left( \frac{f^{(k)}(0)}{k!} x^{-k} Y^k \right) + E(r_n(x^{-1} Y)) \right) \\ & = \sum_{k = 0}^n \frac{f^{(k)}(0)}{x^{k + 1}} + \frac{1}{x} E(r_n(x^{-1} Y)). \end{align*}$$

All that remains is to bound the expectation of the remainder. We know that $\lvert r_n(t) \rvert < \epsilon \lvert t \rvert^n$ for sufficiently small $t < \delta$, so let's consider the events $Y < x \delta$ and $Y \ge x \delta$ separately—call them $S$ and $L$. We will decompose the expectation into integrals over these two sets:

$$E(r_n(x^{-1} Y)) = E_S(r_n(x^{-1} Y)) + E_L(r_n(x^{-1} Y)).$$
For the expectation over $L$, we only need a bound like $\lvert r_n(t) \rvert \le C t^K$ for some $C$ and $K$, since then

$$\lvert E_L(r_n(x^{-1} Y)) \rvert \le \int_{\delta x}^\infty \lvert r_n(x^{-1} t) \rvert e^{-t} \, dt \le \int_{\delta x}^\infty C x^{-K} t^K e^{-t} \, dt$$

is guaranteed to vanish exponentially for large $x$, as we had mentioned above. (We call this kind of error beyond all orders, since it is dominated by $x^{-n}$ for all $n$.) Meanwhile, to bound the expectation over $S$ we can use the bound on $r_n$:

$$\lvert E_S(r_n(x^{-1} Y)) \rvert \le \epsilon E_S(x^{-n} Y^n) \le \epsilon \frac{E(Y^n)}{x^n}.$$

We conclude overall that $E(r_n(x^{-1} Y)) = o(x^{-n})$, and so the whole approximation is

$$I(x) = \sum_{k = 0}^n \frac{f^{(k)}(0)}{x^{k + 1}} + o(x^{-n-1}).$$

The reasoning above can naturally be generalized to any variable $Y$ with quickly decaying tails.
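Here is a numerical illustration (a sketch with a test function of my own choosing, $f(t) = 1/(1+t)$, so that $f^{(k)}(0) = (-1)^k k!$): the truncated expansion of $I(x)$ should be accurate to roughly the size of the first omitted term.

```python
from math import exp, factorial

def simpson(g, a, b, steps):
    # composite Simpson's rule with `steps` panels
    h = (b - a) / steps
    return sum(
        (h / 6) * (g(a + i * h) + 4 * g(a + i * h + h / 2) + g(a + (i + 1) * h))
        for i in range(steps)
    )

x = 10.0
# I(x) = ∫_0^∞ e^{-xt}/(1+t) dt; the tail past t = 5 is smaller than e^{-50}
integral = simpson(lambda t: exp(-x * t) / (1 + t), 0.0, 5.0, 20_000)
# truncated expansion: sum of f^(k)(0)/x^(k+1) over k = 0..3
series = sum((-1) ** k * factorial(k) / x ** (k + 1) for k in range(4))
# the error should be on the order of the first omitted term, 4!/x^5 = 2.4e-4
assert abs(integral - series) < 24 / x ** 5
```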

Proposition: Suppose $f(t)$ is of polynomial order for large $t$ and satisfies

$$f(t) = \sum_{k = 0}^n a_k t^k + o(t^n)$$

for small $t$. Then, for any variable $Y$ with exponentially bounded tails in the sense that $P(\lvert Y \rvert \ge M) \le C e^{-\delta M}$ for some $C > 0$, $\delta > 0$, we also have

$$E(f(t Y)) = \sum_{k = 0}^n E(Y^k) a_k t^k + o(t^{n}).$$

This idea comes in handy in the general task of approximating an integral like

$$I(x) = \int_D f(t) e^{-x \phi(t)} \, dt$$

for large $x$. Again, if $f$ does not grow quickly for large $t$, this integral is determined (up to an exponentially vanishing error) by the integrand near the minimizers of $\phi$. For an isolated minimum at $t_0$ with

$$\phi(t) = (t - t_0)^k + O((t - t_0)^{k + 1}),$$

changing variables to $\phi(t) = (s - t_0)^k$ in a neighborhood of $t_0$ gives us an integral we can evaluate with the proposition above. This technique is known as Laplace's method. We'll see an example of this now.

Estimating the Central Binomial Coefficient

Let's return to our random walk on the integers. The characteristic function of our increment is

$$\frac{e^{-i t} + e^{i t}}{2} = \cos(t),$$

and so $\cos^n(t)$ will be the characteristic function of $X_n$. Since $X_n$ is supported on the integers, the characteristic function is a Fourier polynomial and we can extract probability mass functions easily; for example,

$$P(X_n = 0) = \frac{1}{2 \pi} \int_{-\pi}^\pi \cos^n(\theta) \, d\theta.$$

This integral vanishes exponentially except near the points $\theta = 0$ and $\theta = \pm\pi$. For odd $n$ these two peaks cancel out—indeed, $P(X_n = 0) = 0$ for odd $n$—and for even $n$ they are equal. We conclude that

$$P(X_{2n} = 0) \approx \frac{1}{\pi} \int_{-\epsilon}^\epsilon \cos^{2n}(\theta) \, d\theta$$

for $\epsilon < \pi$, ignoring an error beyond all orders of $n$.
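We can verify this peak approximation numerically before pushing on (a sketch; the cutoff $\epsilon = 1.3$ and the midpoint quadrature are arbitrary choices of mine):

```python
from math import comb, cos, pi

def peak_integral(n, eps=1.3, steps=100_000):
    # midpoint-rule approximation of (1/pi) * ∫_{-eps}^{eps} cos^(2n)(θ) dθ
    h = 2 * eps / steps
    return sum(cos(-eps + (i + 0.5) * h) ** (2 * n) for i in range(steps)) * h / pi

# agrees with P(X_{2n} = 0) up to an error beyond all orders in n
for n in (5, 10, 20):
    assert abs(peak_integral(n) - comb(2 * n, n) / 2 ** (2 * n)) < 1e-6
```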

Since $\ln \cos(\theta) \approx -\theta^2/2$, we can reparameterize by a variable $s$ solving the equation

$$\ln \cos(\theta) = -\frac{s^2}{2}.$$

Near $\theta = s = 0$, the solutions to this equation are a union of two smooth curves. We can choose one branch by defining

$$s = \theta \sqrt{\frac{-2 \ln \cos(\theta)}{\theta^2}} = \theta \sqrt{1 + O(\theta^2)}.$$

We can now change variables from $\theta$ to $s$ and write our integral as a certain expectation over a Gaussian variable:

$$\begin{align*} P(X_{2n} = 0) & \approx \frac{1}{\pi} \int_{-\epsilon}^\epsilon \exp(2n \ln \cos(\theta)) \, d\theta \\ & = \frac{1}{\pi} \int_{-\epsilon}^\epsilon \frac{d\theta}{ds} \exp(-n s^2) \, ds \\ & = \frac{1}{\pi \sqrt{2n}} \int_{-\sqrt{2n} \epsilon}^{\sqrt{2n} \epsilon} \frac{d\theta}{ds}\left( \frac{s}{\sqrt{2n}} \right) \exp(-s^2/2) \, ds \\ & = \frac{1}{\sqrt{\pi n}} E\left( \frac{d\theta}{ds}((2n)^{-1/2} Z) \right). \end{align*}$$

(In the last expectation, we're defining $d\theta/ds(s) = 0$ for $\lvert s \rvert \ge \epsilon$ as a convenience of notation.) Our leading order approximation for $P(X_{2n} = 0)$ is thus

$$P(X_{2n} = 0) = \frac{1}{\sqrt{\pi n}} + O(n^{-3/2}),$$

and in particular we have an asymptotic equivalence $P(X_{2n} = 0) \sim \frac{1}{\sqrt{\pi n}}$. In terms of the central binomial coefficient, this proves

$$\binom{2n}{n} = 2^{2n} P(X_{2n} = 0) \sim \frac{2^{2n}}{\sqrt{\pi n}},$$

as promised. The full asymptotic expansion will be

$$\binom{2n}{n} \approx \frac{2^{2n}}{\sqrt{\pi n}} E\left( \frac{d\theta}{ds}((2n)^{-1/2} Z) \right) = \frac{2^{2n}}{\sqrt{\pi n}} \left( \sum_{k = 0}^N \frac{d^{k+1} \theta}{ds^{k+1}}(0) \frac{E(Z^k)}{k! \, (2n)^{k/2}} + o(n^{-N/2}) \right).$$

Since the odd moments of $Z$ vanish and the even moments are $E(Z^k) = (k - 1)!!$ for $k > 0$, we can rewrite this as

$$\frac{2^{2n}}{\sqrt{\pi n}} \left( 1 + \sum_{k = 1}^N \frac{d^{2k+1} \theta}{ds^{2k+1}}(0) \frac{1}{(2k)!! \, 2^k} n^{-k} + o(n^{-N}) \right).$$

All that remains is to compute the series expansion of $\theta$ in terms of $s$. As far as I know, there is no simple closed form for this series. Fortunately, if we happen to want coefficients, these computations can be easily done by a computer algebra system. Expanding

$$s = \theta \sqrt{\frac{-2 \ln \cos(\theta)}{\theta^2}} = \theta + \frac{\theta^3}{12} + \frac{3 \theta^5}{160} + \frac{209 \theta^7}{40320} + O\left(\theta^9\right)$$

and inverting the series gives

$$\begin{align*} \theta & = s - \frac{s^3}{12} + \frac{s^5}{480} + \frac{s^7}{2688} + O\left(s^9\right) \\ & = s - \frac{1}{2} \frac{s^3}{3!} + \frac{1}{4} \frac{s^5}{5!} + \frac{15}{8} \frac{s^7}{7!} + O(s^9). \end{align*}$$

Altogether, we have

$$\begin{align*} \binom{2n}{n} & = \frac{2^{2n}}{\sqrt{\pi n}} \left( 1 - \frac{1}{2} \frac{1}{2!! \times 2} n^{-1} + \frac{1}{4} \frac{1}{4!! \times 4} n^{-2} + \frac{15}{8} \frac{1}{6!! \times 8} n^{-3} + o(n^{-3}) \right) \\ & = \frac{2^{2n}}{\sqrt{\pi n}} \left( 1 - \frac{1}{8} n^{-1} + \frac{1}{128} n^{-2} + \frac{5}{1024} n^{-3} + o(n^{-3}) \right), \end{align*}$$

as claimed above.
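If you'd rather not trust a computer algebra system blindly, the inversion can be reproduced with exact rational arithmetic in a few lines of Python. This sketch takes the expansion of $s(\theta)$ quoted above as input and solves for the coefficients of $\theta(s)$ one at a time:

```python
from fractions import Fraction as F

DEG = 7  # truncate all series past degree 7

def mul(p, q):
    # multiply coefficient lists (index = power), truncating past DEG
    r = [F(0)] * (DEG + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and b and i + j <= DEG:
                r[i + j] += a * b
    return r

# s(θ) = θ + θ^3/12 + 3θ^5/160 + 209θ^7/40320 + O(θ^9), as computed in the text
s_coeffs = {1: F(1), 3: F(1, 12), 5: F(3, 160), 7: F(209, 40320)}

def compose(theta):
    # substitute θ(s) into s(θ), truncating past DEG
    out = [F(0)] * (DEG + 1)
    power = [F(1)] + [F(0)] * DEG  # θ(s)^0
    for d in range(1, DEG + 1):
        power = mul(power, theta)
        if d in s_coeffs:
            out = [u + s_coeffs[d] * v for u, v in zip(out, power)]
    return out

# Solve s(θ(s)) = s coefficient by coefficient: since s'(0) = 1, adding
# a_k s^k to θ shifts the s^k coefficient of the composition by exactly a_k.
theta = [F(0)] * (DEG + 1)
theta[1] = F(1)
for k in (3, 5, 7):
    theta[k] -= compose(theta)[k]

# θ = s - s^3/12 + s^5/480 + s^7/2688 + O(s^9), matching the inversion above
assert theta[3] == F(-1, 12)
assert theta[5] == F(1, 480)
assert theta[7] == F(1, 2688)
```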
