# Some Resummation Theory

Perturbation theory, as mentioned in an earlier post, plays an important role in many fields, but a recurring problem is summing the divergent series that sometimes arise in a solution. Even some convergent series are hard to sum because we can only calculate the first few terms in a reasonable amount of time. As a result, the study of these infinite summations becomes very important for advancing the field. However, some basic principles must be established before we delve into the analysis of these series. The first is the idea of term rearrangement. Look at the following series.

$\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}=1-\frac{1}{2}+\frac{1}{3}-\cdots$

We know from its Taylor series that this sum equals $\ln(2)$, but consider the following. Suppose we want the series to sum to $e$ instead. We can make this happen. Start by adding positive terms until the running total surpasses $e$, then add negative terms until it drops below $e$, then repeat with the next few positive terms, and so on forever. By rearranging the series, we have made it equal $e$; in fact, we can make it equal any value. We conclude that commutativity and associativity do not carry over to infinite summations (note: this only happens when the series is conditionally convergent, since only then can the positive and negative parts each be pushed as far as desired; this is the Riemann rearrangement theorem). The property that does survive is linearity: summations still respect scalar multiplication and term-by-term addition.
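The greedy rearrangement described above is easy to simulate. Here is a minimal sketch (the function `rearrange_to` and its parameters are my own illustrative names): keep separate pointers into the positive terms $1, \frac{1}{3}, \frac{1}{5}, \dots$ and negative terms $-\frac{1}{2}, -\frac{1}{4}, \dots$, and always take from whichever side moves the total toward the target.

```python
from math import e

def rearrange_to(target, n_terms=200_000):
    """Greedily rearrange 1 - 1/2 + 1/3 - ... so its partial sums chase `target`."""
    total = 0.0
    pos, neg = 1, 2  # next unused odd (positive) and even (negative) denominators
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / pos   # add the next positive term 1/(2k-1)
            pos += 2
        else:
            total -= 1.0 / neg   # add the next negative term -1/(2k)
            neg += 2
    return total

print(rearrange_to(e))  # approaches e = 2.71828... as n_terms grows
```

Since the terms shrink to zero, the overshoot at each crossing shrinks too, and the rearranged partial sums converge to whatever target we chose.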

Let’s investigate two interesting processes that allow us to extract an answer quickly.

## The Shanks Transformation

One interesting idea is that we can take the first few partial sums of a series and decode hidden information in those numbers to quickly approximate their limit. Let’s call the partials $\{P_n\}_{n=1}^\infty$ and their limit $L$, where $P_n=\sum_{i=1}^n a_i$ and $P_n \rightarrow L$ as $n \rightarrow \infty$.

First, let’s assume all partial sums take the form $P_n=L+ar^n$. If we require $|r|<1$, the partials approach the limit as $n$ gets large. If this is the case, we can look at what happens at consecutive steps of the partials.

$P_{n-1}=L+ar^{n-1}$

$P_{n}=L+ar^{n}$

$P_{n+1}=L+ar^{n+1}$

This can easily be rearranged into the following.

$P_{n-1}-L=ar^{n-1}$

$P_{n}-L=ar^{n}$

$P_{n+1}-L=ar^{n+1}$

From here, we can divide equations to eliminate both $a$ and $r$.

$\frac{P_n-L}{P_{n-1}-L}=r=\frac{P_{n+1}-L}{P_n-L}$

Now we solve for the limit of the series.

$(P_n-L)^2=(P_{n+1}-L)(P_{n-1}-L)$

$P_n^2-2LP_n+L^2=P_{n+1}P_{n-1}-LP_{n-1}-LP_{n+1}+L^2$

$L=\frac{P_{n+1}P_{n-1}-P_n^2}{P_{n+1}-2P_n+P_{n-1}}$

This formula, however, cannot be used to compute the limit exactly, because not all partial sums take the assumed form. Instead, let us define a sequence transformation: take the sequence of partials and apply the equation above at each index. The new sequence has no term for $n=1$, since each term needs a predecessor and a successor. Call this transformed sequence $S(P_n)$. This is called a Shanks transformation. I now claim that this sequence converges faster than the original sequence $P_n$. In fact, I claim that applying it again creates another sequence $S^{(2)}(P_n)=S(S(P_n))$ that converges even faster, and that each successive transformation $S^{(m)}(P_n)$ converges faster still. If this sounds unbelievable, let’s look at an example like $1-\frac{1}{2}+\frac{1}{3}-\cdots$ Look at the following chart and graph representing the converging sequences.

As seen from the first four transformations, higher orders of transformation result in accelerated convergence. From just the first 9 partials, we were able to determine the limit to within 8 digits. That kind of accuracy would take hundreds of millions of ordinary partial sums to achieve. Note: this method really only works for convergent series because we assume $|r|<1$.
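The iterated transformation can be sketched in a few lines (the name `shanks` is my own; exact rational arithmetic avoids the catastrophic cancellation that the small denominators $P_{n+1}-2P_n+P_{n-1}$ would suffer in floating point):

```python
from fractions import Fraction
from math import log

def shanks(p):
    """One Shanks transformation of a list of partial sums."""
    return [(p[i + 1] * p[i - 1] - p[i] ** 2) / (p[i + 1] - 2 * p[i] + p[i - 1])
            for i in range(1, len(p) - 1)]

# first 9 partial sums of 1 - 1/2 + 1/3 - ... (limit ln 2)
partials, total = [], Fraction(0)
for n in range(1, 10):
    total += Fraction((-1) ** (n - 1), n)
    partials.append(total)

s = partials
while len(s) >= 3:        # 9 -> 7 -> 5 -> 3 -> 1 terms
    s = shanks(s)
print(float(s[0]), log(2))  # the iterated estimate sits very close to ln 2
```

Each pass shortens the list by two (one term lost at each end), so 9 partials admit four nested transformations ending in a single, highly accurate estimate.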

## Richardson Extrapolation

Here, I present another method of extracting information from a series. Using the same notation as earlier, this process proposes the following model for the convergence of the partials.

$P_n=L+\frac{a_1}{n}+\frac{a_2}{n^2}+\frac{a_3}{n^3}+\cdots$

It is easy to see that as $n \rightarrow \infty$, $P_n \rightarrow L$. Taking a first-order approximation, we can write the following equations for consecutive partials.

$P_{n}=L+\frac{a_1}{n}$

$P_{n+1}=L+\frac{a_1}{n+1}$

The equations are then rearranged slightly.

$n(P_n-L)=a_1$

$(n+1)(P_{n+1}-L)=a_1$

By setting these equations equal to each other, we can assign a value to the series limit $L$ based on the partials.

$n(P_n-L)=(n+1)(P_{n+1}-L)$

$nP_n-nL=nP_{n+1}+P_{n+1}-nL-L$

$L=(n+1)P_{n+1}-nP_n$

Following a similar process to the Shanks transformation, we can create another sequence out of the equation above. We call this sequence $R_1(P_n)$, as it is the first-order Richardson extrapolation. Now suppose we repeat the process with a second-order approximation. For this, we need one more partial to accommodate the second constant.

$P_n=L+\frac{a_1}{n}+\frac{a_2}{n^2}$

$P_{n+1}=L+\frac{a_1}{n+1}+\frac{a_2}{(n+1)^2}$

$P_{n+2}=L+\frac{a_1}{n+2}+\frac{a_2}{(n+2)^2}$

Clever rearrangement (namely taking twice the middle function and subtracting the other two) allows us to create the following equation for the series limit.

$L=\frac{(n+2)^2P_{n+2}-2(n+1)^2P_{n+1}+n^2P_n}{2}$

We can call the sequence formed by this equation $R_2(P_n)$. Notice that its coefficients are alternating binomial coefficients; solving for higher orders reveals that the extrapolations take a very predictable form.

$R_k(P_n)=\sum_{i=0}^k\frac{(-1)^{k-i}}{k!}\binom{k}{i}(n+i)^kP_{n+i}$

Now, just as before, I claim that the extrapolated sequence converges to the same limit faster than the original. Let’s test this on the convergent series $1+\frac{1}{4}+\frac{1}{9}+\cdots$
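The general formula for $R_k$ can be tried directly on this series, which converges to $\pi^2/6$. A small sketch (helper names are mine; exact fractions keep the large alternating coefficients from losing precision):

```python
from fractions import Fraction
from math import comb, factorial, pi

def richardson(P, n, k):
    """k-th order Richardson extrapolation R_k(P_n); uses P[n], ..., P[n+k]."""
    return sum(Fraction((-1) ** (k - i) * comb(k, i) * (n + i) ** k, factorial(k)) * P[n + i]
               for i in range(k + 1))

# partial sums P[1..5] of 1 + 1/4 + 1/9 + ...
P, total = {}, Fraction(0)
for n in range(1, 6):
    total += Fraction(1, n * n)
    P[n] = total

est = float(richardson(P, 1, 4))
print(est, pi ** 2 / 6)  # five partial sums already give several correct digits
```

For comparison, the fifth raw partial sum $P_5 \approx 1.4636$ is still off by about $0.18$, while the extrapolated value agrees with $\pi^2/6$ to roughly four digits.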

Notice that both of these methods applied only to convergent series.

For divergent series, it seems there is no way to assign a value to the sum, but in fact this notion is misguided. Divergent series are just representations of functions that are ill-defined on some intervals. For example, consider the gamma function. One representation of the function is shown below.

$\Gamma(x)=\left\{\begin{matrix} 1 & x=1\\ (x-1)\Gamma(x-1) & x\neq1 \end{matrix}\right.$

The function represented in this form cannot be extended to fractional values, but its integral representation can.

$\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}\,\textup{d}t$

To say that this function cannot be defined for fractional values of $x$ by looking at the first representation would simply be misguided. Likewise, this integral does not converge for negative values of $x$, but other representations do. In the same way, a divergent series rewritten in another form can reveal its true value.

Two caveats must be considered at this point. First, this is not to say that there are no truly divergent series: a series like $1+1+1+\cdots$ really does diverge (which we will prove later). Second, some may object that this perspective is mathematically naive, since a function's value is determined by its definition rather than by any one representation; but because this kind of math is primarily used in physics, our notion holds. If we get a divergent series out of the physical world, that is just a flaw in our process of describing the system, because there is usually an alternate convergent description. Our formulation is not necessarily the identity of the function.

From here, we will discuss three approaches to divergent summations and then hint at one of the biggest results of resummation theory.

## Euler Summation

This one is simple. Assume you have a sequence $\{a_n\}_{n=0}^{\infty}$ whose partial sums $\sum_{n=0}^N a_n$ diverge. Euler suggests forming the power series $f(x)=\sum_{n=0}^\infty a_nx^n$ and then taking the limit $\lim_{x\rightarrow 1}f(x)$, continuing $f$ analytically if needed. For example, consider the following series.

$1+2+4+8+\cdots=\sum_{n=0}^\infty2^n$

$\lim_{x\rightarrow1}\sum_{n=0}^\infty (2x)^n=\lim_{x\rightarrow 1}\frac{1}{1-2x}=-1$
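The geometric series above must be continued analytically past $|x|=\frac{1}{2}$ before sending $x\rightarrow1$, so it is hard to watch numerically. But for a series like $1-1+1-\cdots$, whose power series converges for all $|x|<1$, the limit emerges directly from the partial evaluations (a sketch; the function name is my own):

```python
def euler_sum_at(x, terms=100_000):
    """Partially evaluate f(x) = sum (-1)^n x^n, the power series of 1 - 1 + 1 - ..."""
    return sum((-x) ** n for n in range(terms))

# f(x) = 1/(1+x) inside the unit disk, so f(x) -> 1/2 as x -> 1
for x in (0.9, 0.99, 0.999):
    print(x, euler_sum_at(x))
```

The printed values creep toward $\frac{1}{2}$, the Euler sum of $1-1+1-\cdots$, matching $\lim_{x\rightarrow1}\frac{1}{1+x}$.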

## Borel Summation

Consider the following identity and its rearrangement below.

$\int_0^\infty t^xe^{-t}\,\textup{d}t=x!$

$\frac{\int_0^\infty t^xe^{-t}\,\textup{d}t}{x!}=1$

Now we can take the summation and multiply each term by $1$.

$\sum_{n=0}^\infty a_n=\sum_{n=0}^\infty a_n\frac{\int_0^\infty t^{n}e^{-t}\,\textup{d}t}{n!}$

Because we are looking at perturbation theory in application to physics, we assume we are dealing with nice functions where the sum of the integrals is the integral of the sums.

$\sum_{n=0}^\infty a_n \frac{\int_0^\infty t^{n}e^{-t}\,\textup{d}t}{n!}=\int_0^\infty e^{-t}\left(\sum_{n=0}^\infty a_n\frac{t^n}{n!}\right)\textup{d}t$

This is much more useful because the $n!$ in the denominator makes it much more likely for the sum inside the integral to converge and, as a result, for the integral to exist. Observe the following series summed with Borel summation.

$1-1+1-1+1-\cdots=\sum_{n=0}^\infty (-1)^n$

$\sum_{n=0}^\infty (-1)^n=\int_0^\infty e^{-t}\sum_{n=0}^\infty \frac{(-t)^n}{n!}\,\textup{d}t=\int_0^\infty e^{-t}\cdot e^{-t}\,\textup{d}t=\frac{1}{2}$
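This can be checked numerically: build the Borel transform $\sum a_n t^n/n!$ term by term and integrate it against $e^{-t}$ with a simple trapezoidal rule. A sketch under my own naming (for Grandi's series the transform is just $e^{-t}$, so the integral is $\int_0^\infty e^{-2t}\,\textup{d}t$):

```python
import math

def borel_sum(a, n_terms=100, t_max=30.0, steps=20_000):
    """Numerically Borel-sum sum a(n): integrate e^-t against the Borel transform."""
    def transform(t):
        total, term = 0.0, 1.0      # term tracks t^n / n!
        for n in range(n_terms):
            total += a(n) * term
            term *= t / (n + 1)
        return total
    # trapezoidal rule on [0, t_max]; the e^-t weight makes the truncated tail tiny
    h = t_max / steps
    s = 0.5 * (transform(0.0) + math.exp(-t_max) * transform(t_max))
    for k in range(1, steps):
        t = k * h
        s += math.exp(-t) * transform(t)
    return s * h

print(borel_sum(lambda n: (-1) ** n))  # close to 1/2
```

The factorials tame the terms so the inner sum converges for every $t$, which is exactly why the Borel transform is worth integrating in the first place.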

## General Summations

All these summations are great, but to convince you that these are indeed well-defined processes, allow me to lay out the general summation properties that the aforementioned processes obey (I won’t prove this in this post). In fact, these properties alone are sufficient to calculate the values of many series.

Let us define a function $\mathcal{S}[a_n]$ that takes in a sequence of numbers and spits out a value equal to their sum. Well, for this function to behave like a sum, the following must be true.

$\mathcal{S}[a_0, a_1, a_2, ...]=a_0+\mathcal{S}[a_1, a_2, a_3, ...]$

It also must be linear.

$\mathcal{S}[\alpha a_n+\beta b_n]=\alpha \mathcal{S}[a_n]+\beta \mathcal{S}[b_n]$

These properties alone allow for a vast range of results, which I will let the reader experiment with. For now, I will prove, as promised, the divergence of the series $1+1+1+\cdots$ By the first property, we know the following.

$\mathcal{S}[1,1,1,...]=1+\mathcal{S}[1,1,1,...]$

$L=1+L$

There exists no finite value of $L$ that satisfies the above equation, so it must be infinite. Hence, the series diverges.
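As a taste of what the two properties can do on their own, here is the standard evaluation of Grandi's series $1-1+1-1+\cdots$: pull off the first term with the first property, then factor $-1$ out of the tail with linearity.

$\mathcal{S}[1,-1,1,-1,...]=1+\mathcal{S}[-1,1,-1,1,...]=1-\mathcal{S}[1,-1,1,-1,...]$

$L=1-L$

$L=\frac{1}{2}$

This time a finite value does satisfy the equation, and it agrees with the Borel sum we computed earlier.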

Now all this seems amazing, but many of these methods assume we know all the terms of a summation. If we are left with only the first few terms of a sequence (which is often the case in perturbation theory), these processes fail miserably. This dilemma, however, will be resolved in the next post, which discusses Padé approximants. They act as an extension of Taylor series, spanning all rational functions instead of just polynomials, but we will discuss this idea more later.

If you want to know more or see where I learned this from, watch the lectures on Mathematical Physics by Carl Bender.