An Analysis of The Lanczos Gamma Approximation

DOCTOR OF PHILOSOPHY
in
Department of Mathematics

The University of British Columbia
Vancouver, Canada
November 2004
Abstract
This thesis is an analysis of C. Lanczos’ approximation of the classical
gamma function Γ(z +1) as given in his 1964 paper A Precision Approximation
of the Gamma Function [14]. The purposes of this study are:
(i) to explain the details of Lanczos’ paper, including proofs of all claims
made by the author;
(ii) to address the question of how best to implement the approximation
method in practice; and
(iii) to generalize the methods used in the derivation of the approximation.
where s_{w,n}(z) denotes a series of n + 1 terms and ε_{w,n}(z) a relative error to
be estimated. The real variable w is a free parameter which can be adjusted
to control the accuracy of the approximation. Lanczos’ method stands apart
from the other two in that, with w ≥ 0 fixed, as n → ∞ the series s_{w,n}(z)
converges while ε_{w,n}(z) → 0 uniformly on Re(z) > −w. Stirling’s and Spouge’s
methods do not share this property.
What is new here is a simple empirical method for bounding the relative
error |ε_{w,n}(z)| in the right half plane based on the behaviour of this function
as |z| → ∞. This method is used to produce pairs (n, w) which give formulas (1)
which, in the case of a uniformly bounded error, are more efficient than
Stirling’s and Spouge’s methods at comparable accuracy. In the n = 0 case,
a variation of Stirling’s formula is given which has an empirically determined
uniform error bound of 0.006 on Re(z) ≥ 0. Another result is a proof of the
limit formula
$$\Gamma(z+1) = 2\lim_{r\to\infty} r^{z}\left[\frac{1}{2} - e^{-1^{2}/r}\,\frac{z}{z+1} + e^{-2^{2}/r}\,\frac{z(z-1)}{(z+1)(z+2)} - \cdots\right].$$
Table of Contents

Abstract
List of Tables
Acknowledgments
Chapter 1. Introduction
  1.1 Lanczos and His Formula
  1.2 Thesis Overview
Bibliography
List of Tables

1.1 Relative decrease of a_k(r), r = 1, 4, 7
List of Figures

1.1 Cornelius Lanczos
1.2 Relative decrease of a_k(1)
1.3 Relative decrease of a_k(4)
1.4 Relative decrease of a_k(7)
1.5 Lanczos shelf, r = 20
1.6 Lanczos shelf, r = 40
1.7 Lanczos shelf, r = 60
1.8 f_r(θ), −π/2 ≤ θ ≤ π/2
4.1 v(x)/e, −1 ≤ x ≤ 1
4.2 √2 f_{E,r}(x)/e^r, −1 ≤ x ≤ 1, r = 0, 1, 2, 4
11.1 ε_{r,n}(z), −0.5 < r < 3, −0.5 < z < 3
11.2 Linear fit of (n, −log M_{r(n),n}), n = 0, …, 60
Acknowledgments
I would like to thank all those who have assisted, guided and sup-
ported me in my studies leading to this thesis. In particular, I wish
to thank my supervisor Dr. Bill Casselman for his patience, guidance
and encouragement, my committee members Dr. David Boyd and Dr.
Richard Froese, and Ms. Lee Yupitun for her assistance as I found my
way through the administrative maze of graduate school. I would also
like to thank Professor Wesley Doggett of North Carolina State University
for his kind assistance in obtaining permission to reproduce the
photo of C. Lanczos in Chapter 1.
The online computing community has been a great help to me also
in making available on the web many useful tools which helped in my
research. In particular, MPJAVA, a multiple precision floating point
computation package in Java produced by The HARPOON Project
at the University of North Carolina was used for many of the high
precision numerical investigations. In addition, the online spreadsheet
MathSheet developed by Dr. Bill Casselman and Dr. David Austin was
very helpful in producing many of the PostScript plots in the thesis.
Finally, I would like to thank my family for their support and en-
couragement: my parents Anne and Charles, and especially my wife
Jacqueline for her encouragement and many years of understanding
during the course of my studies.
Chapter 1
Introduction
The focus of this study is the little-known Lanczos approximation [14]
for computing the gamma function Γ(z + 1) with complex argument z.
The aim is to address the following questions:
(ii) How are the parameters of the method best chosen to approximate
Γ(z + 1)?
where
$$S_r(z) = \frac{1}{2}a_0(r) + a_1(r)\,\frac{z}{z+1} + a_2(r)\,\frac{z(z-1)}{(z+1)(z+2)} + \cdots. \tag{1.2}$$
as to why it should be true, nor how one should go about selecting truncation
orders of the series S_r(z) or values of the parameter r. Despite
this mention, few of the interesting properties of the method have been
explored. For example, unlike divergent asymptotic formulas such as
Stirling’s series, the series S_r(z) converges. Yet, in a manner similar to
a divergent series, the coefficients a_k(r) initially decrease quite rapidly,
followed by slower decay as k increases. Furthermore, with increasing
r, the region of convergence of the series extends further and further into
the left of the complex plane, including points z on the negative real axis
which are not negative integers.
Equation (1.1) represents an infinite family of formulas, one for each value of r, and the
choice of this parameter is quite crucial in determining the accuracy
of the formula when the series S_r(z) is truncated at a finite number
of terms. It is precisely the peculiar r dependency of the coefficients
a_k(r), and hence of S_r(z) itself, which makes Lanczos’ method interesting.
To get a sense of the r parameter’s influence on S_r(z) at this early
stage, consider the ratios |a_{k+1}(r)/a_k(r)| for the sample values r = 1, 4, 7
in Table 1.1. Observe how for r = 1, the relative size of successive
[Figures 1.2–1.4: relative decrease of a_k(1), a_k(4), a_k(7), plotted for 0 ≤ k ≤ 29.]

[Figures 1.5–1.7: the Lanczos shelf for r = 20, 40, 60, plotted for 0 ≤ k ≤ 100.]
The thesis is organized into eleven chapters, which fall into six more
or less distinct categories. These are
Stirling’s asymptotic series forms the basis for most computational
algorithms for the gamma function, and much has been written on its
implementation. An example of the use of Stirling’s series is given
with a short discussion of error bounds.
The second computational method noted is the relatively recent
formula of Spouge [27]. This method is similar in form to Lanczos’
formula but differs greatly in its origin. The formula takes the form
$$\Gamma(z+1) = (z+a)^{z+1/2}\, e^{-(z+a)}\,(2\pi)^{1/2}\left[c_0 + \sum_{k=1}^{N}\frac{c_k}{z+k} + \epsilon(z)\right]$$
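As a concrete illustration of Spouge’s method (a sketch of ours, not code from the thesis), the coefficients have the standard closed form c_k = ((−1)^{k−1}/(k−1)!) (a − k)^{k−1/2} e^{a−k} with N = ⌈a⌉ − 1. The value a = 12.5 below is the one matching the coefficient table reproduced in Chapter 2, and the function name is our own:

```python
import math

def spouge_gamma(z, a=12.5):
    """Gamma(z+1) by Spouge's formula with parameter a and N = ceil(a) - 1
    terms.  The factor (2*pi)^(1/2) is absorbed into the constant term, so
    the sum starts at sqrt(2*pi); dividing each c_k by sqrt(2*pi) recovers
    the normalization c_0 = 1 used in the coefficient table of Chapter 2."""
    N = math.ceil(a) - 1
    s = math.sqrt(2.0 * math.pi)                    # the c_0 term
    factorial = 1.0                                 # running (k-1)!
    for k in range(1, N + 1):
        if k > 1:
            factorial *= k - 1
        c_k = ((-1) ** (k - 1) / factorial) * (a - k) ** (k - 0.5) * math.exp(a - k)
        s += c_k / (z + k)
    return (z + a) ** (z + 0.5) * math.exp(-(z + a)) * s
```

With a = 12.5 the relative error bound quoted later in Chapter 2 is of order 10^−11 on Re(z) ≥ 0, so in double precision the result is limited mainly by cancellation in the alternating sum.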
$$\times \int_{-\pi/2}^{\pi/2} \cos^{2z}\theta\left[\frac{\sqrt{2}\, v(\theta)^{r}\sin\theta}{\log v(\theta)}\right] d\theta. \tag{1.3}$$
Integrating this series against cos^{2z}θ term by term then gives rise to
the series S_r(z) of (1.2) with the help of a handy trigonometric integral.
The derivation of (1.1) can also be carried out using Chebyshev series
instead of Fourier series (really one and the same), which seems
closer in spirit to methods seen later for computing the coefficients
a_k(r). In fact, both the Fourier and Chebyshev methods can be generalized
to a simple inner product using Hilbert space techniques, and
this is noted as well.
From the theory of Fourier series, the smoothness of fE,r (θ) governs
the rate of asymptotic decrease of the coefficients ak (r), which in turn
determines the region of convergence of the expansion. This funda-
mental relationship motivates a detailed examination of the properties
of v(θ), fr (θ) and fE,r (θ) in Chapter 4. Practical formulas for these
functions are found in terms of Lambert W functions, and a few rep-
resentative graphs are plotted for several values of r.
Chapter 5 addresses the convergence of the series Sr (z). Once the
growth order of the ak (r) is established, a bound on the functions
$$H_k(z) = \begin{cases} 1 & \text{if } k = 0,\\[4pt] \dfrac{z(z-1)\cdots(z-k+1)}{(z+1)(z+2)\cdots(z+k)} & \text{if } k \ge 1 \end{cases} \tag{1.4}$$
= cos(2kθ).

Then (1.5) becomes
$$a_k(r) = \frac{2}{\pi}\int_{-\pi/2}^{\pi/2} f_{E,r}(\theta)\sum_{j=0}^{k} C_{2j,2k}\cos^{2j}\theta\, d\theta = \frac{2}{\pi}\sum_{j=0}^{k} C_{2j,2k}\, F_r(j)\,,$$
Lanczos concludes his paper with the following formula which he claims
is the limiting behaviour of (1.1), and which defines the gamma function
in the entire complex plane away from the negative integers:
$$\Gamma(z+1) = 2\lim_{r\to\infty} r^{z}\left[\frac{1}{2} + \sum_{k=1}^{\infty}(-1)^{k} e^{-k^{2}/r}\, H_k(z)\right]. \tag{1.6}$$
The Hk (z) functions here are as defined by (1.4). He gives no proof, nor
any explanation of why this should be true. In Chapter 7 the limit (1.6)
is proved in detail, a result termed the “Lanczos Limit Formula” in this
work.
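A quick numerical check of (1.6) is straightforward (this sketch is ours, not part of the thesis’ proof): evaluate the bracketed expression at a large fixed r, generating the H_k(z) by the recursion H_k(z) = H_{k−1}(z)(z − k + 1)/(z + k):

```python
import math

def lanczos_limit_approx(z, r, kmax=2000):
    """Approximate Gamma(z+1) by the bracketed expression of (1.6) at a
    large fixed r; the true value is only reached in the limit r -> oo."""
    s = 0.5
    H = 1.0                                    # H_0(z) = 1
    for k in range(1, kmax + 1):
        H *= (z - k + 1) / (z + k)             # H_k from H_{k-1}
        if H == 0.0:                           # series terminates for integer z
            break
        s += (-1) ** k * math.exp(-k * k / r) * H
    return 2.0 * r ** z * s
```

At r = 10^5 this already matches Γ(2) = 1 and Γ(3) = 2 to a few parts in 10^5, consistent with an O(1/r) approach to the limit.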
The proof is motivated by the plots in Figure 1.8 of the function
fr (θ) from equation (1.3). Notice that as r increases, the area un-
[Figure 1.8: f_r(θ), −π/2 ≤ θ ≤ π/2, for r = 0, 1, 2.]
A comparison of (1.6) with the main formula (1.1) suggests that for
large r,
$$\sqrt{\frac{\pi r}{2e}}\;\frac{a_k(r)}{e^{r}} \sim (-1)^{k}\, e^{-k^{2}/r}, \tag{1.7}$$
and plots of these coefficients for various r values support this relationship.
From the smoothness analysis of f_r(θ) it follows that a_k(r) =
O(k^{−⌈2r⌉}). Yet, (1.7) suggests a much more rapid decrease of the coefficients
for large fixed r and increasing k. Indeed, in his paper, Lanczos
states
Error Discussion
Following Lanczos, the function ε_{r,n}(z) is considered the relative error,
which differs slightly from the standard definition, but which matters
little when it comes to bounding the error. Lanczos gives, for Re(z) ≥ 0,
uniform bounds on |ε_{r,n}(z)| for seven combinations of n and r values
but does not provide precise details as to how he obtains these figures.
Attempts to reproduce his estimates based on his description were not
successful, and no examination of his estimates appears elsewhere in
the literature. Lanczos’ figures do appear correct, however, based on
the new estimation method which follows.
In order to bound |ε_{r,n}(z)|, observe that ε_{r,n}(z) is an analytic function
on Re(z) ≥ 0. It turns out that
$$\lim_{|z|\to\infty}\varepsilon_{r,n}(z) = 1 - \frac{a_0(r)}{2} - \sum_{k=1}^{n} a_k(r)\,,$$
$$F(z) = \sqrt{\pi}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\left[\frac{1}{2}a_0 + a_1\frac{z}{z+1} + a_2\frac{z(z-1)}{(z+1)(z+2)} + \cdots\right].$$
The a_k in the series are the Fourier coefficients of g(θ), and these are
given by
$$a_k = \frac{2}{\pi}\sum_{j=0}^{k} C_{2j,2k}\, F(j)\,,$$
Chapter 2
A Primer on the Gamma Function
This chapter gives a brief overview of the history of the gamma func-
tion, followed by the definition which serves as the starting point of
Lanczos’ work. A summary of well known results and identities involv-
ing the gamma function is then given, concluding with a survey of two
computational methods with examples: Stirling’s series and Spouge’s
method.
The introductory material dealing with the definition and resulting
identities is standard and may be safely skipped by readers already
familiar with the gamma function. The computational aspects of the
function, on the other hand, may be less familiar and readers may
therefore wish to include the material starting with Section 2.5.
Once some familiarity with this new object is acquired (and students
are convinced that 0! should indeed equal 1), it is generally applied to
counting arguments in combinatorics and probability. This is the end
of the line for some students. Others continue on to study calculus,
where n! is again encountered in the freshman year, typically in the
result
$$\frac{d^{n}}{dx^{n}}\, x^{n} = n!$$
for n a non-negative integer. Closely related to this result is the ap-
pearance of n! in Taylor’s formula. Some are asked to prove
$$n! = \int_0^{\infty} t^{n} e^{-t}\, dt \tag{2.1}$$
as homework, while a fortunate few learn how (2.1) “makes sense” for
non-integer n. Some even go so far as to prove
$$n! \sim \sqrt{2\pi n}\; n^{n} e^{-n} \quad\text{as } n\to\infty\,.$$
far, a precise definition of the function, and notation for it (save the
n! used) has been avoided. The gamma function is a generalization of
n!, but the question remains: what definition generalizes the factorial
function in the most natural way?
The product (2.2) converges for any x not a negative integer. The
integral (2.3) converges for real x > −1. The substitution u = − log t
puts (2.3) into the more familiar form
$$x! = \int_0^{\infty} u^{x} e^{-u}\, du\,.$$
Lanczos was not the only voice of objection. Edwards, in his well-known
treatise on the Riemann zeta function [8], reverts to Gauss’
notation Π(s) = s!, stating:
sense, though Mellin was not born until some twenty one years after
Legendre’s death. An interesting recent investigation into the history
of the Γ notation appears in the paper by Gronau [10].
The question remains, are Euler’s representations (2.2) and (2.3)
the “natural” generalizations of the factorial? These expressions are
equivalent for x > −1 (see [16]), so consider the integral form (2.3),
or equivalently, (2.4). Some compelling evidence in support of (2.4)
comes in the form of the complete characterization of Γ(x) by the Bohr-
Mollerup theorem (see [23]):
Theorem 2.1. Suppose f : (0, ∞) → (0, ∞) is such that f (1) = 1,
f (x + 1) = xf (x), and f is log-convex. Then f = Γ.
2.3 Definition of Γ
As noted, there are several equivalent ways to define the gamma func-
tion. Among these, the one which served as the starting point for
Lanczos’ paper [14] will be used:
Definition 2.1. For z ∈ C with Re(z) > −1, the gamma function is
defined by the integral
$$\Gamma(z+1) = \int_0^{\infty} t^{z} e^{-t}\, dt\,. \tag{2.5}$$
(iii) For Re(z) > 0 and any integer n ≥ 0, the fundamental recursion
Γ(z + 1) = zΓ(z) implies
$$\Gamma(z) = \frac{\Gamma(z+n+1)}{\prod_{k=0}^{n}(z+k)}\,. \tag{2.6}$$
$$\lim_{z\to -n}(z+n)\Gamma(z) = \lim_{z\to -n}(z+n)\,\frac{\Gamma(z+n+1)}{\prod_{k=0}^{n}(z+k)} = \frac{\Gamma(1)}{\prod_{k=0}^{n-1}(-n+k)} = \frac{1}{\prod_{k=0}^{n-1}(-n+k)} = \frac{(-1)^{n}}{n!}\,.$$
[Figure: Γ(z + 1) for real z, −2 ≤ z ≤ 2, −3 ≤ Γ(z + 1) ≤ 3.]
[Figure: Γ(z + 1) over the complex plane, −2 ≤ x ≤ 2, −2 ≤ y ≤ 2.]
The function B_{2n}(x) is the 2n-th Bernoulli polynomial, defined over R by
periodic extension of its values on [0, 1].
By judiciously selecting n and N the absolute error in log Γ(z + 1),
that is, the absolute value of the integral term in (2.11), can be made as
small as desired, and hence the same can be achieved with the relative
error of Γ(z + 1) itself. Equation (2.11) is valid in the so-called slit
plane C \ {t ∈ R | t ≤ −N}.
Equation (2.11) with N = 0 is derived using Euler-Maclaurin sum-
mation. Alternatively, again with N = 0, begin with the formula of
Binet obtained from an application of Plana’s theorem,
$$\log\Gamma(z+1) = (z+1/2)\log z - z + \frac{1}{2}\log 2\pi + 2\int_0^{\infty}\frac{\arctan(t/z)}{e^{2\pi t}-1}\, dt\,,$$
and expand arctan (t/z) in the last term into a Taylor polynomial
with remainder. The resulting series of integrals produce the terms
of (2.11) containing the Bernoulli numbers [30, pp.251–253]. The gen-
eral form (2.11) with N ≥ 1 is obtained by replacing z with z + N and
applying the fundamental recursion.
Exponentiation of Stirling’s series (with N = 0) yields Stirling’s
asymptotic formula for Γ(z + 1) itself:
$$\Gamma(z+1) = e^{-z}\, z^{z+1/2}\,(2\pi)^{1/2}\left[1 + \frac{1}{12z} + \frac{1}{288z^{2}} + \cdots\right]. \tag{2.12}$$
Equation (2.12) also follows from applying Laplace’s method (or equiva-
lently, the saddle point method) to (2.5). Truncating the series in (2.12)
after the constant term yields the familiar Stirling’s formula noted in
the introduction:
$$\Gamma(z+1) \sim \sqrt{2\pi z}\; z^{z} e^{-z} \quad\text{as } |z|\to\infty\,. \tag{2.13}$$
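The first few terms of (2.12) are easy to try directly (a sketch of ours; the bracketed series is truncated after the 1/288z² term, so accuracy is limited but improves rapidly with |z|):

```python
import math

def stirling_gamma(z):
    """Gamma(z+1) from the first three terms of Stirling's series (2.12)."""
    series = 1.0 + 1.0 / (12.0 * z) + 1.0 / (288.0 * z * z)
    return math.exp(-z) * z ** (z + 0.5) * math.sqrt(2.0 * math.pi) * series
```

The truncation error is governed by the first omitted term, of order 1/z³, so for z = 6.5 one expects roughly five correct digits, and more as z grows.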
Theorem 2.3. Let θ = arg (z + N), where −π < θ < π, and denote
the absolute error by
$$E_{N,n}(z) = \int_0^{\infty}\frac{B_{2n}(x)}{2n\,(z+N+x)^{2n}}\, dx\,.$$
Then
$$|E_{N,n}(z)| \le \left(\frac{1}{\cos(\theta/2)}\right)^{2n+2}\frac{|B_{2n+2}|}{(2n+2)(2n+1)\,|z+N|^{2n+1}}\,. \tag{2.15}$$
Solution: For convenience, let EN,n denote the integral error term
EN,n (6 + 13i) in this example. The first task is to translate the absolute
so that
$$G = \frac{G_{N,n}}{P_N}\, e^{E_{N,n}}\,,$$
and the requirement is
$$\left|\frac{G_{N,n}}{P_N}\, e^{E_{N,n}} - \frac{G_{N,n}}{P_N}\right| < \varepsilon\,,$$
from which
$$|E_{N,n}|\left(1 + \frac{|E_{N,n}|}{2!} + \frac{|E_{N,n}|^{2}}{3!} + \cdots\right) < \frac{\varepsilon}{|G_{N,n}/P_N|} \approx 5\times 10^{-12}\,.$$
n N =0 N =1 N =2 N =3 N =4
1 1.9 × 10−6 1.6 × 10−6 1.3 × 10−6 1.1 × 10−6 9.7 × 10−7
2 3.7 × 10−9 2.8 × 10−9 2.2 × 10−9 1.7 × 10−9 1.3 × 10−9
3 1.9 × 10−11 1.3 × 10−11 9.1 × 10−12 6.4 × 10−12 4.4 × 10−12
4 1.9 × 10−13 1.2 × 10−13 7.3 × 10−14 4.6 × 10−14 2.9 × 10−14
5 2.9 × 10−15 1.6 × 10−15 9.3 × 10−16 5.3 × 10−16 3.1 × 10−16
Using the symbolic computation package Maple with forty digit precision,
the absolute error in the estimate is found to be |Γ(7 + 13i) − G_{0,4}| ≐
2.5 × 10^−15, which is well within the prescribed bound.
Observe from Table 2.1 that N = 4 with n = 3 would also have
given the desired accuracy, in which case only n = 3 terms of the series
would be needed.
The reader has no doubt noticed the ad-hoc manner in which n and
N were selected in this example. The question of how best to choose n
and N given z and absolute error ǫ is not an easy one, and will not be
addressed here.
For each N, what is the best accuracy that can be achieved in Exam-
ple 2.1 using the error estimate in Stirling’s Series (2.15)? Throughout
this section let |UN,n | = |UN,n (6 + 13i)|. To find the minimum value of
|UN,n |, with N fixed, it is sufficient to determine the n value at which
the sequence of |UN,n | changes from decreasing to increasing, which cor-
responds to the first n such that |UN,n /UN,n+1 | ≤ 1. Using the explicit
form of the Bernoulli numbers (2.14), the task then is to find the least
n such that
$$\frac{(2\pi)^{2}\,(z+N)^{2}\cos^{2}(\theta/2)}{(2n+2)(2n+1)}\cdot\frac{\zeta(2n+2)}{\zeta(2n+4)} \le 1\,.$$
That is, we want the least n satisfying
$$|\pi(z+N)\cos(\theta/2)| \le n\sqrt{\left(1 + \frac{3}{2n} + \frac{1}{2n^{2}}\right)\frac{\zeta(2n+4)}{\zeta(2n+2)}}\,.$$
The square root term is at most √14 π/7 when n = 1 and decreases to
1 rapidly, so that if n is chosen
$$n \approx \left\lceil \pi\,|z+N|\cos(\theta/2)\right\rceil \tag{2.16}$$
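Rule (2.16) is simple enough to compute directly (a sketch; the helper name is ours). For the running example z = 6 + 13i with N = 0 it suggests stopping near n = 38:

```python
import cmath
import math

def stirling_optimal_n(z, N=0):
    """Truncation order from (2.16): the n at which the terms of
    Stirling's series for Gamma(z+N+1) stop decreasing in magnitude."""
    w = z + N
    theta = cmath.phase(w)          # theta = arg(z + N), -pi < theta < pi
    return math.ceil(math.pi * abs(w) * math.cos(theta / 2.0))
```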
and we have
$$|U_{N,n}| \le \frac{|B_{2n+2}|}{(2n+2)(2n+1)\,N^{2n+1}} \approx \frac{2\,(2n)!}{(2\pi)^{2n+2}\, N^{2n+1}} \quad\text{since } \zeta(2n+2)\approx 1$$
$$\approx \frac{2\sqrt{4\pi n}\,(2n)^{2n} e^{-2n}}{(2\pi)^{2n+2}\, N^{2n+1}} \quad\text{from (2.13)}\qquad = \frac{e^{-2\pi N}}{\pi\sqrt{N}} \quad\text{upon setting } n \approx \pi N\,.$$
Spouge’s formula has the very simple relative error bound
$$|\varepsilon_S(a,z)| = \left|\frac{\epsilon(z)}{\Gamma(z+1)\,(z+a)^{-(z+1/2)}\, e^{z+a}\,(2\pi)^{-1/2}}\right| < \frac{\sqrt{a}}{(2\pi)^{a+1/2}}\cdot\frac{1}{\mathrm{Re}(z+a)}\,,$$
$$\approx \frac{10^{-12}\, e^{6+13i}}{\sqrt{2\pi}\,(6+13i)^{6+13i+1/2}} \quad\text{by Stirling’s formula}\qquad \doteq 1.3\times 10^{-11}\,.$$
k    c_k(a)
0    +1.0000000000000 × 10^0
1    +1.3355050294248 × 10^5
2    −4.9293093529936 × 10^5
3    +7.4128747369761 × 10^5
4    −5.8509737760400 × 10^5
5    +2.6042527033039 × 10^5
6    −6.5413353396114 × 10^4
7    +8.8014596350842 × 10^3
8    −5.6480502412898 × 10^2
9    +1.3803798339181 × 10^1
10   −8.0781761698951 × 10^−2
11   +3.4797414457425 × 10^−5
12   −5.6892712275042 × 10^−12
fractions,
$$\frac{1}{2}a_0(r) + a_1(r)\frac{z}{z+1} + \cdots + a_N(r)\frac{z\cdots(z-N+1)}{(z+1)\cdots(z+N)} + \epsilon(z) = b_0(r) + \sum_{k=1}^{N}\frac{b_k(r)}{z+k} + \epsilon(z)\,.$$
Comparing this with (2.17), the b_k(r) obtained from truncating the
Lanczos series are the approximate residues of Γ(z + 1)(z + a)^{−(z+1/2)} e^{z+a} (2π)^{−1/2}
at z = −k, and the larger N becomes, the better the approximation.
Chapter 3
The Lanczos Formula
$$\begin{aligned}
&= (z+r+1/2)^{z+1/2}\, e^{-(z+r+1/2)}\,\sqrt{2}\int_0^{e}[v(1-\log v)]^{z-1/2}\, v^{r}\, dv &&\text{(step I)}\\[4pt]
&= (z+r+1/2)^{z+1/2}\, e^{-(z+r+1/2)}\,\sqrt{2}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\left[\frac{\sqrt{2}\, v(\theta)^{r}\sin\theta}{\log v(\theta)}\right] d\theta &&\text{(step II)}\\[4pt]
&= (z+r+1/2)^{z+1/2}\, e^{-(z+r+1/2)}\,\sqrt{2}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\left[\frac{a_0(r)}{2} + \sum_{k=1}^{\infty} a_k(r)\cos(2k\theta)\right] d\theta\,. &&\text{(step III)}
\end{aligned}$$
The series in Equation (1.1) then follows from the last integral upon
integrating term by term.
The derivation is first retraced more or less along the same lines as
Lanczos uses in his paper [14]. This approach uses Fourier series tech-
$$= \frac{1}{(z+1)_{k}\,(z+1)_{-k}} = \frac{z\cdots(z-k+1)}{(z+1)\cdots(z+k)}\,.$$
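The equality of the two forms of H_k(z) is easy to check numerically (a sketch of ours, with our own function names, using the gamma-function form of H_k(z) that appears again in Chapter 5):

```python
import math

def H_product(k, z):
    """H_k(z) = z(z-1)...(z-k+1) / ((z+1)...(z+k)), with H_0(z) = 1."""
    num, den = 1.0, 1.0
    for i in range(k):
        num *= z - i
        den *= z + 1 + i
    return num / den

def H_gamma(k, z):
    """The same H_k(z), written Gamma(z+1)^2 / (Gamma(z+1+k) Gamma(z+1-k))."""
    return math.gamma(z + 1) ** 2 / (math.gamma(z + 1 + k) * math.gamma(z + 1 - k))

# the two forms agree wherever both are defined, e.g. at z = 2.3, k = 0..3
```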
the integral a term similar in form to the first factor of Stirling’s series
(2.12). The integral which remains is then precisely the form required
to perform a clever series expansion.
In the exposition the following conventions have been adopted:
Proof of Lemma 3.1: For fixed z with Re(z) > −1 and α > 0 real,
replacing t with αt in (3.2) gives
$$\Gamma(z+1) = \alpha^{z+1}\int_0^{\infty} t^{z} e^{-\alpha t}\, dt\,.$$
axis. Hence the integral must equal Γ(z + 1) throughout the right half
α-plane.
Here r ≥ 0 is a free parameter which plays a key role later. Now reduce
the bounds of integration to a finite case via the change of variable
t = 1 − log v, dt = (−1/v)dv:
$$\begin{aligned}
\Gamma(z+1) &= (z+r+1)^{z+1}\int_{e}^{0}(1-\log v)^{z}\, e^{-(z+r+1)(1-\log v)}\,\frac{-1}{v}\, dv\\
&= (z+r+1)^{z+1}\, e^{-(z+r+1)}\int_0^{e}(1-\log v)^{z}\, v^{z}\, v^{r}\, v\,\frac{1}{v}\, dv\\
&= (z+r+1)^{z+1}\, e^{-(z+r+1)}\int_0^{e}[v(1-\log v)]^{z}\, v^{r}\, dv\,. \tag{3.4}
\end{aligned}$$
The integrand in this last expression is O(v^{σ+r−ε}) for every ε > 0 as
v → 0, while it is O(|v − e|^{σ}) as v → e. Equation (3.4) is therefore
valid for Re(z) > −1.³
The substitution t = 1 − log v appears to be the key to the Lanczos
method in the sense that it puts all the right pieces in all the right
places for the series expansion to come. It has the effect of peeling off
³ There is an error in [17, p.30, Eq.(1)] at this point concerning the domain
of convergence of the formula. There the author makes the replacement
z → z − 1/2 and states (using σ to signify what is here denoted r):
However, the integral F(z) diverges with, for example, z = −3/4 and a
choice of σ = 5/4. To see this, note that v(1 − log v) ≤ (e − v) on [0, e], so
that [v/(e − v)]^{5/4} ≤ [v(1 − log v)]^{−5/4} v^{5/4}, whence
$$\int_0^{e}[v/(e-v)]^{5/4}\, dv \le \int_0^{e}[v(1-\log v)]^{-5/4}\, v^{5/4}\, dv\,.$$
But the left hand side ∫_0^e [v/(e − v)]^{5/4} dv = ∞.
[Figure: v(1 − log v) on 0 ≤ v ≤ e; the function rises from 0 to a maximum of 1 at v = 1 and returns to 0 at v = e.]
Now denote by fr (θ) the term in square brackets in (3.6) and by fE,r (θ)
its even part [fr (θ) + fr (−θ)] /2. Noting that the odd part of the inte-
grand integrates out to zero finally puts (3.6) in the form
$$\Gamma(z+1/2) = P_r(z)\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\; f_{E,r}(\theta)\, d\theta \tag{3.7}$$
for Re(z) > −1/2 and r ≥ 0. Equation (3.7) is the starting point for
the series expansion in (3.1).
⁴ Shifting the argument to z − 1/2 at the outset, that is, starting the derivation
with
$$\Gamma(z+1/2) = \int_0^{\infty} t^{z-1/2} e^{-t}\, dt\,, \qquad \mathrm{Re}(z) > -1/2\,,$$
in place of equation (3.2), would seem more logical, considering the reason
for the shift is yet to come anyway. For some reason Lanczos does not do
this.
$$= {\sum_{k=0}^{\infty}}{}'\; a_k(r)\cos(2k\theta) \tag{3.8}$$
Picking up where we left off with (3.7), and using (3.8) to replace
$$= P_r(z)\,\sqrt{\pi}\;\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\,{\sum_{k=0}^{\infty}}{}'\; a_k(r)\, H_k(z)\,, \tag{3.10}$$
where again Re(z) > −1/2 and r ≥ 0. We will see later that convergence
of this series can be extended to the left of Re(z) = −1/2.
Once again let P_r(z) = √2 (z + r + 1/2)^{z+1/2} exp[−(z + r + 1/2)], replace
z with z − 1/2, and denote by f_{E,r}(x) the even part of √2 x v^r / log v,
giving
$$\Gamma(z+1/2) = P_r(z)\int_{-1}^{1}(1-x^{2})^{z}\, f_{E,r}(x)\,\frac{dx}{\sqrt{1-x^{2}}}\,. \tag{3.14}$$
The motivation this time for the replacement z → z − 1/2 is a bit clearer:
with the introduction of the weight function 1/√(1 − x²), the integral
in (3.14) reveals itself as an inner product on L²_{[−1,1]}(dx/√(1 − x²)), which
has the Chebyshev polynomials as orthogonal basis.
Expanding f_{E,r}(x) in a series of even Chebyshev polynomials (the
justification for which will follow) produces
$$f_{E,r}(x) = {\sum_{k=0}^{\infty}}{}'\; c_k(r)\, T_{2k}(x) \tag{3.15}$$
so that
$$\Gamma(z+1/2) = P_r(z)\int_{-1}^{1}(1-x^{2})^{z}\,{\sum_{k=0}^{\infty}}{}'\; c_k(r)\, T_{2k}(x)\,\frac{dx}{\sqrt{1-x^{2}}}\,,$$
In the present case, the Fourier coefficients of p(θ) = cos^{2z}(θ) are
$$\hat p_k = \frac{2}{\pi}\int_{-\pi/2}^{\pi/2}\cos^{2z}(\theta)\cos(2k\theta)\, d\theta = \frac{2}{\sqrt{\pi}}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\, H_k(z)\,,$$
from Lemma 3.2, while those of f_{E,r} are the a_k(r), assuming f_{E,r} ∈ L²[−π/2, π/2].
$$= P_r(z)\,\frac{\pi}{2}\,{\sum_{k=0}^{\infty}}{}'\; \hat p_k\, a_k(r) = P_r(z)\,\sqrt{\pi}\;\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\,{\sum_{k=0}^{\infty}}{}'\; a_k(r)\, H_k(z)\,,$$
while for k ≥ 1,
$$a_k(r) = \frac{2}{\pi}\sum_{j=0}^{k} C_{2j,2k}\, F_r(j)\,. \tag{3.18}$$
Chapter 4
The Functions v and f_{E,r}
(i) In the Chebyshev setting, f_{E,r}(x) ∈ C^{⌊r⌋}[−1, 1] and is of bounded
variation, so that the expansion (3.15) is justified.
(ii) In the Fourier setting, f_{E,r}(θ) ∈ C^{⌊2r⌋}[−π/2, π/2] and is of bounded
variation, thus justifying the expansion (3.8).
(iii) f_{E,r}^{(n)} ∈ L¹[−π/2, π/2], where n = ⌈2r⌉.
$$\frac{x^{2}-1}{e} = w\, e^{w}\,,$$
whence
$$w = W\!\left(\frac{x^{2}-1}{e}\right)$$
so that
$$\frac{v}{e} = \exp\left[W\!\left(\frac{x^{2}-1}{e}\right)\right] = \frac{(x^{2}-1)/e}{W\!\left(\dfrac{x^{2}-1}{e}\right)}\,. \tag{4.2}$$
For −1 < x < 0 we have 0 < v < 1, which corresponds to the real branch
W_{−1} of the Lambert W function. For 0 < x < 1 we have 1 < v < e, which
corresponds to the branch W_0. Using the Heaviside function H(x), and
letting y(x) = (x² − 1)/e, v can thus be expressed as the single formula
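Since the Python standard library has no Lambert W, a small real-branch implementation suffices to evaluate v(x) from (4.2) numerically (a sketch of ours using Halley’s iteration; the function names are our own). The computed v can be checked against the defining relation v(1 − log v) = 1 − x² from Chapter 3:

```python
import math

def lambert_w(y, branch=0):
    """Real Lambert W: solves w * exp(w) = y by Halley's iteration.
    branch 0 gives W_0 (w >= -1); branch -1 gives W_{-1} (w <= -1),
    defined for -1/e <= y < 0."""
    sign = 1.0 if branch == 0 else -1.0
    w = -1.0 + sign * math.sqrt(2.0 * (1.0 + math.e * y))  # series about y = -1/e
    for _ in range(100):
        ew = math.exp(w)
        f = w * ew - y
        dw = f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * (w + 1.0)))
        w -= dw
        if abs(dw) < 1e-14 * (1.0 + abs(w)):
            break
    return w

def v_of_x(x):
    """v(x) from (4.2): v = e * exp(W(y)) with y = (x^2 - 1)/e, using
    W_{-1} for -1 < x < 0 and W_0 for 0 < x < 1."""
    if x == 0.0:
        return 1.0
    y = (x * x - 1.0) / math.e
    return math.e * math.exp(lambert_w(y, 0 if x > 0 else -1))
```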
[Figure 4.1: v(x)/e, −1 ≤ x ≤ 1.]
while its even part f_{E,r}(x) = [f_r(x) + f_r(−x)]/2 becomes
$$\frac{f_{E,r}(x)}{e^{r}} = \frac{|x|}{\sqrt{2}}\left[\frac{\left(\dfrac{y(x)}{W_0(y(x))}\right)^{r}}{1+W_0(y(x))} - \frac{\left(\dfrac{y(x)}{W_{-1}(y(x))}\right)^{r}}{1+W_{-1}(y(x))}\right]. \tag{4.4}$$
Plots of √2 f_{E,r}(x)/e^r are shown in Figure 4.2 for various values of
r. Notice the value of the function at the endpoints is one for the four
values of r plotted. It is clear from the definition f_r(x) = √2 x v^r / log v
that √2 f_{E,r}(±1)/e^r = 1 if r ≥ 0 and is +∞ otherwise. Also notice
the extreme behaviour of the derivative of f_{E,r} at the end points for
r = 0. Lanczos notes that the parameter r serves to smooth out this
singularity and hence improve the convergence of the Fourier series of
f_{E,r}.
[Figure 4.2: √2 f_{E,r}(x)/e^r, −1 ≤ x ≤ 1, r = 0, 1, 2, 4.]
$$\frac{d}{dx}\, v(x) = \frac{2x}{\log v}\,. \tag{4.5}$$
In this section it is shown that v r , fr (and hence fE,r ) have ⌊r⌋ contin-
uous derivatives on [−1, 1].
Smoothness on (−1, 1]
where the series on the right hand side converges absolutely and uni-
formly for |v − 1| < 1. Denote this right hand side √ by h(v) and set
φ(v, x) = x−h(v). Now φ(1, 0) = 0, ∂φ/∂v|(1,0) = −1/ 2, and φ ∈ C ∞
at (1, 0). Therefore by the implicit function theorem there are open sets
U ⊂ R2 and W ⊂ R and a function g : W → R such that g ∈ C ∞ ,
(1, 0) ∈ U, 0 ∈ W , g(0) = 1, and φ(g(x), x) = 0 for every x ∈ W (see
[19, p.103]). Thus v = g(x) is smooth about x = 0.
Thus v is smooth as a function of x on (−1, 1], from which v r and
fr are as well.
Smoothness at x = −1
$$\frac{d}{dx}\left[x^{j}\, v^{r}(\log v)^{k}\right] = j x^{j-1} v^{r}(\log v)^{k} + 2r x^{j+1} v^{r-1}(\log v)^{k-1} + 2k x^{j+1} v^{r-1}(\log v)^{k-2}\,,$$
which can be abstracted as
$$G(j,r,k) \;\xrightarrow{\,d/dx\,}\; jG(j-1,r,k) + 2rG(j+1,r-1,k-1) + 2kG(j+1,r-1,k-2)\,. \tag{4.7}$$
In the present case, v^r corresponds to G(0, r, 0), while f_r corresponds
to 2G(1, r, −1). Now beginning with G(0, r, 0) and differentiating repeatedly,
(4.7) shows that with each derivative, the k component of
any resulting G terms remains zero or negative while the least r component
reduces by 1. By (4.6), at least the first ⌊r⌋ derivatives of v^r
are bounded as x → −1⁺. The case r = 1 shows that this bound is
best possible. Since f_r(x) = d/dx[v^{r+1}/(r + 1)], it follows that f_r has
⌊r⌋ derivatives as well.
In summary
Theorem 4.1.
Corollary 4.1. fr (x) and fE,r (x) are functions of bounded variation
on [−1, 1].
From the continuity of fE,r (x) and Corollary 4.1 it follows that
fE,r (x) may be expressed as a uniformly convergent Chebyshev series
on [−1, 1].
$$\begin{aligned}
\frac{d}{d\theta}\,G(\beta,j,k,l;\theta) ={}& 2\beta\, G(\beta-1,\, j+1,\, k+1,\, l-1;\theta)\\
&+ j\, G(\beta,\, j-1,\, k+1,\, l;\theta)\\
&- k\, G(\beta,\, j+1,\, k-1,\, l;\theta)\\
&+ 2l\, G(\beta-1,\, j+1,\, k+1,\, l-2;\theta)\,. \tag{4.9}
\end{aligned}$$
The next section shows how to use these lemmas to determine lower
bounds on the order to which G is continuously differentiable at −π/2.
4.3.3 Application to fr
We arrive finally at the analysis of fr .
Proof of Theorem 4.3: This follows from Theorems 4.2 and 4.1 upon
writing x = sin θ and
$$\frac{df_r}{d\theta} = \frac{df_r}{dx}\,\frac{dx}{d\theta}\,.$$
Theorem 4.5. f_r^{(n)}(π/2) = 0 for n a positive odd integer.
Proof of Theorem 4.5: Again identify f_r(θ)/√2 with G(r, 1, 0, −1; θ)
and consider the associated sequence of derivatives.
By (4.10), each term of every odd order derivative has deg(z) > 0. Thus
each term of every odd order derivative has a factor of cos θ which is
zero at θ = π/2.
Theorem 4.6. f_r^{(n)} ∈ L¹[−π/2, π/2] where n = ⌈2r⌉.
then convergence of ∫_{−π/2}^{π/2} |f_r^{(n)}| dθ is guaranteed. That is, if n is the
largest integer such that n < 2r + 1, which is just ⌈2r⌉, then f_r^{(n)} ∈
L¹[−π/2, π/2].
The results of Theorems 4.3, 4.4, 4.5 and 4.6 extend easily to the
even part of fr :
$$f_{E,r}(\theta) = \frac{f_r(\theta) + f_r(-\theta)}{2}\,.$$
Then
for coefficients
$$a_k(r) = \frac{2}{\pi}\int_{-\pi/2}^{\pi/2} f_{E,r}(\theta)\cos(2k\theta)\, d\theta\,.$$
and
$$1 = \frac{1}{2}\,a_0(r) + \sum_{k=1}^{\infty} a_k(r)\,, \tag{4.12}$$
useful facts for testing the accuracy of computed a_k(r)’s and examining
the error later.
Conclusions (ii) and (iii) of Corollary 4.2 allow the determination
of a growth bound on the a_k(r):
Theorem 4.7. As k → ∞, a_k(r) = O(k^{−⌈2r⌉}).
whence
$$a_k(r) = O\!\left(k^{-\lceil 2r\rceil}\right). \tag{4.14}$$
Chapter 5
Analytic Continuation of Main Formula
At the end of the derivation of the main formula (1.1) in Section 3.1, it
was noted that convergence of the series S_r(z) = Σ′_k a_k(r)H_k(z) could
be extended to the left of the line Re(z) = −1/2. In this chapter this
statement is made precise by proving in detail that the series converges
absolutely and uniformly on compact subsets of the half plane Re(z) >
−r, excluding the negative integers. It is then shown that the main
formula (1.1) is analytic, and thus defines Γ(z + 1), in this region.
Begin with some notation:
Definition 5.1. For r ≥ 0, define the open set
Proof of Theorem 5.1: First, observe that each term of the series is
analytic on Ωr . Now suppose K is a compact subset of Ωr , let D(0, R)
$$H_k(z) = \frac{\Gamma(z+1)\,\Gamma(z+1)}{\Gamma(z+1+k)\,\Gamma(z+1-k)} = h(z)\,\frac{\Gamma(1-z+k)}{\Gamma(1+z+k)\,(k-z)}\,,\ \text{say.}$$
Applying Stirling’s formula to the last factor,
$$\frac{\Gamma(1-z+k)}{\Gamma(1+z+k)\,(k-z)} = \frac{\sqrt{2\pi}\, e^{z-k}\,(k-z)^{k-z+1/2}\,(1+\delta_1(k))}{\sqrt{2\pi}\, e^{-z-k}\,(k+z)^{k+z+1/2}\,(1+\delta_2(k))}\cdot\frac{1}{k-z}$$
$$= e^{2z}\left(\frac{k-z}{k+z}\right)^{k}\left(\frac{1}{(k-z)(k+z)}\right)^{z}\left(\frac{k-z}{k+z}\right)^{1/2}\frac{1}{k-z}\cdot\frac{1+\delta_1(k)}{1+\delta_2(k)}$$
$$\le e^{2R}\left(\frac{k+R}{k-R}\right)^{k}\left(\frac{k+R}{k-R}\right)^{1/2}\frac{1+\delta_1(k)}{1+\delta_2(k)}\cdot\frac{1}{(k-R)^{2\sigma+1}} \le e^{2R}\left(\frac{k+R}{k-R}\right)^{k}\left(\frac{k+R}{k-R}\right)^{1/2}\frac{1+\delta_1(k)}{1+\delta_2(k)}\cdot\frac{1}{(k-R)^{2(-r+\delta)+1}} = O\!\left(k^{2r-2\delta-1}\right)$$
on K.
Since |a_k(r)| = O(k^{−⌈2r⌉}) from Theorem 4.7, it then follows that
is analytic on Ωr .
Proof of Theorem 5.2: The factor √(2π) (z + r + 1/2)^{z+1/2} is analytic
since Re(z + r + 1/2) > 0 on Ω_r. The factor exp[−(z + r + 1/2)] is
entire. The series factor is analytic by Theorem 5.1.
Theorem 5.3.
$$\Gamma(z+1) = \sqrt{2\pi}\,(z+r+1/2)^{z+1/2}e^{-(z+r+1/2)}\sum_{k=0}^{\infty}{}'a_k(r)H_k(z)$$
on $\Omega_r$.
and that, except for the simple poles at z = −1, −2, … which are present on both sides of the equation, z = −r − 1/2 is the first singularity encountered as z moves into the left-hand plane. Lanczos' statement seems to be based on a principle similar to that for the power series of an analytic function, that the radius of convergence of the series extends to the first singularity. Closer in spirit is a result from the theory of Dirichlet series: suppose f(s) is an analytic function on the half plane Re(s) > c₁, and f(s) is expressible as a series $f(s) = \sum_{k=1}^{\infty} q_k k^{-s}$
Figure 5.1: Growth of $a_k(r)$, $k^{-(2r+1)}$, and $k^{-\lceil 2r\rceil}$ with r = 1.25
Chapter 6
Computing the Coefficients
Now denote the $2k$th Chebyshev polynomial by $T_{2k}(x) = \sum_{j=0}^{k} C_{2j,2k}\,x^{2j}$, and from (B.2) the coefficients in the Chebyshev series are defined as
$$c_k(r) = \frac{2}{\pi}\int_{-1}^{1} f_{E,r}(x)\,T_{2k}(x)\,\frac{dx}{\sqrt{1-x^2}}\,.$$
The Chebyshev series coefficients are then given by
$$c_k(r) = \frac{2}{\pi}\int_{-1}^{1} f_{E,r}(x)\,T_{2k}(x)\,\frac{dx}{\sqrt{1-x^2}}$$
$$= \frac{2}{\pi}\int_{-1}^{1} f_{E,r}(x)\left[\sum_{j=0}^{k} C_{2j,2k}\,x^{2j}\right]\frac{dx}{\sqrt{1-x^2}}$$
$$= \frac{2}{\pi}\int_{-1}^{1} f_{E,r}(x)\,(-1)^k\left[\sum_{j=0}^{k} C_{2j,2k}\,(1-x^2)^j\right]\frac{dx}{\sqrt{1-x^2}} \quad\text{using (B.3)}$$
$$= (-1)^k\,\frac{2}{\pi}\sum_{j=0}^{k} C_{2j,2k}\int_{-1}^{1}(1-x^2)^j\,f_{E,r}(x)\,\frac{dx}{\sqrt{1-x^2}}$$
$$= (-1)^k\,\frac{2}{\pi}\sum_{j=0}^{k} C_{2j,2k}\,F_r(j)\,. \tag{6.2}$$
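The coefficients $C_{2j,2k}$ used in (6.2) can be generated numerically from the standard three-term recurrence $T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)$. A minimal sketch (the function name is illustrative, not from the thesis):

```python
def chebyshev_coeffs(n):
    """Coefficients of T_n(x) as a list c with T_n(x) = sum_j c[j] * x**j,
    built from the recurrence T_{n+1} = 2*x*T_n - T_{n-1}."""
    t_prev, t_cur = [1], [0, 1]          # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        # multiply t_cur by 2x (shift and scale), then subtract t_prev
        t_next = [0] + [2 * c for c in t_cur]
        for j, c in enumerate(t_prev):
            t_next[j] -= c
        t_prev, t_cur = t_cur, t_next
    return t_cur

# T_4(x) = 8x^4 - 8x^2 + 1, so C_{0,4} = 1, C_{2,4} = -8, C_{4,4} = 8
print(chebyshev_coeffs(4))   # [1, 0, -8, 0, 8]
```

Since the recurrence works in exact integer arithmetic, the $C_{2j,2k}$ produced this way are free of rounding error; the precision issues discussed below enter only when they are combined with the floating point values $F_r(j)$.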
The corresponding $a_k(r)$ appearing in (1.1) are then $a_k(r) = (-1)^k c_k(r)$.
Although this method provides a convenient closed form for computing individual coefficients, it has the drawback of requiring generous floating point precision to handle the addition of large values of alternating sign which arise in intermediate calculations. This problem can be partially overcome with the observation that each summand of (6.2) contains a factor of $\exp(r)$, so this term can be factored out to reduce the size of intermediate calculations. Even so, the problem persists. For example, consider the calculation of $c_6(6)$ using (6.2). In Table 6.1 the individual terms are listed; note the order of the largest scaled summands. Yet the sum of the last column is only $1.711\times 10^{-10}$, which, when multiplied by $2e^6/\pi$, yields $c_6(6) \doteq 0.00000004396$.
$$a_0 = 2F_r(0)\,, \qquad a_1 = 4F_r(1) - a_0\,, \qquad a_2 = 16F_r(2) - 3a_0 - 4a_1\,,$$
$$H_k(z) = \frac{\Gamma(z+1)\Gamma(z+1)}{\Gamma(z-k+1)\Gamma(z+k+1)} = \frac{z(z-1)\cdots(z-k+1)}{(z+1)(z+2)\cdots(z+k)}\,. \tag{6.5}$$
$$H_k(N-1) = \frac{(N-k)(N+k)}{N^2}\,H_k(N)\,, \qquad 0\le k\le N-1\,, \tag{6.6}$$
and
$$H_k(N) = \frac{(2N)!}{(N-k)!\,(N+k)!}\,H_N(N)\,, \qquad 0\le k\le N\,. \tag{6.7}$$
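The rational functions $H_k$ and the recursions (6.6) and (6.7) are easy to check numerically. A short sketch, using the product form of (6.5):

```python
from math import factorial

def H(k, z):
    """H_k(z) = z(z-1)...(z-k+1) / ((z+1)(z+2)...(z+k)), with H_0 = 1."""
    num = den = 1.0
    for j in range(k):
        num *= z - j
        den *= z + j + 1
    return num / den

N, k = 5, 2
# check (6.6): H_k(N-1) = (N-k)(N+k)/N^2 * H_k(N)
print(abs(H(k, N - 1) - (N - k) * (N + k) / N**2 * H(k, N)) < 1e-12)   # True
# check (6.7): H_k(N) = (2N)!/((N-k)!(N+k)!) * H_N(N)
rhs = factorial(2 * N) / (factorial(N - k) * factorial(N + k)) * H(N, N)
print(abs(H(k, N) - rhs) < 1e-12)                                      # True
```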
Now
$$\Gamma(z+1) = \sqrt{2\pi}\,(z+r+1/2)^{z+1/2}e^{-(z+r+1/2)}\sum_{k=0}^{\infty}{}'a_k(r)H_k(z)\,, \tag{6.8}$$
and
$$z\Gamma(z) = \sqrt{2\pi}\,z\,(z+r-1/2)^{z-1/2}e^{-(z+r-1/2)}\sum_{k=0}^{\infty}{}'a_k(r)H_k(z-1)\,.$$
By the fundamental recursion, the right hand sides of these two equations are equal, from which it follows that
$$1 = \frac{e\,z\,(z+r-1/2)^{z-1/2}\sum{}'a_k(r)H_k(z-1)}{(z+r+1/2)^{z+1/2}\sum{}'a_k(r)H_k(z)}\,. \tag{6.9}$$
For $a_0(r)$, set z = 0 in equation (6.8) to get $a_0(r) = \sqrt{2e/[\pi(r+1/2)]}\;e^{r}$.
For N ≥ 1, set z = N in (6.9). Both series then terminate, one at the
N − 1 term while the other at the Nth term. Isolating $a_N(r)$ and simplifying using (6.6) and (6.7) then yields
$$a_N(r) = (2N)!\sum_{k=0}^{N-1}{}'\left[e\left(\frac{N+r-1/2}{N+r+1/2}\right)^{N-1/2}\frac{(N-k)(N+k)}{N(N+r+1/2)} - 1\right]\frac{a_k(r)}{(N-k)!\,(N+k)!}\,.$$
As with the Horner type of method, this method is not very efficient for computing a single $a_n(r)$ value, but is more practical due to recycling of previous computations when a list of coefficients is required.
For the discrete analogue, fix an integer N ≥ 2 and consider the values $x_j = j/N$, j = 1, …, N in [0, 1]. Then for 1 ≤ n, m ≤ N,
$$\frac{1}{N}\sum_{j=1}^{N} e^{2\pi i n x_j}\,e^{-2\pi i m x_j} = \begin{cases}1 & \text{if } n = m,\\ 0 & \text{else}.\end{cases} \tag{6.11}$$
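The discrete orthogonality relation (6.11) can be verified directly; a small sketch:

```python
import cmath

def discrete_inner(n, m, N):
    """(1/N) * sum_j exp(2*pi*i*n*x_j) * exp(-2*pi*i*m*x_j), x_j = j/N."""
    return sum(cmath.exp(2j * cmath.pi * n * j / N) *
               cmath.exp(-2j * cmath.pi * m * j / N)
               for j in range(1, N + 1)) / N

print(abs(discrete_inner(3, 3, 8) - 1) < 1e-12)   # True (n == m)
print(abs(discrete_inner(3, 5, 8)) < 1e-12)       # True (n != m)
```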
For this reason, the smoother the function f being approximated, the
more rapidly its Fourier coefficients bj decrease to zero, and hence the
better the one term approximation b̄j ≈ bj .
$$= \frac{2}{N}\sum_{k=1}^{N} f\!\left(\cos\frac{\pi(k-1/2)}{N}\right)\cos\frac{n\pi(k-1/2)}{N}\,.$$
This calculation can be economized since $f_{E,r}(x_k)$ and $T_{2n}(x_k)$ are even with respect to $x_k$; hence, taking N even,
$$\bar a_n = (-1)^n\,\frac{4}{N}\sum_{k=1}^{N/2} f_{E,r}\!\left(\cos\frac{\pi(k-1/2)}{N}\right)\cos\frac{2n\pi(k-1/2)}{N}\,.$$
Furthermore, the N values $f_{E,r}(x_1), \ldots, f_{E,r}(x_N)$ need only be computed once since they do not depend on n, the index of the coefficient being computed.
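The N-point sum above is the usual discrete cosine evaluation of Chebyshev coefficients at the nodes $\cos(\pi(k-1/2)/N)$, and it is exact for low-degree Chebyshev polynomials, which gives a simple sanity check. A sketch (function names are illustrative):

```python
from math import cos, pi

def cheb_coeff(f, n, N):
    """Chebyshev coefficient of f via the N-point sum
    (2/N) * sum_k f(cos(pi*(k-1/2)/N)) * cos(n*pi*(k-1/2)/N)."""
    return (2.0 / N) * sum(
        f(cos(pi * (k - 0.5) / N)) * cos(n * pi * (k - 0.5) / N)
        for k in range(1, N + 1))

def T3(x):
    return 4 * x**3 - 3 * x      # T_3(x)

print(abs(cheb_coeff(T3, 3, 8) - 1) < 1e-12)   # True: picks out T_3 exactly
print(abs(cheb_coeff(T3, 2, 8)) < 1e-12)       # True: orthogonal to T_2
```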
Selecting N = 20 was sufficient to reproduce $a_0, \ldots, a_5$ to six decimals as given in Lanczos' original paper for r = 1, 1.5, 2, 3. In fact, N = 20 is really only needed for r = 1, as the error $|c_n(r) - \bar c_n|$ appears
This makes for increased efficiency of the formula when multiple evaluations are required once a final set of $a_k(r)$ coefficients has been computed. For this purpose, it is worthwhile deriving the general formula for the decomposition.
Recall the infinite series portion of the main formula is
$$S_r(z) = \frac{1}{2}a_0(r) + a_1(r)\frac{z}{z+1} + a_2(r)\frac{z(z-1)}{(z+1)(z+2)} + \cdots = \sum_{k=0}^{\infty}{}'a_k(r)H_k(z)\,.$$
$$S_{r,N}(z) = b_0(r) + \sum_{k=1}^{N}\frac{b_k(r)}{z+k}\,.$$
Observe that for k ≥ 1, $b_k(r)$ is the residue of $S_{r,N}(z)$ at z = −k, while, upon taking z → ∞,
$$b_0(r) = \frac{a_0(r)}{2} + \sum_{k=1}^{N} a_k(r)\,.$$
Now
$$\operatorname{Res}\left[H_k(z)\right]_{z=-j} = \begin{cases}(-1)^{k-j+1}\,\dfrac{(k+j-1)!}{(k-j)!\,[(j-1)!]^2} & \text{if } 1\le j\le k,\\[4pt] 0 & \text{else},\end{cases}$$
so that
$$H_k(z) = 1 + \sum_{j=1}^{k}(-1)^{k-j+1}\,\frac{(k+j-1)!}{(k-j)!\,[(j-1)!]^2}\,\frac{1}{z+j}\,.$$
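The residue formula for $H_k$ is easy to confirm numerically by comparing the product form with the partial-fraction form; a minimal sketch:

```python
from math import factorial

def H(k, z):
    """H_k(z) via the product z(z-1)...(z-k+1) / ((z+1)...(z+k))."""
    num = den = 1.0
    for j in range(k):
        num *= z - j
        den *= z + j + 1
    return num / den

def H_partial_fractions(k, z):
    """H_k(z) = 1 + sum_j Res[H_k]_{z=-j} / (z + j)."""
    total = 1.0
    for j in range(1, k + 1):
        res = (-1) ** (k - j + 1) * factorial(k + j - 1) / (
            factorial(k - j) * factorial(j - 1) ** 2)
        total += res / (z + j)
    return total

# e.g. H_2(z) = 1 + 2/(z+1) - 6/(z+2); at z = 2 both forms give 1/6
print(abs(H(2, 2.0) - H_partial_fractions(2, 2.0)) < 1e-12)   # True
```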
The only floating point calculations are in the term $(\sqrt{2}\,e^{r+1/2}/\pi)F_r$, so that if multiple sets of coefficients are required for different values of r, only this term need be recomputed.
Still further efficiency can be realized by observing that the leading $e^{r+1/2}$ term of the b(r) coefficients cancels once inserted into (6.13). Letting u(z) denote the vector of rational functions $1, (z+1)^{-1}, \ldots, (z+k)^{-1}$, equation (6.13) becomes simply
$$\Gamma(z+1) \approx 2\sqrt{\frac{e}{\pi}}\left(\frac{z+r+1/2}{e}\right)^{z+1/2}u^{T}(z)\,B\,C\,F_r\,. \tag{6.14}$$
Chapter 7
Lanczos’ Limit Formula
The final result of Lanczos’ paper [14] is the beautiful limit formula,
stated here as
$$= 2\left(\frac{z+r+1/2}{e}\right)^{z+1/2}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\,\left(\frac{v}{e}\right)^{r}\frac{\sin\theta}{\log v}\,d\theta\,.$$
A rescaling of r to er produces the $r^z$ term of equation (7.1) and eliminates the exponential term in the limit, thus
$$\Gamma(z+1/2) = 2r^{z}\left(\frac{z+er+1/2}{er}\right)^{z+1/2}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\;r^{1/2}\left(\frac{v}{e}\right)^{er}\frac{\sin\theta}{\log v}\,d\theta$$
$$= 2r^{z}\left(\frac{z+er+1/2}{er}\right)^{z+1/2}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\;r^{1/2}g_r(\theta)\,d\theta\,,\ \text{say,} \tag{7.2}$$
where
$$g_r(\theta) = \left(\frac{v}{e}\right)^{er}\frac{\sin\theta}{\log v}\,.$$
Now $g_r(\theta)$ converges point-wise to zero rapidly on $[-\pi/2, \pi/2)$ with increasing r, but equals one at $\theta = \pi/2$. As a result, as r increases, $(r/\pi)^{1/2}g_r(\theta)$ takes on the appearance of the Gaussian distribution $(r/\pi)^{1/2}\exp[-r(\theta-\pi/2)^2]$. Plots of $\exp[-r(\theta-\pi/2)^2] - g_r(\theta)$ for increasing values of r show a convincing fit; see Figure 7.1.
Furthermore,
$$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\cos(2k\theta)\,r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta = (-1)^k e^{-k^2/r}\,,$$
so that if $g_r(\theta)$ is replaced by $\exp[-r(\theta-\pi/2)^2]$ and the bounds of integration $[-\pi/2, \pi/2]$ are extended to $(-\infty, \infty)$, the integral gives rise to the terms $e^{-k^2/r}$ upon writing $\cos^{2z}\theta$ as its Fourier series.
Figure 7.1: $\exp[-r(\theta-\pi/2)^2] - g_r(\theta)$ for r = 1, 6, 10
$$\approx 2r^{z}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta \approx r^{z}\int_{-\infty}^{\infty}\cos^{2z}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta$$
$$= r^{z}\int_{-\infty}^{\infty}\left(\frac{2}{\sqrt{\pi}}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'H_k(z)\cos(2k\theta)\right)r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta$$
$$= 2r^{z}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'(-1)^k e^{-k^2/r}H_k(z)\,,$$
Clearly $\delta_r \to 0$ as $r\to\infty$.
Proof of Lemma 7.1: On $(-\pi/2, \pi/2]$, $h(\theta) = f_{er}(\theta)\,v^{-er}\,2^{-1/2}$, which is continuous by Theorem 4.3. At $\theta = -\pi/2$,
The next lemma shows that the integral in equation (7.2) is concentrated on the small interval $I_r$ containing $\pi/2$:
Then
$$\lim_{r\to\infty} r^{z}I_1(r,z) = 0\,.$$
$$\lim_{x\to 0}\frac{\sin^2 x}{x^2} = 1\,,$$
$$\left|r^{z}I_1(r,z)\right| \le C_1 r^{\sigma}\int_{J_r}\cos^{2\sigma}\theta\;r^{1/2}\left(\frac{v}{e}\right)^{er}d\theta \quad\text{for some } C_1 > 0 \text{ by Lemma 7.1}$$
$$\le C_1 r^{\sigma+1/2}\,|J_r|\left(\frac{v(\pi/2-\delta_r)}{e}\right)^{er} \quad\text{since } v \text{ is increasing on } J_r$$
$$= C_2 r^{\sigma+1/2}\,e^{-er\cos^2(\pi/2-\delta_r)/v} \quad\text{by the implicit definition of } v(\theta)$$
$$\le C_2 r^{\sigma+1/2}\,e^{-er\cos^2(\pi/2-\delta_r)/e}$$
$$= C_2 r^{\sigma+1/2}\,e^{-r\sin^2\delta_r}$$
$$\le C_2 r^{\sigma+1/2}\,e^{-r\delta_r^2/2} \quad\text{for } \delta_r \text{ sufficiently small, by (7.4)}$$
$$= C_2 r^{\sigma+1/2}\,e^{-(1/2)\sqrt{r/\log r}}$$
$$\longrightarrow 0 \quad\text{as } r\to\infty\,.$$
The next lemma justifies the replacement of gr (θ) by exp [−r(θ − π/2)2 ]
on Ir :
Then
$$\lim_{r\to\infty} r^{z}I_2(r,z) = 0\,.$$
$$\left|r^{z}I_2(r,z)\right| \le r^{\sigma}\int_{I_r}\cos^{2\sigma}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\left|\left(\frac{v}{e}\right)^{er}\frac{\sin\theta}{\log v}\,e^{r(\theta-\pi/2)^2} - 1\right|d\theta\,.$$
(i)
$$\left|\left(\frac{v}{e}\right)^{er}\frac{\sin\theta}{\log v}\,e^{r(\theta-\pi/2)^2} - 1\right|\chi_r \longrightarrow 0$$
uniformly, and
(ii) that
$$r^{\sigma}\int_{I_r}\cos^{2\sigma}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta$$
remains bounded.
$$e(\log v - 1) + (\theta-\pi/2)^2 = \left(\frac{1}{3}-\frac{1}{e}\right)(\theta-\pi/2)^4 + \frac{1}{90}\left(-4 - \frac{135}{e^2} + \frac{60}{e}\right)(\theta-\pi/2)^6 + O\!\left((\theta-\pi/2)^8\right).$$
Now the coefficient $(1/3 - 1/e) < 0$, so that for $|\theta-\pi/2|$ sufficiently small,
$$-(\theta-\pi/2)^4 \le e(\log v - 1) + (\theta-\pi/2)^2 \le 0\,,$$
so that, for r sufficiently large,
$$-r(\theta-\pi/2)^4 \le er(\log v - 1) + r(\theta-\pi/2)^2 \le 0\,,$$
and hence, since $(\theta-\pi/2)^4 \le \delta_r^4$ on $I_r$,
$$-\frac{r}{r\log r} \le \left[er(\log v - 1) + r(\theta-\pi/2)^2\right]\chi_r \le 0\,.$$
Also
$$\cos^{2\sigma}\theta \le (\theta-\pi/2)^{2\sigma}\,.$$
Thus
$$r^{\sigma}\int_{I_r}\cos^{2\sigma}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta \le r^{\sigma}\int_{I_r}(\theta-\pi/2)^{2\sigma}\,r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta \le r^{\sigma}\int_{\pi/2}^{\infty}(\theta-\pi/2)^{2\sigma}\,r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta\,.$$
$$= \frac{1}{2}\,r^{\sigma-1}\,r^{1/2}\int_{0}^{\infty}\left(\frac{u}{r}\right)^{\sigma-1/2}e^{-u}\,du = \frac{1}{2}\int_{0}^{\infty}u^{\sigma-1/2}e^{-u}\,du = \frac{1}{2}\,\Gamma(\sigma+1/2)\,.$$
That is,
$$r^{\sigma}\int_{I_r}\cos^{2\sigma}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta \le \frac{1}{2}\,\Gamma(\sigma+1/2)\,.$$
Then
$$\lim_{r\to\infty} r^{z}I_3(r,z) = 0\,.$$
$$\left|r^{z}I_3(r,z)\right| \le r^{\sigma}\int_{K_r} r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta = r^{\sigma}\int_{\sqrt{r}\,\delta_r}^{\infty}e^{-u^2}\,du\,,$$
this last line a result of the change of variables $u = \sqrt{r}\,(\theta-\pi/2)$. This can be expressed in terms of the error function
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^2}\,dt \approx 1 - \frac{e^{-x^2}}{\sqrt{\pi}\,x} \quad\text{as } x\to\infty\,.$$
$$r^{\sigma}\int_{\sqrt{r}\,\delta_r}^{\infty}e^{-u^2}\,du \approx r^{\sigma}\,\frac{e^{-(\sqrt{r}\,\delta_r)^2}}{2\sqrt{r}\,\delta_r} = \frac{1}{2}\,r^{\sigma-1/4}(\log r)^{1/4}\,e^{-(r/\log r)^{1/2}} \longrightarrow 0 \quad\text{as } r\to\infty\,,$$
$$= (-1)^k e^{-k^2/r}\,\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}r^{1/2}e^{-r\left(\theta-\frac{2ik+\pi r}{2r}\right)^2}\,d\theta \quad\text{upon completing the square}$$
$$= (-1)^k e^{-k^2/r}\,\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}r^{1/2}e^{-r\theta^2}\,d\theta \quad\text{with a simple contour shift}$$
$$= (-1)^k e^{-k^2/r}\,.$$
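The Gaussian integral identity used in Lemma 7.5 can be confirmed by direct quadrature; a sketch, truncating the integral to a window where the Gaussian is negligible outside:

```python
from math import cos, exp, sqrt, pi

def gaussian_cosine(k, r, steps=200000, width=12.0):
    """Midpoint-rule approximation of
    (1/sqrt(pi)) * int cos(2k*theta) * sqrt(r) * exp(-r*(theta-pi/2)**2) dtheta."""
    a = pi / 2 - width / sqrt(r)
    b = pi / 2 + width / sqrt(r)
    h = (b - a) / steps
    s = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        s += cos(2 * k * t) * exp(-r * (t - pi / 2) ** 2)
    return sqrt(r) * h * s / sqrt(pi)

k, r = 1, 2.0
print(abs(gaussian_cosine(k, r) - (-1) ** k * exp(-k * k / r)) < 1e-6)  # True
```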
$$= 2r^{z}\left(\frac{z+er+1/2}{er}\right)^{z+1/2}\times\left[I_1(r,z) + I_2(r,z) - 2I_3(r,z) + \frac{1}{2}\int_{-\infty}^{\infty}\cos^{2z}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta\right]\,.$$
Now let $r\to\infty$ and apply Lemmas 7.2, 7.3 and 7.4 to get
$$\Gamma(z+1/2) = 2\lim_{r\to\infty} r^{z}\,\frac{1}{2}\int_{-\infty}^{\infty}\cos^{2z}\theta\;r^{1/2}e^{-r(\theta-\pi/2)^2}\,d\theta\,.$$
Finally, replace $\cos^{2z}\theta$ with its Fourier series (see Section 3.3)
$$\cos^{2z}\theta = \frac{2}{\sqrt{\pi}}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'H_k(z)\cos(2k\theta)$$
and integrate term by term using Lemma 7.5 to get
$$\Gamma(z+1/2) = 2\lim_{r\to\infty} r^{z}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'(-1)^k e^{-k^2/r}H_k(z)\,.$$
Canceling the $\Gamma(z+1/2)$ terms and moving $\Gamma(z+1)$ to the left hand side completes the proof.
$$= 2r^{z}\left(\frac{z+er+1/2}{er}\right)^{z+1/2}\sum_{k=0}^{\infty}{}'\sqrt{\frac{\pi r}{2}}\,\frac{a_k(er)}{e^{er}}\,H_k(z)$$
$$\approx 2r^{z}\sum_{k=0}^{\infty}{}'(-1)^k e^{-k^2/r}H_k(z)\,.$$
$$\sqrt{\frac{\pi r}{2}}\,\frac{a_k(er)}{e^{er}} \sim (-1)^k e^{-k^2/r}\,,$$
or, by rescaling r,
$$Q(k,r) = (-1)^k\,e^{r-ek^2/r}\,\sqrt{\frac{2e}{\pi r}}\,(a_k(r))^{-1} \sim 1\,.$$
Plots of $\log Q(k,r)$ for r = 10, 20 and 50 support this conjecture; refer to Figures 7.2, 7.3 and 7.4, taking note of the different vertical scales.
Figure 7.2: $\log Q(k, 10)$, 0 ≤ k ≤ 24
Figure 7.3: $\log Q(k, 20)$, 0 ≤ k ≤ 24
Figure 7.4: $\log Q(k, 50)$, 0 ≤ k ≤ 24
Chapter 8
Implementation & Error Discussion
$$\epsilon_\rho(z) = \frac{\Gamma(1+z) - G(1+z)}{\Gamma(1+z)} = \frac{\epsilon_\alpha(z)}{\Gamma(1+z)}\,. \tag{8.1}$$
From the definition of $\epsilon_\rho(z)$, and assuming $|\epsilon_\rho(z)| < 1$, $\Gamma(1+z)$ may be expressed
$$\Gamma(1+z) = \frac{G(1+z)}{1-\epsilon_\rho(z)}\,. \tag{8.2}$$
where $S_{r,n}(z)$ is the series (1.2) truncated after a finite number of terms, and he calls $\epsilon_{r,n}(z)$ the relative error. It turns out that $S_{r,n}(z) \approx 1$ in the right half plane, so that Lanczos' notion is very close to (8.4). The relationship between $\epsilon_{r,n}(z)$ and (8.1) is
$$\epsilon_\rho = \frac{\epsilon_{r,n}(z)}{\Gamma(z+1)\,(z+r+1/2)^{-(z+1/2)}\,e^{z+r+1/2}\,(2\pi)^{-1/2}}\,.$$
Spouge makes this same observation in [27] and shows that for Re(z) ≥ 0, the denominator is bounded below by
$$\left|\Gamma(z+1)\,(z+r+1/2)^{-(z+1/2)}\,e^{z+r+1/2}\,(2\pi)^{-1/2}\right| \ge \left(\frac{e}{\pi}\right)^{1/2},$$
$$H_0(z) = 1\,,$$
and for k ≥ 1,
$$H_k(z) = \frac{\Gamma(z+1)\Gamma(z+1)}{\Gamma(z-k+1)\Gamma(z+k+1)} = \frac{z(z-1)\cdots(z-k+1)}{(z+1)(z+2)\cdots(z+k)}$$
2. Write (1.1) as
$$\Gamma(z+1) = \sqrt{2\pi}\,(z+r+1/2)^{z+1/2}e^{-(z+r+1/2)}\left[S_{r,n}(z) + \epsilon_{r,n}(z)\right]$$
where
$$S_{r,n}(z) = \sum_{k=0}^{n}{}'a_k(r)H_k(z)$$
and
$$\epsilon_{r,n}(z) = \sum_{k=n+1}^{\infty}a_k(r)H_k(z)\,.$$
3. Let
4. Let
$$\eta_{r,n}(\theta) = f_{E,r}(\theta) - \sum_{k=0}^{n}{}'a_k(r)\cos(2k\theta)\,,$$
(ii) B > C. For this second case, for any compact K ⊂ Ω, denote by $M_K$ the maximum of |f| on K, and consider the sequence of closed right-half semi-disks $\{\Omega_j\}_{j=1}^{\infty}$ centered at the origin, each $\Omega_j$ having radius j and diameter on the line Re(z) = σ. By the maximum modulus principle,
$$M_{\Omega_j} = M_{\partial\Omega_j}\,,$$
$$\partial\Omega_j = I_j \cup A_j\,,$$
$$\lim_{j\to\infty} M_{\partial\Omega_j} = B\,,$$
so that
$$\lim_{j\to\infty}\max\{M_{I_j}, M_{A_j}\} = B\,,$$
but
$$\lim_{j\to\infty} M_{A_j} = C < B\,.$$
Thus
$$\lim_{j\to\infty} M_{I_j} = B$$
and Lanczos' comments relate the error in truncating the series in (1.1) after the nth term, $\epsilon_{r,n}(z)$, to the error $\eta_{r,n}(\theta)$ in the (n+1)-term Fourier series approximation of $f_{E,r}(\theta)$. Based apparently on this observation, he gives uniform error bounds for Re(z) ≥ 0 as listed in Table 8.1.
The steps connecting Lanczos' observation about $\eta_{r,n}(\theta)$ to the uniform error bounds in the table elude me, and it is unclear how he makes the leap from one to the other. One of his error bounds does appear incorrect, although this may simply be a matter of round-off. For r = 4 and n = 4, he gives $|\epsilon_{4,4}(z)| < 5\times 10^{-8}$. It is a simple matter to compute directly
$$\lim_{t\to\infty}|\epsilon_{4,4}(it)| = \left|1 - \frac{a_0(4)}{2} - \sum_{k=1}^{4}a_k(4)\right| \doteq 5.3\times 10^{-8}\,.$$
n   r     θ_{r,n}
1   1     −π/2
1   1.5   0
2   2     −π/2
3   2     ≈ −0.72
3   3     −π/2
4   4     0
6   5     −π/2

n   r     |ǫ^∞_{r,n}|
1   1     8.0 × 10^{−4}
1   1.5   2.2 × 10^{−4}
2   2     5.0 × 10^{−5}
3   2     9.1 × 10^{−7}
3   3     1.1 × 10^{−6}
4   4     5.3 × 10^{−8}
6   5     1.9 × 10^{−10}
$$= \frac{1}{\sqrt{\pi}}\,\frac{\Gamma(z+1)}{\Gamma(z+1/2)}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\,\eta_{r,n}(\theta)\,d\theta$$
$$|\epsilon_{r,n}(z)| \le \frac{1}{\sqrt{\pi}}\left|\frac{\Gamma(z+1)}{\Gamma(z+1/2)}\right|\int_{-\pi/2}^{\pi/2}\cos^{2\sigma}\theta\,|\eta_{r,n}(-\pi/2)|\,d\theta$$
case (i): $M_\Omega = 0$.
This case is impossible, for otherwise the result would be a closed form expression
$$\Gamma(z+1) = \sqrt{2\pi}\,(z+r_*+1/2)^{z+1/2}e^{-(z+r_*+1/2)}\,S_{r_*,n}(z)$$
which could be continued analytically to the slit plane $\mathbb{C}\setminus(-\infty, -r_*-1/2)$, but which would remain bounded near −(n+1), −(n+2), …, contrary to the unboundedness of Γ at the negative integers.
$$H_k(z(t)) = \prod_{j=0}^{k-1}\frac{it/(1-t) - j}{it/(1-t) + j + 1} = \prod_{j=0}^{k-1}\frac{t(i+j) - j}{t(i-j-1) + j + 1}\,.$$
The $H_k(z(t))$ expressions in this last form are numerically stable and readily computable for 0 ≤ t < 1. The upshot of this observation is that
under the assumption that the first few terms of the series $\epsilon_{r,n}(z)$ contain the bulk of the error, which turned out to be the case in practice, the error can be easily estimated as
$$\epsilon_{r_*,n}(z(t)) \approx \sum_{k=n+1}^{M}a_k(r_*)H_k(z(t))$$
for M not too large. The problem of bounding $|\epsilon_{r_*,n}(z)|$ in the right half plane reduces to that of estimating the maximum of $\sum_{k=n+1}^{M}a_k(r_*)H_k(z(t))$ for t ∈ [0, 1).
With this strategy, the existence of zeros of $\epsilon^{\infty}_{r,n}$ is no longer an issue; for any r ≥ 0, simply examine the finite series approximation of $|\epsilon_{r,n}(z(t))|$, 0 ≤ t < 1. It turned out in practice, however, that for each n examined, setting r to be the largest zero of $\epsilon^{\infty}_{r,n}$ produced the smallest uniform error bound for $|\epsilon_{r,n}(z(t))|$.
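The stabilized product form of $H_k(z(t))$ quoted above is straightforward to implement and can be checked against the direct rational form; a minimal sketch:

```python
def H_stable(k, t):
    """H_k(z(t)) for z = i*t/(1-t), via the product
    prod_j (t*(i+j) - j) / (t*(i-j-1) + j + 1), stable for 0 <= t < 1."""
    val = complex(1, 0)
    for j in range(k):
        val *= (t * (1j + j) - j) / (t * (1j - j - 1) + j + 1)
    return val

# compare with the direct form H_1(z) = z/(z+1) at t = 0.5, i.e. z = i
t = 0.5
z = 1j * t / (1 - t)
print(abs(H_stable(1, t) - z / (z + 1)) < 1e-12)   # True
```

The point of the rewritten factors is that no quantity like $t/(1-t)$ is ever formed, so the evaluation stays well-behaved as $t \to 1$ (i.e. as $|z| \to \infty$ along the imaginary axis).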
8.5.2 Zeros of $\epsilon^{\infty}_{r,n}$
The question remains: for n fixed, does $\epsilon^{\infty}_{r,n}$ have zeros as a function of r? The answer was found to be yes in all cases examined. Refer to Figure 8.1 for plots of $[\epsilon^{\infty}_{r,n}]^{1/5}$, n = 0, 1, 2, 3, showing the location of zeros for the first few cases.² The plots of Figure 8.1 motivated an extensive numerical investigation resulting in empirical data on the number, the smallest and the largest zeros of $\epsilon^{\infty}_{r,n}$, n = 0, …, 60. The results of this investigation are listed in Tables C.1 and C.2 of Appendix C. From these tables, $\epsilon^{\infty}_{r,n}$ appears to have about 2n zeros, and the largest zero appears to be of size about n. Beyond these superficial observations, there is no further obvious trend in the data.
Figure 8.1: $[\epsilon^{\infty}_{r,n}]^{1/5}$, n = 0, 1, 2, 3, −1/2 < r < 6
That is,
$$\Gamma(z+1) = \sqrt{2\pi}\,(z+r+1/2)^{z+1/2}e^{-(z+r+1/2)}\left[\sum_{k=0}^{6}{}'a_k(r)H_k(z) + \epsilon_{r,6}(z)\right]\,.$$
Table 8.4 summarizes the results. Notice that the largest zero, $r \doteq 6.779506$, yields the least maximum error of $2.72\times 10^{-12}$, nearly 1/100 of Lanczos' estimate.
j     r_j          t/(1−t)     M_{r_j,6}
0 −0.117620 0.565599 4.71 × 10−4
1 0.684391 1.525502 2.75 × 10−6
2 1.450013 2.388058 8.88 × 10−8
3 2.182290 3.172714 6.78 × 10−9
4 2.883225 3.887447 9.30 × 10−10
5 3.553321 4.538907 1.99 × 10−10
6 4.191832 5.133674 6.07 × 10−11
7 4.796781 5.678871 2.49 × 10−11
8 5.364813 6.183352 1.30 × 10−11
9 5.891184 6.660957 8.02 × 10−12
10 6.372580 7.137483 5.29 × 10−12
11 6.779506 7.883760 2.72 × 10−12
Plots of $\log \epsilon_{r_j,6}(it/(1-t))$ for j = 0, 1, …, 11 are shown in Figure 8.2. Observe that the maximum of the curve associated with the largest zero $r_{11}$ gives the smallest maximum error of approximately $\exp(-26.6)$.
To get an idea of how the maximum error along the imaginary axis varies with r, refer to Figure 8.3, in which $\log|\epsilon_{r,4}(it/(1-t))|$ is plotted for 0 < t < 1, 0 < r < 5. The ends of the deep troughs near t = 1 correspond to the zeros of $\epsilon^{\infty}_{r,4}$.³ The projection of the troughs onto the t–r plane are not lines parallel to the t-axis, but rather slowly varying curves. Bearing in mind that the vertical scale on the plot is logarithmic, the effect of the r parameter on the error is dramatic.
³For graphing purposes the t range was truncated slightly before t = 1, and so the vertical asymptotes associated with the ends of the troughs are not present in the plot.
Figure 8.2: $\log \epsilon_{r_j,6}(it/(1-t))$, 0 < t < 1, j = 0, …, 11
Figure 8.3: $\log|\epsilon_{r,4}(it/(1-t))|$, 0 < t < 1, 0 < r < 5
Note that we use r to denote what Lanczos called γ to avoid confusion with Euler's constant. It turns out that for n = 1, r = 1.5 is coincidentally very close to 1.489193⋯, the largest zero of $\epsilon^{\infty}_{r,1}$.
k dk
0 +2.48574089138753565546 × 10−5
1 +1.05142378581721974210 × 100
2 −3.45687097222016235469 × 100
3 +4.51227709466894823700 × 100
4 −2.98285225323576655721 × 100
5 +1.05639711577126713077 × 100
6 −1.95428773191645869583 × 10−1
7 +1.70970543404441224307 × 10−2
8 −5.71926117404305781283 × 10−4
9 +4.63399473359905636708 × 10−6
10 −2.71994908488607703910 × 10−9
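The $d_k$ above can be used directly in the partial-fraction form of the approximation, $\Gamma(z+1) \approx 2\sqrt{e/\pi}\,((z+r+1/2)/e)^{z+1/2}\,(d_0 + \sum_{k=1}^{10} d_k/(z+k))$, in the style of (6.14). A sketch in Python, assuming — since the table itself does not state it — that these constants pair with $r \approx 10.900511$, the value usually circulated with them:

```python
from math import e, pi, sqrt

# r = 10.900511 is an assumption here: it is the parameter commonly
# associated with these d_k, not stated in the table itself.
R = 10.900511
D = [2.48574089138753565546e-5, 1.05142378581721974210,
     -3.45687097222016235469, 4.51227709466894823700,
     -2.98285225323576655721, 1.05639711577126713077,
     -1.95428773191645869583e-1, 1.70970543404441224307e-2,
     -5.71926117404305781283e-4, 4.63399473359905636708e-6,
     -2.71994908488607703910e-9]

def gamma_plus_one(z):
    """Gamma(z+1) via 2*sqrt(e/pi) * ((z+r+1/2)/e)**(z+1/2)
    * (d_0 + sum_k d_k/(z+k))."""
    s = D[0] + sum(D[k] / (z + k) for k in range(1, 11))
    return 2 * sqrt(e / pi) * ((z + R + 0.5) / e) ** (z + 0.5) * s

print(abs(gamma_plus_one(0.0) - 1.0))    # essentially 0: Gamma(1) = 1
print(abs(gamma_plus_one(4.0) - 24.0))   # essentially 0: Gamma(5) = 24
```

In double precision the result is indistinguishable from the exact values at integer arguments, consistent with the very small uniform error bounds reported for this choice of n and r.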
It was observed early in the investigation that not only does $\epsilon^{\infty}_{r,n}$ have zeros as a function of r, but so too do the individual $a_k(r)$. This is not surprising, since
$$\epsilon^{\infty}_{r,n} = \sum_{k=n+1}^{\infty}a_k(r) \approx a_{n+1}(r)\,,$$
Figure 8.4: $[\epsilon^{\infty}_{r,6}]^{1/7}$, −0.5 < r < 5
Numerical checks show that the zeros of $\epsilon^{\infty}_{r,n}$ and $a_{n+1}(r)$ nearly coincide in the cases examined. Furthermore, with r(n) equal to the largest zero of $\epsilon^{\infty}_{r,n}$, $a_{n+1}(r(n))$ appears to be a very good estimate for the maximum of $|\epsilon_{r,n}(it/(1-t))|$ on [0, 1). This observation seems in keeping with the rule of thumb for asymptotic series with rapidly decreasing terms: the error is about the size of the first term omitted. Oddly, though, this is not the situation here: the second coefficient omitted is almost exactly the same size as the first, but of opposite sign. These observations are summarized in Table 8.6 where, for each n = 0, …, 12,
Figure 8.5: $[a_7(r)]^{1/7}$, −0.5 < r < 5
Figure 8.6: $[\epsilon^{\infty}_{r,6}]^{1/7}$ and $[a_7(r)]^{1/7}$, −0.5 < r < 5
listed are r(n), the estimated uniform error bound $M_{r(n),n,15}$, and the first two terms omitted from the series, $a_{n+1}(r(n))$ and $a_{n+2}(r(n))$.
which, if true, provides a much simpler method for bounding |ǫr(n),n (z)|
in the right-half plane.
Chapter 9
Comparison of the Methods
from which
$$\epsilon_\rho = 1 - e^{-E_{N,n}}\,.$$
We want $|\epsilon_\rho| < 10^{-32}$, which means $|1 - e^{-E_{N,n}}| < 10^{-32}$, which is guaranteed if
The pairs (N, n) which achieve this bound are listed in Table 9.1.
N    n    |U_{N,n}| <
18   10   3.5 × 10^{−33}
10   11   2.9 × 10^{−33}
4    12   3.3 × 10^{−33}
0    13   2.6 × 10^{−33}
from which
$$\Gamma(20 + 17i) \doteq -6.6530978807100357093202320786706\times 10^{13} + i\,1.3813486137818296429873066956513\times 10^{14}\,.$$
In this case, the Bernoulli numbers in the sum range in size from $B_2 \doteq 1.7\times 10^{-1}$ to $B_{34} \doteq 4.3\times 10^{11}$, while the corresponding terms of the sum with z = 0 range in size from $4.9\times 10^{-3}$ to $9.5\times 10^{-33}$.
$$d_k(a) = \frac{(-1)^{k-1}}{(k-1)!}\,(-k+a)^{k-1/2}\,e^{-k}\,. \tag{9.2}$$
Under this simplification, the relative error in Spouge's formula remains the same:
$$|\epsilon_S(a,z)| < \frac{\sqrt{a}}{(2\pi)^{a+1/2}}\,\frac{1}{\operatorname{Re}(z+a)}\,.$$
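For comparison purposes, Spouge's method is simple to implement. The sketch below uses the standard (unsimplified) form of Spouge's formula, with coefficients $c_k = \frac{(-1)^{k-1}}{(k-1)!}(a-k)^{k-1/2}e^{a-k}$ — note the extra factor of $e^{a}$ relative to the simplified $d_k(a)$ of (9.2):

```python
from math import exp, factorial, pi, sqrt, ceil

def spouge_gamma_plus_one(z, a=12.5):
    """Gamma(z+1) by Spouge's formula:
    (z+a)**(z+1/2) * exp(-(z+a)) * (sqrt(2*pi) + sum_k c_k/(z+k)),
    with c_k = ((-1)**(k-1)/(k-1)!) * (a-k)**(k-1/2) * exp(a-k)."""
    total = sqrt(2 * pi)
    for k in range(1, int(ceil(a))):
        c_k = ((-1) ** (k - 1) / factorial(k - 1)) * (a - k) ** (k - 0.5) * exp(a - k)
        total += c_k / (z + k)
    return (z + a) ** (z + 0.5) * exp(-(z + a)) * total

print(abs(spouge_gamma_plus_one(0.0) - 1.0) < 1e-8)        # True
print(abs(spouge_gamma_plus_one(4.0) - 24.0) / 24 < 1e-8)  # True
```

With a = 12.5 the error bound quoted above is of order $(2\pi)^{-13}$, so double-precision results at these arguments are accurate to well beyond the tolerances tested.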
k dk (39) k dk (39)
0 +2.8947105233858572256681259383826×10−17 20 −1.4612043596445044389340540663592×10−1
1 +2.2677611785616408448051623649413×100 21 +1.6856896402284750130501640213058×10−2
2 −3.0458858426261684740752182066857×101 22 −1.5553541637142526870805344933539×10−3
3 +1.9357212181425501030368331204744×102 23 +1.1302120506210549847428847358464×10−4
4 −7.7429949716371535150533416191734×102 24 −6.3472060001627477262571023780708×10−6
5 +2.1876179361012802304828398825556×103 25 +2.6919634338361067149994936318051×10−7
6 −4.6438537277548560417623360405775×103 26 −8.3801786194339100406682820681103×10−9
7 +7.6927388480783573540015702664876×103 27 +1.8481319798192137441154507903452×10−10
8 −1.0195917484892786070501293142239×104 28 −2.7610263895691596705265848094462×10−12
9 +1.0999167217153749462567889811423×104 29 +2.6382697777056113806964677703858×10−14
10 −9.7740178567227019446568421903035×103 30 −1.4954804884394724082929843529081×10−16
11 +7.2136820304845623606047714747868×103 31 +4.5441806633655985113201441255734×10−19
12 −4.4462490659018128337839056551451×103 32 −6.4289947451722952263534424159053×10−22
13 +2.2961572174783000519663098692718×103 33 +3.4516439287785093504202451110646×10−25
14 −9.9491746363493229520431441706026×102 34 −5.1380351805226988204329822384216×10−29
15 +3.6160749560761185688455782354331×102 35 +1.2606607461787976943234811185446×10−33
16 −1.1004495488313215123003718194264×102 36 −1.9452281496812994762288072286241×10−39
17 +2.7947852816195578994204731195154×101 37 +2.2292761113182594463394149297371×10−47
18 −5.8947917688963354031891858182660×100 38 −2.2807244297698558308437038776471×10−60
19 +1.0259323949497584239156793246602×100
9.4 Discussion
All three methods considered here have their benefits and shortfalls, and the question of which is best is not a clear-cut one. Several factors must be considered, in particular:
k dk
0 +2.0240434640140357514731512432760 × 10−10
1 +1.5333183020199267370932516012553 × 100
2 −1.1640274608858812982567477805332 × 101
3 +4.0053698000222503376927701573076 × 101
4 −8.2667863469173479039227422723581 × 101
5 +1.1414465885256804336106748692495 × 102
6 −1.1135645608449754488425056563075 × 102
7 +7.9037451549298877731413453151252 × 101
8 −4.1415428804507353801947558814560 × 101
9 +1.6094742170165161102085734210327 × 101
10 −4.6223809979028638614212851576524 × 100
11 +9.7030884294357827423006360746167 × 10−1
12 −1.4607332380456449418243363858893 × 10−1
13 +1.5330325530769204955496334450658 × 10−2
14 −1.0773862404547660506042948153734 × 10−3
15 +4.7911128916072940196391032755132 × 10−5
16 −1.2437781042887028450811158692678 × 10−6
17 +1.6751019107496606112103160490729 × 10−8
18 −9.7674656970897286097939311684868 × 10−11
19 +1.8326577220560509759575892664132 × 10−13
20 −6.4508377189118502115673823719605 × 10−17
21 +1.3382662604773700632782310392171 × 10−21
Chapter 10
Consequences of Lanczos' Paper
where $g(\theta) \in L^2[-\pi/2, \pi/2]$ is even. Note that g is very general, the only requirement being that it be square integrable with respect to Lebesgue measure. Then the Fourier coefficients
$$a_k = \frac{2}{\pi}\int_{-\pi/2}^{\pi/2}g(\theta)\cos(2k\theta)\,d\theta\,.$$
$$= \sqrt{\pi}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'a_k H_k(z)\,. \tag{10.1}$$
Each of the functions $H_k(z)$ is analytic for Re(z) > −1/4. Let K be a compact subset of this region. Taking r = 1/4 in the proof of Theorem 5.1 shows that $H_k(z) = O(k^{-1/2-\delta})$ on K for some δ > 0. By the Schwarz inequality,
$$\left(\sum_{k=N}^{\infty}|a_k|\,|H_k(z)|\right)^2 \le C\left(\sum_{k=N}^{\infty}|a_k|^2\right)\left(\sum_{k=N}^{\infty}\frac{1}{k^{1+2\delta}}\right)$$
on K for some constant C which does not depend on N. The right hand side of this inequality can be made as small as desired by choosing N sufficiently large. Thus the series in (10.1) converges absolutely and uniformly on K, and since K was arbitrary, (10.1) defines F(z) as an analytic function on Re(z) > −1/4.
If in addition $g^{(k)}(\theta)$ is continuous with $g^{(k)}(-\pi/2) = g^{(k)}(\pi/2)$, k = 0, …, m, m ≥ 1, then the Fourier coefficients $a_k = O(k^{-m})$. For
Since the $\Gamma(z+1/2)/\Gamma(z+1)$ factor of (10.1) cancels the poles at the negative integers but introduces simple poles at z = −1/2, −3/2, −5/2, …, the conclusion is that equation (10.1) defines F(z) as an analytic function on the half plane Re(z) > −m/2 except for z = −1/2, −3/2, −5/2, ….
Furthermore, if F(z) is approximated by truncating the series in (10.1) after the nth term, the resulting error $\epsilon_{r,n}(z) = \sum_{k=n+1}^{\infty}a_k H_k(z)$ has the property that in the half plane Ω = {Re(z) ≥ 0},
$$\lim_{\substack{|z|\to\infty\\ z\in\Omega}}\epsilon_{r,n}(z) = \sum_{k=n+1}^{\infty}a_k = g(0) - \sum_{k=0}^{n}{}'a_k\,,$$
$$= \frac{2}{\pi}\int_{-\pi/2}^{\pi/2}g(\theta)\sum_{j=0}^{k}C_{2j,2k}\cos^{2j}\theta\,d\theta = \frac{2}{\pi}\sum_{j=0}^{k}C_{2j,2k}\int_{-\pi/2}^{\pi/2}\cos^{2j}\theta\,g(\theta)\,d\theta = \frac{2}{\pi}\sum_{j=0}^{k}C_{2j,2k}\,F(j)\,.$$
An Example
The following example is found in [13, p. 45]. The Bessel function of the first kind of order z, not necessarily an integer, can be expressed
$$J_z(x) = \frac{1}{\sqrt{\pi}}\,\frac{(x/2)^z}{\Gamma(z+1/2)}\int_{-\pi/2}^{\pi/2}\cos^{2z}\theta\,\cos(x\sin\theta)\,d\theta\,.$$
Letting
$$F(z,x) = \sqrt{\pi}\,J_z(x)\,\Gamma(z+1/2)\,(x/2)^{-z}\,,$$
we see that F(z, x) is the Lanczos transform of $\cos(x\sin\theta)$. Hence
$$F(z,x) = \sqrt{\pi}\,\frac{\Gamma(z+1/2)}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'a_k(x)H_k(z)$$
so that
$$J_z(x) = \frac{(x/2)^z}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'a_k(x)H_k(z)\,.$$
The coefficients $a_k(x)$ are given by
$$a_k(x) = \frac{2}{\pi}\sum_{j=0}^{k}C_{2j,2k}\,F(j,x) = \frac{2}{\sqrt{\pi}}\sum_{j=0}^{k}C_{2j,2k}\,J_j(x)\,\Gamma(j+1/2)\,(x/2)^{-j}\,.$$
$$a_k(x) = \frac{2}{\pi}\int_{-\pi/2}^{\pi/2}\cos(x\sin\theta)\cos(2k\theta)\,d\theta = \frac{2}{\pi}\int_{0}^{\pi}\cos(x\sin(\phi+\pi/2))\cos(2k(\phi+\pi/2))\,d\phi$$
$$= (-1)^k\,\frac{2}{\pi}\int_{0}^{\pi}\cos(x\cos\phi)\cos(2k\phi)\,d\phi = 2J_{2k}(x)\,.$$
This shows how the Bessel function of even integer order 2k can be expressed in terms of Bessel functions of integer order less than or equal to k:
$$J_{2k}(x) = \frac{1}{\sqrt{\pi}}\sum_{j=0}^{k}C_{2j,2k}\,J_j(x)\,\Gamma(j+1/2)\,(x/2)^{-j}\,.$$
More importantly,
$$J_z(x) = \frac{2(x/2)^z}{\Gamma(z+1)}\sum_{k=0}^{\infty}{}'J_{2k}(x)H_k(z)\,.$$
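The identity $a_k(x) = 2J_{2k}(x)$ can be verified numerically by comparing the defining integral with the power series for the Bessel function. Because the integrand is smooth and π-periodic, the midpoint rule over one full period converges extremely fast. A sketch:

```python
from math import cos, sin, pi, factorial

def J(n, x, terms=30):
    """Bessel J_n(x) for integer n >= 0 by its power series."""
    return sum((-1) ** m * (x / 2) ** (2 * m + n) /
               (factorial(m) * factorial(m + n)) for m in range(terms))

def a_coeff(k, x, steps=2000):
    """(2/pi) * integral_{-pi/2}^{pi/2} cos(x sin t) cos(2kt) dt, midpoint rule."""
    h = pi / steps
    s = 0.0
    for i in range(steps):
        t = -pi / 2 + (i + 0.5) * h
        s += cos(x * sin(t)) * cos(2 * k * t)
    return (2 / pi) * h * s

x = 1.0
print(abs(a_coeff(2, x) - 2 * J(4, x)) < 1e-9)   # True
```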
(ii)
$$\sum_{k=1}^{n}(-1)^{n+k}\,\frac{k^{2n}}{(n-k)!\,(n+k)!} = \frac{1}{2}$$
which yields (ii) upon canceling the n! terms and moving the constant
2 to the left hand side.
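Identity (ii) is easy to check in exact rational arithmetic; a short sketch:

```python
from math import factorial
from fractions import Fraction

def identity_sum(n):
    """sum_{k=1}^{n} (-1)**(n+k) * k**(2n) / ((n-k)! (n+k)!), exactly."""
    return sum(Fraction((-1) ** (n + k) * k ** (2 * n),
                        factorial(n - k) * factorial(n + k))
               for k in range(1, n + 1))

print(all(identity_sum(n) == Fraction(1, 2) for n in range(1, 11)))  # True
```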
$$\Gamma(z+1) = (z+a)^{z+1/2}e^{-(z+a)}\sqrt{2\pi}\left[\frac{a_0(r(0))}{2} + \epsilon_{r(0),0}(z)\right] \approx (z+a)^{z+1/2}e^{-(z+a)}\sqrt{2\pi}\,\frac{a_0(r(0))}{2}\,, \tag{10.4}$$
where
$$\frac{a_0(r(0))}{2} = \frac{e^{a}}{\sqrt{2\pi a}}\,.$$
$$1 = \frac{e^{a}}{\sqrt{2\pi a}}\,, \qquad 2\pi a = e^{2a}\,, \qquad -2a\,e^{-2a} = -\frac{1}{\pi}\,.$$
$$a = -W_0(-1/\pi)/2 \doteq 0.276914$$
and
$$a = -W_{-1}(-1/\pi)/2 \doteq 0.819264\,.$$
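Both roots can be checked directly against the defining relation $e^{a} = \sqrt{2\pi a}$ without any Lambert W machinery; a minimal sketch:

```python
from math import exp, sqrt, pi

def resid(a):
    """Residual of the defining relation e**a = sqrt(2*pi*a)."""
    return exp(a) - sqrt(2 * pi * a)

for a in (0.276914, 0.819264):
    print(abs(resid(a)) < 1e-4)   # True for both roots
```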
Lemma 10.3. $F_0(z)$ strictly increases to 1 as $z\to\infty$, and $F_0(0) = \sqrt{e/\pi}$.
Proof of Lemma 10.4: First, note that $F_r(z)$ is continuous for $(r, z) \in (-1/2, \infty)\times[0, \infty)$. Secondly, observe that
An explicit formula for a(z) can be found with the help of the Lambert W function. Suppose that z is real and positive. Then
$$\Gamma(z+1) = (z+a(z))^{z+1/2}\,e^{-(z+a(z))}\,\sqrt{2\pi}\,,$$
so that
$$\frac{\Gamma(z+1)}{\sqrt{2\pi}} = (z+a(z))^{z+1/2}\,e^{-(z+a(z))}\,.$$
Raising both sides to the power 1/(z+1/2) and then multiplying through by −1/(z+1/2) gives
$$\frac{-1}{z+1/2}\left(\frac{\Gamma(z+1)}{\sqrt{2\pi}}\right)^{1/(z+1/2)} = -\frac{z+a(z)}{z+1/2}\,e^{-\frac{z+a(z)}{z+1/2}}$$
whence
$$-\frac{z+a(z)}{z+1/2} = W_k\!\left(\frac{-1}{z+1/2}\left(\frac{\Gamma(z+1)}{\sqrt{2\pi}}\right)^{1/(z+1/2)}\right)$$
and
$$a(z) = -z - (z+1/2)\,W_k\!\left(\frac{-1}{z+1/2}\left(\frac{\Gamma(z+1)}{\sqrt{2\pi}}\right)^{1/(z+1/2)}\right)\,. \tag{10.5}$$
For real z the branches $W_{-1}$ and $W_0$ are real, yielding two solutions which are here denoted a(−1, z) and a(0, z). Unfortunately equation (10.5) is not of much practical use computationally without some additional approximation of Γ itself, and the question of whether these equations give rise to more efficient Γ approximation schemes was not studied further.
Equation (10.5) does, however, give a picture of what a(−1, z) and a(0, z) look like, thanks to the Lambert W evaluation routines of Maple 8. See Figure 10.1 for plots of these functions over the scaled real line z/(1−z), 0 ≤ z < 1. It is interesting to observe how little these functions appear to vary over the entire real line, in particular a(−1, z), which is approximately 0.82 at z = 0 and decreases to about 0.79 as z → ∞.
Figure 10.1: Path of $a(0, \frac{z}{1-z})$ and $a(-1, \frac{z}{1-z})$, 0 ≤ z < 1
Chapter 11
Conclusion and Future Work
From the work done here it is clear that Lanczos’ formula merits se-
rious consideration as a viable alternative to Stirling’s series for the
efficient computation of the classical gamma function. Furthermore,
the generalized idea of the Lanczos transform described in Chapter 10
holds some promise as an approximation method for a larger class of
functions. There are, however, several unresolved issues and areas in
need of further investigation. This concluding chapter comments on
a number of unsettled questions, and conjecture answers to some of
these. Specifically:
With respect to Lanczos’ paper itself:
3. How many zeros does $\epsilon^{\infty}_{r,n}$ have? How are they distributed? What is the largest one? The same questions apply to the individual coefficients $a_k(r)$.
here. In the end, the asymptotic behaviour of the $a_k(r)$ and consequent region of convergence of $S_r(z)$ is a minor problem since, for computational purposes, only convergence on Re(z) ≥ 0 is required, and this is guaranteed provided r ≥ 0.
Also of more theoretical interest than practical is Lanczos' statement of the limit formula of Chapter 7. The proof there established the validity of the formula in the right-half plane Re(z) ≥ 0, while according to Lanczos the formula is valid for all z ∈ ℂ away from the negative integers. His claim appears to be based on the asymptotic behaviour of the coefficients $a_k(r)$ as r → ∞. Indeed, convergence of the series $S_r(z)$ extends further and further left in the complex plane as r increases, but just how
$$\lim_{r\to\infty}\sqrt{2\pi}\,(z+r+1/2)^{z+1/2}e^{-(z+r+1/2)}\sum_{k=0}^{\infty}{}'a_k(r)H_k(z) = 2\lim_{r\to\infty}r^{z}\sum_{k=0}^{\infty}{}'(-1)^k e^{-k^2/r}H_k(z)$$
Figure 11.1: $\epsilon_{r,n}(z)$, −0.5 < r < 3, −0.5 < z < 3
is defined to be a zero of $\epsilon_{r,n}(z)$, equation (10.5) and Figure 11.1 lead one to conjecture that r(z) is a multivalued complex analytic function with number of branches equal to the number of zeros of $\epsilon^{\infty}_{r,n}$.
The data from Tables C.1 and C.2 suggests that neither the number nor the distribution of $\epsilon^{\infty}_{r,n}$ zeros follows a simple law. Even more perplexing is the striking similarity between the functions $\epsilon^{\infty}_{r,n}$ and $a_{n+1}(r)$ as illustrated in Figures 8.4, 8.5 and 8.6. This same behaviour was
Thus, given ǫ > 0, one should choose n = ⌈−2 − 0.3 log ǫ⌉. The
corresponding r should then be chosen to be the largest zero of ǫ^∞_{r,n},
which is approximately of size n. This choice of r ≈ n is consistent
with the shelf behaviour demonstrated in Figures 1.5, 1.6 and 1.7. In
each case, the transition in the decay rate of the a_k(r) occurs at the
shelf, where r is approximately equal to n.
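The rule above can be packaged directly. The sketch below assumes the logarithm in the rule is natural (which matches an error decay of roughly e^{−(6.8+3.3n)}), and uses r = n as a crude stand-in for the largest zero of ǫ^∞_{r,n}; both the name and the stand-in are ours.

```python
import math

def lanczos_parameters(eps):
    """Choose (n, r) for a target uniform relative error eps.

    Uses the empirical rule n = ceil(-2 - 0.3 ln(eps)); r = n is a rough
    proxy for the largest zero of the limiting relative error, which the
    text places at approximately n.
    """
    if not 0.0 < eps < 1.0:
        raise ValueError("eps must lie in (0, 1)")
    n = math.ceil(-2.0 - 0.3 * math.log(eps))
    r = float(n)   # crude proxy; the text prescribes the largest zero
    return n, r
```

For example, a target of ǫ = 10⁻¹⁰ yields n = 5 under this rule.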
[Figure: − log M_{r(n),n,15} versus n for 0 ≤ n ≤ 60, together with the linear fit 6.8 + 3.3n; vertical axis from 0 to 210.]
Appendix A
Preliminary Steps of Lanczos'
Derivation
Appendix B
A Primer on Chebyshev
Polynomials
\[
(1 - x^2)\frac{d^2 y}{dx^2} - x\frac{dy}{dx} + n^2 y = 0\,.
\]
T_0(x) = 1
T_1(x) = x
T_2(x) = 2x^2 − 1
T_3(x) = 4x^3 − 3x
T_4(x) = 8x^4 − 8x^2 + 1
T_5(x) = 16x^5 − 20x^3 + 5x .
which must be taken into account for manipulations in the Hilbert space
setting.
The link between Chebyshev and Fourier series is the transformation
x → cos θ. This transforms f (x) into an even function of θ which can
then be written as a Fourier cosine series.
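Under x → cos θ the polynomial T_n(x) becomes cos(nθ), and this is easy to check numerically against the standard three-term recurrence T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x); a minimal sketch (function name is ours):

```python
import math

def cheb_T(n, x):
    """Evaluate T_n(x) by the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x                 # T_0(x), T_1(x)
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# Under x -> cos(theta), T_n(x) becomes cos(n*theta):
theta = 0.7
for n in range(6):
    assert abs(cheb_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-12
```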
As noted, there are many identities involving Chebyshev polynomi-
als which follow from their interpretation as trigonometric functions.
The important ones for our purposes are:
\[
T_{2n}\!\left(\sqrt{1 - x^2}\right) = (-1)^n\, T_{2n}(x) \tag{B.3}
\]
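Identity (B.3) can be spot-checked with the cosine representation T_n(x) = cos(n arccos x), valid on [−1, 1]; a small sketch:

```python
import math

def T(n, x):
    """T_n(x) = cos(n arccos x) for |x| <= 1."""
    return math.cos(n * math.acos(x))

# Check T_{2n}(sqrt(1 - x^2)) = (-1)^n T_{2n}(x) at a few sample points.
for n in range(1, 5):
    for x in (0.1, 0.4, 0.8):
        lhs = T(2 * n, math.sqrt(1.0 - x * x))
        rhs = (-1) ** n * T(2 * n, x)
        assert abs(lhs - rhs) < 1e-9
```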
Appendix C
Empirical Error Data
was estimated using the first 5 and then 15 terms of the sum. These
columns are labeled M_{r(n),n,5} and M_{r(n),n,15}, respectively. In addition,
the location of the maximum of $\sum_{k=n+1}^{n+15} a_k(r) H_k(it)$ was
determined; this data appears in the last column, labeled t_max.