I explain his method in this post; to simplify the presentation, I use less sophisticated bounds on θ(x) than Chebyshev did. As an application, I numerically calculate the sum
$$\sum_{p=2}^{\infty} \frac{1}{p \log p}$$
(the sum running over primes p) with an explicit bound on the error.
The method is to calculate the sum up to an integer q and to estimate the tail using bounds on the function θ. I thus write
$$S = \sum_{p=2}^{\infty} f(p) = \sum_{p \le q} f(p) + \sum_{q < p} f(p)$$
Partial summation gives
$$S = S_q + R_q$$
with
$$S_q = \sum_{p \le q} f(p) - \frac{\theta(q) f(q)}{\log q} \qquad\text{and}\qquad R_q = -\int_q^{+\infty} \theta(x) \left(\frac{f(x)}{\log x}\right)' \, dx$$
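To see the partial-summation identity in action, here is a short Python check of my own (not from the original post) that verifies it on a finite range (q, Q], where the integral against the step function θ reduces to an exact finite sum:

```python
# Verify the partial-summation identity on a finite range (q, Q]:
#   sum_{q < p <= Q} f(p) = theta(Q) g(Q) - theta(q) g(q) - int_q^Q theta(x) g'(x) dx
# where g(x) = f(x)/log x and theta(x) = sum_{p <= x} log p.
import bisect
import math
from itertools import accumulate

from sympy import primerange

def f(x):
    return 1 / (x * math.log(x))

def g(x):
    return f(x) / math.log(x)  # g(x) = 1 / (x log^2 x)

q, Q = 10, 10_000
plist = list(primerange(2, Q + 1))
cumlog = list(accumulate(math.log(p) for p in plist))

def theta(x):
    i = bisect.bisect_right(plist, x)
    return cumlog[i - 1] if i else 0.0

lhs = sum(f(p) for p in plist if q < p <= Q)

# theta is constant between consecutive primes, so the integral
# int_q^Q theta(x) g'(x) dx is an exact finite sum over those intervals.
pts = [q] + [p for p in plist if q < p <= Q] + [Q]
integral = sum(theta(pts[i]) * (g(pts[i + 1]) - g(pts[i]))
               for i in range(len(pts) - 1))
rhs = theta(Q) * g(Q) - theta(q) * g(q) - integral
assert abs(lhs - rhs) < 1e-9
```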
## Application
As a concrete example, take
$$f(x) = \frac{1}{x \log x},$$
then
$$\left(\frac{f(x)}{\log x}\right)' = -\frac{\log x + 2}{x^2 \log^3 x},$$
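This derivative is easy to verify symbolically; a quick sympy check (my own, not in the original post):

```python
# Check that (f(x)/log x)' = -(log x + 2) / (x^2 log^3 x) for f(x) = 1/(x log x)
import sympy as sp

x = sp.symbols('x', positive=True)
f = 1 / (x * sp.log(x))
derivative = sp.diff(f / sp.log(x), x)
claimed = -(sp.log(x) + 2) / (x**2 * sp.log(x)**3)
assert sp.simplify(derivative - claimed) == 0
```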
thus
$$R_q = \int_q^{+\infty} \theta(x) \, \frac{\log x + 2}{x^2 \log^3 x} \, dx$$
I now use the following lower and upper bounds on θ(x): for all real x ≥ 1,
$$\theta_1(x) \le \theta(x) \le \theta_2(x)$$
with
$$\theta_1(x) = Ax - 5\log x - \frac{12}{5} A x^{1/2} - \frac{5}{2\log 6} \log^2 x$$
$$\theta_2(x) = \frac{6}{5} Ax + \frac{5}{\log 6} \log^2 x$$
$$A = \frac{1}{2}\log 2 + \frac{1}{3}\log 3 + \frac{1}{5}\log 5 - \frac{1}{30}\log 30$$
These inequalities can be obtained by methods similar to those in this blog post. Therefore
$$\int_q^{+\infty} \theta_1(x) \, \frac{\log x + 2}{x^2 \log^3 x} \, dx \le R_q \le \int_q^{+\infty} \theta_2(x) \, \frac{\log x + 2}{x^2 \log^3 x} \, dx$$
For example, if q = 1000, then S_q = 1.472 (to 3 decimal places). Numerical integration gives 0.148 ≤ R_q ≤ 0.187.
Thus 1.620 ≤ S ≤ 1.659. This is consistent with the exact value S ≈ 1.637, which can be found on page 6 of the paper "High precision computation of Hardy-Littlewood constants" by Henri Cohen.
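The whole computation can also be reproduced outside Mathematica. Here is a Python sketch of my own, using sympy for the primes and scipy for the numerical integration, with θ1 and θ2 as stated above:

```python
# Reproduce the q = 1000 computation: S_q from the primes up to q,
# and the bracket on R_q from theta1(x) <= theta(x) <= theta2(x).
import math

from scipy.integrate import quad
from sympy import primerange

A = math.log(2)/2 + math.log(3)/3 + math.log(5)/5 - math.log(30)/30

def theta1(x):
    return (A*x - 5*math.log(x) - (12/5)*A*math.sqrt(x)
            - 5/(2*math.log(6)) * math.log(x)**2)

def theta2(x):
    return (6/5)*A*x + 5/math.log(6) * math.log(x)**2

def weight(x):  # (log x + 2) / (x^2 log^3 x)
    return (math.log(x) + 2) / (x**2 * math.log(x)**3)

q = 1000
primes = list(primerange(2, q + 1))
theta_q = sum(math.log(p) for p in primes)
Sq = sum(1/(p*math.log(p)) for p in primes) - theta_q/(q*math.log(q)**2)

R_lo, _ = quad(lambda x: theta1(x)*weight(x), q, math.inf)
R_hi, _ = quad(lambda x: theta2(x)*weight(x), q, math.inf)

print(f"S_q = {Sq:.3f}, {Sq + R_lo:.3f} <= S <= {Sq + R_hi:.3f}")
```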
I illustrate the formulas in Mathematica.

The code to calculate S_q:

[Mathematica code screenshot]

The code to bound the error R_q:

[Mathematica code screenshot]

[Plot: the blue area is the approximation; the red line is the exact value.]
Note that the sum over primes
$$\sum_{p=2}^{\infty} f(p)$$
converges, whereas the sum over integers
$$\sum_{n=2}^{\infty} f(n)$$
diverges. If the sum over integers converged, it would be easier to crudely approximate the tail.
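The contrast is easy to observe numerically; a small Python illustration of my own:

```python
# Partial sums of 1/(n log n) over integers keep growing (like log log N),
# while the partial sums over primes level off.
import math

from sympy import primerange

def integer_sum(N):
    return sum(1/(n*math.log(n)) for n in range(2, N + 1))

def prime_sum(N):
    return sum(1/(p*math.log(p)) for p in primerange(2, N + 1))

for N in (10**3, 10**4, 10**5):
    print(N, round(integer_sum(N), 3), round(prime_sum(N), 3))
```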
Other posts about Chebyshev's mémoire