Russian Mathematical Surveys, 2024, Volume 79, Issue 1, Pages 53–126
DOI: https://doi.org/10.4213/rm10162e

Voronoi's formulae and the Gauss problem

D. A. Popov

Lomonosov Moscow State University, Belozersky Research Institute of Physico-Chemical Biology
Abstract: We present classical and new results on the remainder term in the circle problem. The proofs are based on the application of various versions of Voronoi's formula.
Bibliography: 54 titles.
Keywords: the circle problem, Voronoi's formula, long and short intervals.
Received: 18.12.2023
Russian version:
Uspekhi Matematicheskikh Nauk, 2024, Volume 79, Issue 1(475), Pages 59–134
DOI: https://doi.org/10.4213/rm10162
Document Type: Article
UDC: 511.335+511.338
MSC: Primary 58J50; Secondary 11F72
Language: English
Original paper language: Russian

Introduction

Let $A(x)$ be the number of integer points in the disc of radius $\sqrt{x}$ . The quantity $P(x)$ (the remainder term in the Gauss circle problem) is defined by

$$ \begin{equation} A(x)=\pi x+P(x). \end{equation} \tag{1} $$

The circle problem (or Gauss circle problem) is to prove the estimate

$$ \begin{equation} P(x)=O(x^{1/4+\varepsilon})\quad \forall\,\varepsilon>0\quad (x \to \infty). \end{equation} \tag{2} $$

Below, by the Gauss problem we mean the general problem of investigating the properties of the quantity $P(x)$ as $x \to \infty$.

The quantity $A(x)$ can be written in the form

$$ \begin{equation} A(x)=\sum_{0 \leqslant n \leqslant x} r(n), \end{equation} \tag{3} $$
where $r(n)$ is the number of representations of an integer $n$ as a sum of the squares of two integers. So $P(x)$ is a piecewise-linear function with discontinuities of the first kind at $x=n$, where $n$ is such that $r(n) \ne 0$. Note that
$$ \begin{equation} \begin{gathered} \, r(n) \leqslant \overline{r}(n)\quad (n \geqslant n_0)\quad\text{and}\quad \overline{r}(x)=\exp\biggl\{\frac{\ln x}{\ln_2 x}\biggr\}, \\ \text{where } \ln_k(n)=\underbrace{\ln\ln \ldots\ln}_k(n). \end{gathered} \end{equation} \tag{4} $$
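
For orientation, all the quantities introduced so far are easy to compute directly for moderate $x$. The following Python sketch (an illustration only, not used anywhere in the arguments) counts lattice points in the disc, checks definition (3), and tabulates the normalized remainder $P(x)x^{-1/4}$, whose moderate size is what the circle problem (2) predicts.

    import math

    def r(n):
        # r(n) = #{(a, b) in Z^2 : a^2 + b^2 = n}
        if n == 0:
            return 1
        cnt = 0
        for a in range(-math.isqrt(n), math.isqrt(n) + 1):
            b2 = n - a * a
            s = math.isqrt(b2)
            if s * s == b2:
                cnt += 2 if s > 0 else 1
        return cnt

    def A(x):
        # number of integer points in the disc of radius sqrt(x)
        return sum(2 * math.isqrt(x - m * m) + 1
                   for m in range(-math.isqrt(x), math.isqrt(x) + 1))

    for x in [10**2, 10**4, 10**6]:
        P = A(x) - math.pi * x
        print(x, A(x), round(P, 3), round(P / x**0.25, 3))
    print(A(10**4) == sum(r(n) for n in range(10**4 + 1)))   # definition (3)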

In works on analytic number theory, the Gauss problem is typically considered together with the Dirichlet problem and the problem of studying the quantity $E(T)$. The Dirichlet problem deals with the quantity $\Delta(x)$ defined by

$$ \begin{equation} D(x)=x(\ln x+2\gamma-1)+\Delta(x), \end{equation} \tag{5} $$
where $\gamma$ is the Euler constant, and
$$ \begin{equation} D(x)=\sum_{n \leqslant x} d(n)\qquad (d(n)\text{ is the number of divisors of } n). \end{equation} \tag{6} $$
The quantity $E(T)$ is defined by
$$ \begin{equation*} \int_0^T\biggl|\zeta\biggl(\frac{1}{2}+it\biggr)\biggr|^2\,dt= T\ln\frac{T}{2\pi}+(2\gamma-1)T+E(T), \end{equation*} \notag $$
where $\zeta(\,\cdot\,)$ is the Riemann zeta function.

There is a deep connection between these three problems, hence most results obtained for one of them can easily be transferred to the other two. The Dirichlet problem (divisor problem) and the corresponding problem for $E(T)$ consist in proving the estimates

$$ \begin{equation} \Delta(x)=O(x^{1/4+\varepsilon})\quad\text{and}\quad E(T)=O(T^{1/4+\varepsilon})\qquad (T \to \infty). \end{equation} \tag{7} $$

In the present paper we consider the quantity $P(x)$. This is explained by the fact that the Gauss problem is closely related to a number of other problems in number theory and spectral theory.

In studies of the Gauss problem two directions can be distinguished.

The main objective of the first direction is to improve estimates of the form

$$ \begin{equation} P(x)=O(x^{\theta+\varepsilon})\quad \forall\,\varepsilon>0\quad (x \to \infty). \end{equation} \tag{8} $$
Let us briefly discuss the general scheme of research in the first direction. At the first step the problem is reduced to estimating trigonometric sums. This is done via Voronoi’s formulae, Landau’s formulae (see Chapter I), or elementary formulae of the form
$$ \begin{equation*} P(x)=-8\sum_{1\leqslant j\leqslant \sqrt{x/2}} \psi(\sqrt{x-j^2}\,)+O(1) \end{equation*} \notag $$
and
$$ \begin{equation*} \Delta(x)=-2\sum_{n\leqslant \sqrt{x}}\psi\biggl(\frac{x}{n}\biggr)+O(1), \end{equation*} \notag $$
where $\psi(x)=x-[x]-1/2$ ($[x]$ is the integer part of $x$). Next, the estimate
$$ \begin{equation*} \begin{gathered} \, \sum_{a\leqslant n\leqslant b}\psi(f(n)) \ll NJ^{-1}+ \sum_{1\leqslant j\leqslant J}j^{-1}\biggl|\,\sum_{a\leqslant n\leqslant b} e^{2\pi ij f(n)}\biggr| \\ (N \leqslant a<b\leqslant 2N) \end{gathered} \end{equation*} \notag $$
is used, where $J$ depends on $N$ and the form of the function $f$.
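
Before trigonometric sums enter, the first of the elementary formulae above can already be tested numerically (this is only a sanity check, not part of the argument): the difference between the exact value of $P(x)$ and the $\psi$-sum should stay bounded, in accordance with the $O(1)$ term.

    import math

    def psi(t):
        # psi(t) = t - [t] - 1/2
        return t - math.floor(t) - 0.5

    def P_exact(x):
        # P(x) = A(x) - pi*x via a direct lattice-point count
        A = sum(2 * math.isqrt(x - m * m) + 1
                for m in range(-math.isqrt(x), math.isqrt(x) + 1))
        return A - math.pi * x

    def P_psi(x):
        # -8 * sum_{1 <= j <= sqrt(x/2)} psi(sqrt(x - j^2)), without the O(1) term
        J = math.isqrt(x // 2)
        return -8.0 * sum(psi(math.sqrt(x - j * j)) for j in range(1, J + 1))

    for x in [10**3, 10**4, 10**5, 10**6]:
        p1, p2 = P_exact(x), P_psi(x)
        print(x, round(p1, 3), round(p2, 3), "diff:", round(p1 - p2, 3))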

The resulting trigonometric sums are estimated using various versions of the van der Corput method or the Bombieri–Iwaniec method.

Research related to the first direction has a long history. By 1986, using the two-dimensional version of the van der Corput method, estimate (8) was proved for

$$ \begin{equation*} \theta=\dfrac{139}{429}=0.324001\ldots\,. \end{equation*} \notag $$
In 1986 a new method for estimating trigonometric sums (the Bombieri–Iwaniec method) was proposed. In 1988, with the help of this method, estimate (8) was proved for
$$ \begin{equation*} \theta=\frac{7}{22}=0.318181\ldots\,. \end{equation*} \notag $$
This estimate was slightly improved by a further development of the Bombieri–Iwaniec method, and in 2023 estimate (8) for
$$ \begin{equation*} \theta=\dfrac{517}{1648}=0.314483\ldots \end{equation*} \notag $$
was put forward. It is believed that the smallest $\theta$ possible in the Bombieri–Iwaniec method is
$$ \begin{equation*} \theta=\frac{5}{16}=0.3125. \end{equation*} \notag $$

The aim of the second direction is to study various properties of the quantity $P(x)$ by use of Voronoi’s formulae. In this way, no complicated methods for estimating trigonometric sums are used. Results in this direction are capable of establishing a connection between estimates for the quantities $|P(x)|$ and various characteristics of the behaviour of this quantity as $x \to \infty$, which can serve as a source of new approaches to the Gauss problem. In addition, these results can prove useful in the above problems of number theory and spectral theory.

From the truncated Voronoi formula one can easily obtain estimate (8) for $\theta=1/3$. Estimates (8) for $\theta\geqslant 1/3$ will be called trivial. The problem of obtaining a non-trivial estimate (8) (for $\theta \leqslant 1/3-\delta$, $\delta>0$) without using estimates for trigonometric sums remains open. Karatsuba believed this to be impossible. The available results support this view.

The purpose of our paper is to give a self-contained and fairly complete presentation of the results concerning various properties of the quantity $P(x)$. Thus, the results presented below can be looked upon as ones in the second direction. All these results were obtained using Voronoi’s formulae.

The proofs of Voronoi’s formulae are presented in Chapter I, which is organized as follows.

Voronoi’s formula

$$ \begin{equation} P(x)=\sqrt{x}\,\sum_{j=1}^\infty \frac{r(j)}{\sqrt{j}}\, J_1(2\pi \sqrt{jx}\,) \end{equation} \tag{9} $$
($x$ is not an integer $n$ such that $r(n) \ne 0$, and $J_\nu(\,\cdot\,)$ is a Bessel function) was discovered in 1905. The arguments used by Voronoi to derive this formula were not rigorous, and Voronoi himself noted that new ideas were required for a rigorous justification.

It can easily be shown that, for any sufficiently smooth and rapidly decreasing function $g(x)$, $x \in \mathbb{R}_+$,

$$ \begin{equation} \sum_{n=0}^\infty r(n)g(n)=\pi\sum_{n=0}^\infty r(n)\int_0^\infty g(t) J_0(2\pi \sqrt{nt}\,)\,dt. \end{equation} \tag{10} $$
This formula will be referred to as the regularized Voronoi formula. A formal application of (10) to
$$ \begin{equation*} g(t)=\begin{cases} 1, & t \leqslant x, \\ 0, & t>x, \end{cases} \end{equation*} \notag $$
produces Voronoi’s formula.

The first rigorous proof of Voronoi’s formula (9) was given by Hardy in 1916. This proof, which is quite involved, depends on the subtle Riesz theorem on the analytic properties of certain Dirichlet series. Since then, several different proofs of Voronoi’s formula have appeared, and all of them are fairly involved. The proof in § 4, which lies entirely in the realm of real analysis, depends on Landau’s formula (see § 2) and the Landau–Hardy identity (see § 3). This proof enables us to study the character of the convergence of the series on the right-hand side of (9) and to describe the Gibbs phenomena which appear for $x=n$, where $n$ is such that $r(n) \ne 0$.

For applications one must replace the series in Voronoi’s formula (9) by a finite sum. This can be done using Perron’s formula.

Let $\zeta_k(s)$ be the generating Dirichlet series for the sequence of quantities $r(n)$:

$$ \begin{equation} \zeta_k(s)=\sum_{n=1}^\infty \frac{r(n)}{n^s}\,,\qquad s=\sigma+it,\quad \sigma>1. \end{equation} \tag{11} $$
By Perron’s formula
$$ \begin{equation} \begin{gathered} \, \frac{1}{2}\bigl(A(x+0)+A(x-0)\bigr)=\frac{1}{2\pi i} \int_{b-i\infty}^{b+i\infty}\zeta_k(s)x^s\,\frac{ds}{s}\,, \\ \nonumber b>1,\quad s=\sigma+it. \end{gathered} \end{equation} \tag{12} $$
The required version of the truncated Perron formula is proved in Appendix A. In this formula the integral over the straight line $\sigma=b$ in (12) is replaced by the integral over the segment $\sigma=b$, $|t|<T$. The notation $\zeta_k(s)$ for the sum on the right-hand side of (11) is used because $\zeta_k(s)$ is the Dedekind zeta function of the field
$$ \begin{equation*} k=\mathbb{Q}(i)\qquad \end{equation*} \notag $$
(recall that $z \in \mathbb{Q}(i)$ if $z=x+iy$, $x,y \in \mathbb{Q})$. The ring of integers $\mathbb{Z}(i) \subset \mathbb{Q}(i)$ (the ring of Gaussian integers $z \in \mathbb{Z}(i)$, $z=x+iy$, $x,y \in \mathbb{Z}$) is a principal ideal ring. The integral ideals $\mathfrak u$ of the field $\mathbb{Q}(i)$ are in a one-to-one correspondence with the elements $z \in \mathbb{Z}(i)$ such that $x^2+y^2=n$, $r(n) \ne 0$. The norm $N(\mathfrak u)$ of the ideal $\mathfrak u=\mathfrak u(z)$ is $x^2+y^2$, and therefore
$$ \begin{equation} A(x)=\sum_{N(\mathfrak u)\leqslant x} 1. \end{equation} \tag{13} $$
By the arithmetic of the ring $\mathbb{Z}(i)$,
$$ \begin{equation} r(n)=4\sum_{d|n}\chi_4(d)=4\bigl(d_1(4k+1)-d_1(4k+3)\bigr), \end{equation} \tag{14} $$
where $d_1(4k+1)$ is the number of divisors of $n$ of the form $4k+1$, $d_1(4k+3)$ is the number of divisors of $n$ of the form $4k+3$, and
$$ \begin{equation} \chi_4(d)=\begin{cases} 1, & d \equiv 1\!\!\!\pmod{4}, \\ -1, & d \equiv 3\!\!\!\pmod{4}, \\ 0, & d\text{ is even}, \end{cases} \end{equation} \tag{15} $$
is a non-trivial Dirichlet character modulo 4. From (14) and (11) it follows that
$$ \begin{equation} \zeta_k(s)=4\zeta(s)L(s|\chi_4)\qquad (k=\mathbb{Q}(i)). \end{equation} \tag{16} $$
In this formula $\zeta(\,\cdot\,)$ is the Riemann zeta function, and $L(s|\chi_4)$ is the Dirichlet $L$-function corresponding to the character $\chi_4$, that is,
$$ \begin{equation} L(s|\chi_4)=\sum_{n=1}^\infty \frac{\chi_4(n)}{n^s}\,,\qquad s=\sigma+it,\quad \sigma>1. \end{equation} \tag{17} $$
Note that estimate (4) for $r(n)$ follows from (14).
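
Formula (14) and the factorization (16) are easy to test numerically. In the following sketch (illustrative only; the truncation points are arbitrary) the divisor-sum expression for $r(n)$ is compared with a direct count, and both sides of (16) are evaluated at the real point $s=2$ by crude partial summation, so the agreement is only to a few decimal places.

    import math

    def r_direct(n):
        # number of pairs (a, b) of integers with a^2 + b^2 = n
        cnt = 0
        for a in range(-math.isqrt(n), math.isqrt(n) + 1):
            b2 = n - a * a
            s = math.isqrt(b2)
            if s * s == b2:
                cnt += 2 if s > 0 else 1
        return cnt

    def r_chi(n):
        # formula (14): r(n) = 4 * sum_{d | n} chi_4(d)
        total = 0
        for d in range(1, n + 1):
            if n % d == 0:
                if d % 4 == 1:
                    total += 1
                elif d % 4 == 3:
                    total -= 1
        return 4 * total

    assert all(r_direct(n) == r_chi(n) for n in range(1, 500))

    # crude check of (16) at s = 2: sum r(n)/n^2 vs 4*zeta(2)*L(2 | chi_4)
    s = 2.0
    lhs = sum(r_direct(n) / n**s for n in range(1, 20000))          # zeta_k(2), truncated
    zeta_s = sum(1.0 / k**s for k in range(1, 100000))              # zeta(2), truncated
    L_s = sum((-1)**k / (2 * k + 1)**s for k in range(100000))      # L(2 | chi_4), truncated
    print(round(lhs, 4), round(4 * zeta_s * L_s, 4))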

Perron’s formula is used in the proof of the truncated Voronoi formula, which asserts that

$$ \begin{equation} P(x)=-\frac{x^{1/4}}{\pi}\sum_{n=1}^N \frac{r(n)}{n^{3/4}} \cos\biggl(2\pi\sqrt{nx}+\frac{\pi}{4}\biggr)+\Delta_NP(x), \end{equation} \tag{18} $$
where
$$ \begin{equation} \Delta_NP(x) \ll \frac{x^{1/2+\varepsilon}}{\sqrt{N}}+N^\varepsilon\quad \forall\,\varepsilon>0. \end{equation} \tag{19} $$
It is worth pointing out that the truncated Voronoi formula is a basis of investigations of the quantity $P(x)$.

The proof of the truncated Voronoi formula (18) is presented in § 5. In passing, we refine estimate (19).
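
For a concrete impression of (18) and (19), one can compare the finite sum in (18) with the exact value of $P(x)$ obtained from a lattice-point count. The sketch below (an illustration only; the truncation $N=20000$ is an arbitrary choice) evaluates both at half-integer points, away from the jumps of $A(x)$.

    import math

    M = 20000                               # truncation point (the N in (18))
    r = [0] * (M + 1)                       # r[n] = #{(a, b) in Z^2 : a^2 + b^2 = n}
    for a in range(-math.isqrt(M), math.isqrt(M) + 1):
        bmax = math.isqrt(M - a * a)
        for b in range(-bmax, bmax + 1):
            r[a * a + b * b] += 1

    def P_exact(x):
        # A(x) - pi*x; A is constant between integers, so a lattice count at [x] suffices
        X = int(x)
        A = sum(2 * math.isqrt(X - m * m) + 1
                for m in range(-math.isqrt(X), math.isqrt(X) + 1))
        return A - math.pi * x

    def P_trunc(x):
        # the finite sum in (18), i.e. (18) without the remainder Delta_N P(x)
        return -(x**0.25 / math.pi) * sum(
            r[n] / n**0.75 * math.cos(2 * math.pi * math.sqrt(n * x) + math.pi / 4)
            for n in range(1, M + 1) if r[n])

    for x in [1000.5, 10000.5, 100000.5]:   # half-integers, away from the jumps of A(x)
        print(x, round(P_exact(x), 2), round(P_trunc(x), 2))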

The generating Dirichlet series for $d(n)$ is the squared Riemann zeta function

$$ \begin{equation*} \zeta^2(s)=\sum_{n=1}^\infty \frac{d(n)}{n^s}\,,\qquad s=\sigma+it,\quad \sigma>1. \end{equation*} \notag $$
By Perron’s formula
$$ \begin{equation*} \frac12(D(x+0)+D(x-0))=\frac{1}{2\pi i}\int_{b-i\infty}^{b+i\infty} \zeta^2(s)x^s\,\frac{ds}{s}\,, \end{equation*} \notag $$
and the truncated Voronoi formula for the quantity $\Delta(x)$ (see (5)) reads
$$ \begin{equation} \Delta(x)=\frac{x^{1/4}}{\sqrt{\pi}}\sum_{n=1}^N \frac{d(n)}{n^{3/4}} \cos\biggl(4\pi\sqrt{nx}-\frac{\pi}{4}\biggr)+\Delta_N(\Delta(x)), \end{equation} \tag{20} $$
where
$$ \begin{equation} \Delta_N(\Delta(x)) \ll \frac{x^{1/2+\varepsilon}}{\sqrt{N}}+N^\varepsilon\quad \forall\,\varepsilon>0. \end{equation} \tag{21} $$
A comparison of (18) and (20) shows that all results based on the truncated Voronoi formula for $\Delta(x)$ can be carried over to $P(x)$, and vice versa.

Let us briefly describe the content of Chapters II and III.

In Chapter II we consider estimates for some quantities which characterize the behaviour of $P(x)$ on long intervals. An interval $I \subset [0,2T]$ is called long if its length $|I|$ satisfies $|I|>CT$.

In § 6 we consider the asymptotic behaviour, as $T \to \infty$, of the moments

$$ \begin{equation*} M_k(T)=\int_0^T P^k(x)\,dx. \end{equation*} \notag $$
Only the first and second moments are considered in detail. Results on higher moments are presented without proofs. In § 7 a classical $\Omega$-estimate is established to the effect that
$$ \begin{equation*} P(x)=\Omega\bigl(x^{1/4}(\ln x)^{1/4}\bigr). \end{equation*} \notag $$
Stronger and more recent results are presented without proofs.

In § 8 we discuss sign changes in the quantity $P(x) \pm ax^{1/4}$ ($a>0$).

In § 9 new results on the distribution of values of the quantity $x^{-1/4}P(x)$ are presented without proofs.

In Chapter III we study the quantities related to the behaviour of $P(x)$ on short intervals. An interval $I \subset [0,2T]$ is called short if a two-sided estimate $|I|\asymp CT^\beta$, $\beta<1$, holds. The most interesting case is $\beta=1/2$. Chapter III is concerned with the relation between estimates for the quantities under consideration and ones for $|P(x)|$.

In § 10 we consider the local means

$$ \begin{equation*} E_k(T,H)=\frac{1}{2H} \int_{T-H}^{T+H}P^k(x)\,dx \end{equation*} \notag $$
of the quantities $P^k(x)$ for $k=1,2,4,6$.

In § 11 we prove a theorem enabling one (under certain assumptions) to estimate the quantity $|P(x)|$ from available estimates for $E_k(T,H)$.

In § 12 the properties of the Jutila integral

$$ \begin{equation*} Q(T,U,H)=\int_{T}^{T+H}\bigl(P(x+U)-P(x)\bigr)^2\,dx \end{equation*} \notag $$
are studied under various constraints on the quantities $U$ and $H$. The results that follow generalize and refine Jutila’s classical result.

In § 13 we consider the Jutila integral

$$ \begin{equation*} Q_{\rm M}(T,U,H)=\int_{T}^{T+H}\max_{0\leqslant v\leqslant U} \bigl(P(x+v)-P(x)\bigr)^2\,dx. \end{equation*} \notag $$

In § 14 we present several results relating estimates for the quantities $|P(T+x)-P(T)|$ ($0<x<U$) to ones for $|P(T)|$.

In the concluding section, § 15, we discuss the latest results on the behaviour of $P(x)$ on a long interval $[T,2T]$. These results solve the Gauss problem on some set $V \subset [T,2T]$ of measure $\mu\{V\}>CT$ ($C<1$).

Based on these results, we formulate conjectures on the structure of the set $S$ on which $|P(x)|>CT^{1/4}$ ($S \subset [T,2T]$). These conjectures are supported by numerical experiments (see the comments at the end of Chapter III).

In the concluding part of the paper we formulate some problems and conjectures, followed by two appendices. In Appendix A we prove the required variant of the truncated Perron formula. In Appendix B we prove a general theorem providing (under certain conditions) pointwise estimates for a function from available appropriate estimates of its local moments.

The notation is standard throughout the text. We write $f(x)\ll q(x)$ or $f(x)=O(q(x))$ ($q(x)>0$) if $|f(x)|<Cq(x)$ for $x>x_0$, where $C$ and $x_0$ can be specified explicitly. By $\varepsilon$ we mean an arbitrary positive quantity, which can be taken to be arbitrarily small.

An estimate of the form $f(x)=O(x^{\alpha+\varepsilon})$ ($f(x) \ll x^{\alpha+\varepsilon}$) means that $|f(x)|<C(\varepsilon)x^{\alpha+\varepsilon}$, and $f\in {\varepsilon}$ means that $f(x)=O(x^\varepsilon)$ as $x\to \infty$, $f$ is non-decreasing for sufficiently large $x$, and $f(x)>1$.

The introduction and each chapter are concluded with commentary, which also contains bibliographical references with short comments.

Remarks

1. For the Gauss and Dirichlet problems, see [1]–[5]. For more recent results, see [6] and [7].

2. Estimate (4) follows from the asymptotic estimate (see [8])

$$ \begin{equation*} r(n) \leqslant \exp\biggl\{\ln 2\,\frac{\ln n}{\ln\ln n}+ O\biggl(\frac{\ln n \ln_3 n}{(\ln_2 n)^2}\biggr)\biggr\}. \end{equation*} \notag $$

3. The Bombieri–Iwaniec method was proposed in [9]. For a modern account of the van der Corput and Bombieri–Iwaniec methods, see [10]. For the Bombieri–Iwaniec method, see also the book [11], which discusses the application of this method to the problem of the number of integer points inside convex contours in $\mathbb{R}^2$.

4. Estimates (8) have been studied extensively. For the above estimates, see [12]–[14].

5. Voronoi’s formula (more precisely, a generalized version of it) can be found in [15] (see also Linnik’s comments on that paper). For Hardy’s proof of Voronoi’s formula, see [16].

6. A proof of the truncated Voronoi formula for $\Delta(x)$ can be found in [2] and [3].

7. For an account of the required results on rings of algebraic integers and Dedekind zeta functions, see [17] and [18].

The author wishes to express his deep gratitude to M. A. Korolev for his help and support.

Chapter I. Voronoi’s formulae

1. Regularized Voronoi formula

In this section we prove formula (10). We also study its relationships with Voronoi’s formula (9) and spectral theory.

Theorem 1. Let $g$ be a continuous bounded function on $[0,\infty)$, and let

$$ \begin{equation} g(t)=O(t^{-1-\delta}) \quad (\delta>0), \quad t \to \infty. \end{equation} \tag{1.1} $$
Then the regularized Voronoi formula holds:
$$ \begin{equation} \sum_{n=0}^\infty r(n)g(n)=\pi \sum_{j=0}^\infty r(j)\int_0^\infty g(t) J_0(2\pi \sqrt{jt}\,)\,dt \end{equation} \tag{1.2} $$
provided that the series on the right-hand side is convergent.

Proof. We use the two-dimensional Poisson formula in the form
$$ \begin{equation} S_1=S_2, \end{equation} \tag{1.3} $$
where
$$ \begin{equation*} S_1:=\sum_{n\in \mathbb{Z}^2}f(n)\quad\text{and}\quad S_2:=\sum_{n\in \mathbb{Z}^2}\widehat{f}(2\pi n),\qquad n=(n_1,n_2),\quad n_i\in \mathbb{Z}, \end{equation*} \notag $$
and
$$ \begin{equation} \widehat{f}(\omega)=\int_{\mathbb{R}^2} e^{i\langle \omega,x\rangle}f(x)\,dx, \end{equation} \tag{1.4} $$
where
$$ \begin{equation*} x=(x_1,x_2),\quad \omega=(\omega_1,\omega_2),\quad\text{and}\quad \langle \omega,x\rangle=\omega_1 x_1+\omega_2 x_2. \end{equation*} \notag $$
Formula (1.3) holds under the following conditions:

(1) the function $f$ is bounded and continuous;

(2) $|f(x)|=O(|x|^{-1-\delta})$ ($\delta>0$, $|x|=\sqrt{x_1^2+x_2^2}$ );

(3) the series $S_2$ converges.

Let $f(x)=g(|x|^2)$. Then $S_1=\sum_{n=0}^\infty g(n)r(n)$; and this series converges by (1.1). Changing to the polar coordinates $x=s\eta$, where $s=|x|$ and $\eta=(\cos\varphi,\sin\varphi)$, and using the formula

$$ \begin{equation*} \int_0^{2\pi}e^{i\langle a,\eta\rangle}\,d\varphi=2\pi J_0(|a|),\qquad a=(a_1,a_2), \end{equation*} \notag $$
we find that $\widehat{f}(2\pi n)=2\pi\displaystyle\int_0^\infty g(t^2)J_0(2\pi |n|t)\,t\,dt= \pi\displaystyle\int_0^\infty g(t)J_0(2\pi |n|\sqrt{t}\,)\,dt$, and therefore
$$ \begin{equation*} S_2=\pi\sum_{j=0}^\infty r(j)\int_0^\infty g(t) J_0(2\pi\sqrt{jt}\,)\,dt. \end{equation*} \notag $$
The series $S_2$ converges in view of condition (3), and equality (1.2) follows from (1.3) for $f(x)=g(|x|^2)$. This proves Theorem 1.
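
For the particular test function $g(t)=e^{-pt}$ ($p>0$) the Bessel integral in (1.2) has the elementary closed form $\displaystyle\int_0^\infty e^{-pt}J_0(2\pi\sqrt{nt}\,)\,dt=p^{-1}e^{-\pi^2 n/p}$ (integrate the series for $J_0$ term by term), so (1.2) reduces to the Jacobi-type identity $\sum_{n\geqslant 0} r(n)e^{-pn}=(\pi/p)\sum_{j\geqslant 0}r(j)e^{-\pi^2 j/p}$. This special case is easy to verify numerically; the following Python sketch is only a sanity check, with an arbitrary truncation.

    import math

    M = 4000                                 # truncation of both series; the tails are negligible
    r = [0] * (M + 1)
    for a in range(-math.isqrt(M), math.isqrt(M) + 1):
        bmax = math.isqrt(M - a * a)
        for b in range(-bmax, bmax + 1):
            r[a * a + b * b] += 1

    for p in [1.0, 2.0, 5.0]:
        lhs = sum(r[n] * math.exp(-p * n) for n in range(M + 1))
        rhs = (math.pi / p) * sum(r[j] * math.exp(-math.pi**2 * j / p) for j in range(M + 1))
        print(p, lhs, rhs)                   # the two columns should agree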

On the formal level Voronoi’s formula (9) follows directly from (1.2). In fact, it suffices to set

$$ \begin{equation*} g(r)=\begin{cases} 1, & r \leqslant \sqrt{x}\,, \\ 0, & r > \sqrt{x}\,, \end{cases} \end{equation*} \notag $$
and take into account that
$$ \begin{equation*} \int_0^a yJ_0(y)\,dy=aJ_1(a),\qquad J_0(0)=1. \end{equation*} \notag $$
A hint for the rigorous proof of Voronoi’s formula is the equality
$$ \begin{equation*} \sum_{n=0}^\infty r(n)e^{-s\sqrt{n}}=2\pi s \sum_{j=0}^\infty \frac{r(j)}{(s^2+4\pi^2 j)^{3/2}}\,,\qquad s=\sigma+it,\quad \sigma>0, \end{equation*} \notag $$
which is a consequence of (1.2) for $g(x)=e^{-s\sqrt{x}}$. In what follows we need Perron’s formula, which asserts that if
$$ \begin{equation} f(s)=\sum_n a_n e^{-\lambda_ns}\quad\text{and}\quad \lambda_n<\omega<\lambda_{n+1},\quad s=\sigma+it, \end{equation} \tag{1.5} $$
then
$$ \begin{equation} \sum_{\lambda_n<\omega}a_n=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} f(s)e^{\omega s}\,\frac{ds}{s}\qquad (\sigma>c>0). \end{equation} \tag{1.6} $$
We set $f(s)=\sum_n r(n)e^{-s\sqrt{n}}$ and change formally the order of integration and summation. Now Voronoi’s formula follows on noting that
$$ \begin{equation*} \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{e^{\mu s}}{(s^2+1)^{3/2}}\,ds=\mu J_1(\mu). \end{equation*} \notag $$

Let $f(s)=\sum_n r(n)e^{-sn}$. Using the equalities

$$ \begin{equation*} \int_0^\infty e^{-\alpha x}J_0(\beta\sqrt{x}\,)\,dx= \alpha^{-1}e^{-\beta^2/(4\alpha)} \end{equation*} \notag $$
and
$$ \begin{equation*} \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}s^{-\nu} e^{X s-y^2/(4s)}\,\frac{ds}{s}=\frac{2^\nu X^{\nu/2}}{y^\nu}\, J_\nu(y\sqrt{X}\,) \end{equation*} \notag $$
and proceeding as above, we arrive at Voronoi’s formula again.

The regularized Voronoi formula and the Gauss problem have a natural interpretation in spectral theory. Consider a closed two-dimensional manifold $M$ with Riemannian metric $g$. The Laplace operator $D=D(g)$ is negative definite and has a purely discrete spectrum ($D\varphi_n=-\lambda_n \varphi_n$, $\lambda_n\geqslant 0$). Let $N(x)$ be the number of eigenvalues $\lambda_n$ such that $\lambda_n \leqslant x$. We also recall Weyl’s formula

$$ \begin{equation} N(x)=Bx+\Delta N(x),\qquad \Delta N(x)=O(x^{1/2}). \end{equation} \tag{1.7} $$
Let $M$ be the flat torus $\mathbb{T}^2$ (the square with side length $2\pi$ with identified opposite sides which is equipped with the Euclidean metric). Then $D=\partial_x^2+\partial_y^2$, $\lambda_n=l^2+m^2$ ($l,m \in \mathbb{Z}$), $\varphi_n(x,y)=e^{i(lx+my)}$, and $r(n)$ is the multiplicity of the eigenvalue $\lambda_n$. In this case, in Weyl’s formula (1.7) we have
$$ \begin{equation} N(x)=A(x),\quad B=\pi,\quad\text{and}\quad \Delta N(x)=P(x). \end{equation} \tag{1.8} $$
The operator $D$ is not of trace class and has no trace. The operator $f(D)$ is of trace class if the function $f(x)$ decreases sufficiently rapidly as $x \to \infty$. In this case
$$ \begin{equation} \begin{aligned} \, f(D)\varphi(x)&=\int_{\mathbb{T}^2}K(x,w)\varphi(w)\,d\mu(w), \\ f(D)\varphi_n&=f(\lambda_n)\varphi_n\quad (\mu \text{ is an invariant measure on $\mathbb{T}^2$}). \end{aligned} \end{equation} \tag{1.9} $$
The operator $f(D)$ has the trace
$$ \begin{equation} \operatorname{Tr}f(D)=\sum_{n=0}^\infty f(\lambda_n), \end{equation} \tag{1.10} $$
and the trace formula holds:
$$ \begin{equation} \operatorname{Tr}f(D)=\sum_{n=0}^\infty r(n)f(n)= \int_{\mathbb{T}^2}K(z,z)\,d\mu(z). \end{equation} \tag{1.11} $$
It can be shown that
$$ \begin{equation} \int_{\mathbb{T}^2}K(z,z)\,d\mu(z)=\pi\sum_{n=0}^\infty r(n) \int_0^\infty f(t)J_0(2\pi \sqrt{nt}\,)\,dt. \end{equation} \tag{1.12} $$
So the regularized Voronoi formula is the trace formula (1.11) for the operator $f(D)$ on $\mathbb{T}^2$. This suggests an analogy between the regularized Voronoi formula and Selberg’s formula, which follows from the trace formula for the Laplace operator on $\Gamma \setminus H$, where $\Gamma$ is a cocompact group of motions of the Lobachevskii plane $H$. This analogy goes quite far. In particular, there is an analogue of Voronoi’s formula for the second term in Weyl’s formula (1.7).

2. Landau’s formula

Landau’s formula is obtained by formal termwise integration from 0 to $x$ of the series on the right-hand side of (9). In this section we give a proof of Landau’s formula which does not rely on Voronoi’s formula. The starting point here is the following version of Poisson’s formula:

$$ \begin{equation} \sum_{-\beta \leqslant n \leqslant \beta}f(n)=\sum_{k=-\infty}^{+\infty} \int_{-\beta}^\beta f(t)\cos(2\pi kt)\,dt. \end{equation} \tag{2.1} $$
It is assumed that $f$ is an even function on $[-\beta,\beta]$, which has bounded variation and is continuous at the points $t=n \in \mathbb{Z}$.

We set

$$ \begin{equation} P_1(x)=\int_0^x P(y)\,dy. \end{equation} \tag{2.2} $$

Theorem 2. Landau’s formula holds:

$$ \begin{equation} P_1(x)=\frac{x}{\pi}\sum_{n=1}^\infty \frac{r(n)}{n}\,J_2(2\pi \sqrt{nx}\,), \end{equation} \tag{2.3} $$
where the series on the right converges absolutely and uniformly for $x>1$.

Proof. Consider the quantity
$$ \begin{equation} Q(x)=\int_0^x A(y)\,dy \end{equation} \tag{2.4} $$
and define $\Phi(x,u)$ by
$$ \begin{equation} \Phi(x,u)=\sum_{|n|\leqslant \sqrt{x-u^2}}(x-u^2-n^2). \end{equation} \tag{2.5} $$
Since
$$ \begin{equation*} Q(x)=\sum_{0\leqslant j\leqslant x}r(j)(x-j)= \sum_{m^2+n^2\leqslant x}(x-m^2-n^2), \end{equation*} \notag $$
we have
$$ \begin{equation} Q(x)=\sum_{|m|\leqslant \sqrt{x}}\Phi(x,m). \end{equation} \tag{2.6} $$

Applying (2.1) to the sum (2.5) we find that

$$ \begin{equation*} \Phi(x,u)=\sum_{b=-\infty}^{+\infty}\int_{-\sqrt{x-u^2}}^{\sqrt{x-u^2}} (x-u^2-v^2)\cos(2\pi bv)\,dv. \end{equation*} \notag $$

Substituting this expression into (2.6) and applying (2.1) yields

$$ \begin{equation} Q(x)={\sum_a\,\sum_b} Q(x,a,b), \end{equation} \tag{2.7} $$
where
$$ \begin{equation*} Q(x,a,b)=\int_{u^2+v^2 \leqslant x}\cos(2\pi au) \cos(2\pi bv)(x-u^2-v^2)\,du\,dv. \end{equation*} \notag $$

Note that

$$ \begin{equation} Q(x,0,0)=\frac{\pi}{2}\,x^2. \end{equation} \tag{2.8} $$
Let us show that
$$ \begin{equation} Q(x,a,b)=\frac{x}{\pi}\, \frac{J_2(2\pi \sqrt{x}\,\sqrt{a^2+b^2}\,)}{a^2+b^2}\,. \end{equation} \tag{2.9} $$
To do this we write
$$ \begin{equation} Q(x,a,b)=\int_0^x P(y,a,b)\,dy, \end{equation} \tag{2.10} $$
where
$$ \begin{equation} P(y,a,b)=\int_{u^2+v^2 \leqslant y} \cos\bigl(2\pi (au+bv)\bigr)\,du\,dv. \end{equation} \tag{2.11} $$
Let us change the variables in the integral (2.11) from $(u,v)$ to $(\xi,\eta)$, where
$$ \begin{equation*} \xi=\frac{au+bv}{\sqrt{a^2+b^2}}\quad\text{and}\quad \eta=\frac{-bu+av}{\sqrt{a^2+b^2}}\,. \end{equation*} \notag $$
Since
$$ \begin{equation*} J_1(z)=\frac{2z}{\pi}\int_0^1\sqrt{1-y^2}\,\cos(zy)\,dy, \end{equation*} \notag $$
we have
$$ \begin{equation} P(x,a,b)=\sqrt{\frac{x}{a^2+b^2}}\,J_1(2\pi \sqrt{x}\, \sqrt{a^2+b^2}\,), \end{equation} \tag{2.12} $$
and therefore
$$ \begin{equation} Q(x,a,b)=\frac{2}{\sqrt{a^2+b^2}}\int_0^{\sqrt{x}} t^2J_1(2\pi t\,\sqrt{a^2+b^2}\,)\,dt. \end{equation} \tag{2.13} $$
Equality (2.9) follows from (2.13), since
$$ \begin{equation*} z^2J_1(z)=\frac{d}{dz}\bigl(z^2 J_2(z)\bigr). \end{equation*} \notag $$
Substituting (2.9) into (2.7) and using equality (2.8) we have
$$ \begin{equation} Q(x)=\frac{\pi}{2}\,x^2+\frac{x}{\pi}\sum_{n=1}^\infty\frac{r(n)}{n}\, J_2(2\pi\sqrt{nx}\,). \end{equation} \tag{2.14} $$
Now the required result (2.3) follows from (2.14) since
$$ \begin{equation} Q(x)=\frac{\pi}{2}\,x^2+\int_0^x P(y)\,dy. \end{equation} \tag{2.15} $$
That the series on the right-hand side of (2.3) converges absolutely and uniformly for $x>1$ follows from the estimates
$$ \begin{equation} |J_2(t)| \leqslant \frac{C}{\sqrt{t}}\quad\text{for } t>1,\quad\text{and}\quad r(n)=O(n^\varepsilon). \end{equation} \tag{2.16} $$
Theorem 2 is proved.
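
Since $A(y)$ is a step function, $P_1(x)=\displaystyle\int_0^x P(y)\,dy=\sum_{0\leqslant n\leqslant x}r(n)(x-n)-\pi x^2/2$ can be evaluated exactly, so Landau's formula (2.3) can be tested directly. The sketch below (illustrative only; it assumes NumPy and SciPy are available, and the truncation point is an arbitrary choice) compares the exact value with a partial sum of the series, which converges only like $M^{-1/4}$, so the agreement is approximate.

    import math
    import numpy as np
    from scipy.special import jv          # jv(2, z) = J_2(z)

    M = 200000                            # truncation point of the series (2.3)
    r = np.zeros(M + 1)
    for a in range(-math.isqrt(M), math.isqrt(M) + 1):
        b = np.arange(-math.isqrt(M - a * a), math.isqrt(M - a * a) + 1)
        np.add.at(r, a * a + b * b, 1)
    n = np.arange(1, M + 1)

    def P1_exact(x):
        # P_1(x) = sum_{0 <= m <= x} r(m)(x - m) - pi x^2 / 2  (A is a step function)
        m = np.arange(0, int(x) + 1)
        return float(np.sum(r[m] * (x - m))) - math.pi * x * x / 2

    def P1_landau(x):
        # partial sum of the right-hand side of (2.3)
        return (x / math.pi) * float(np.sum(r[n] / n * jv(2, 2 * math.pi * np.sqrt(n * x))))

    for x in [20.3, 50.7, 100.2]:
        print(x, round(P1_exact(x), 3), round(P1_landau(x), 3))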

3. Landau–Hardy identity

In this section we show that the quantity $P(x)$ satisfies a relation which we call the Landau–Hardy identity. Along with Landau’s formula, this identity underlies the proof of Voronoi’s formula (see § 4).

Consider the quantities

$$ \begin{equation} \begin{aligned} \, \overline{A}(x)&=\frac{1}{2}\bigl(A(x+0)+A(x-0)\bigr), \\ \overline{P}(x)&=\overline{A}(x)-\pi x, \\ P_X(x)&=\sqrt{x}\,\sum_{1 \leqslant n \leqslant X} \frac{r(n)}{\sqrt{n}}\,J_1(2\pi \sqrt{nx}\,). \end{aligned} \end{equation} \tag{3.1} $$

Theorem 3 (Landau–Hardy identity). For all $x>0$ and $X>0$,

$$ \begin{equation} \begin{aligned} \, \nonumber \overline{P}(x)&=P_X(x)+J_0(2\pi\sqrt{Xx}\,)- \sqrt{\frac{x}{X}}\,P(X)J_1(2\pi \sqrt{xX}\,) \\ &\qquad-\frac{\pi x}{X}\,P_1(X)J_2(2\pi \sqrt{Xx}\,)+ x\sum_{n=1}^\infty \frac{r(n)}{n}\int_{2\pi \sqrt{Xx}}^\infty J_3(t) J_2\biggl(t\,\sqrt{\frac{n}{x}}\,\biggr)\,dt, \end{aligned} \end{equation} \tag{3.2} $$
where $P_1(x)$ is defined in (2.2).

Proof. Consider the sum
$$ \begin{equation} S_X(x):=\sqrt{x}\,\sum_{0\leqslant n \leqslant X} \frac{r(n)}{\sqrt{n}}\,J_1(2\pi \sqrt{nx}\,)=\pi x+P_X(x). \end{equation} \tag{3.3} $$
Here the last equality holds since
$$ \begin{equation*} \lim_{n \to 0}\frac{J_1(2\pi\sqrt{nx}\,)}{\sqrt{n}}=\pi\sqrt{x}\,. \end{equation*} \notag $$
Let us apply Abel’s partial summation formula, which asserts that
$$ \begin{equation} S_X(x)=\sum_{0\leqslant n \leqslant X}r(n)f(n)=A(X)f(X)- \int_0^XA(\xi)\, \frac{d}{d\xi}\,f(\xi)\,d\xi. \end{equation} \tag{3.4} $$
In our case
$$ \begin{equation*} f(\xi)=\sqrt{\frac{x}{\xi}}\,J_1(2\pi \sqrt{x\xi}\,), \end{equation*} \notag $$
and so, using the relations
$$ \begin{equation*} \frac{d}{dz}\,\frac{J_1(z)}{z}=-\frac{1}{z}\,J_2(z),\quad J_1(z)=-\frac{d}{dz}\,J_0(z),\quad\text{and}\quad zJ_2(z)=J_1(z)-z\,\frac{d}{dz}\,J_1(z), \end{equation*} \notag $$
we find that
$$ \begin{equation} S_X(x)=1-J_0(2\pi\sqrt{Xx}\,)+P(X)\sqrt{\frac{x}{X}}\,J_1(2\pi \sqrt{Xx}\,)+ B(x,X), \end{equation} \tag{3.5} $$
where
$$ \begin{equation*} B(x,X)=\pi x\int_0^X P(\xi)\, \frac{1}{\xi}\,J_2(2\pi \sqrt{\xi x}\,)\,d\xi. \end{equation*} \notag $$
Consider the quantity $B(x,X)$. We have $P(\xi)=dP_1(\xi)/d\xi$, and integrating by parts and employing Landau’s formula (2.3) we obtain
$$ \begin{equation} B(x,X)=\frac{\pi x}{X}\,J_2(2\pi\sqrt{Xx}\,)P_1(X)- x\sum_{j=1}^\infty \frac{r(j)}{j}\int_0^X \xi J_2(2\pi \sqrt{j\xi}\,)\, \frac{d}{d\xi}\,\frac{J_2(2\pi \sqrt{\xi x}\,)}{\xi}\,d\xi. \end{equation} \tag{3.6} $$
We have
$$ \begin{equation*} \frac{d}{dz}\,\frac{J_2(z)}{z^2}=-\frac{1}{z^2}\,J_3(z), \end{equation*} \notag $$
and therefore the integral on the right-hand side of (3.6) can be written as
$$ \begin{equation*} \int_0^X \xi J_2(2\pi \sqrt{j\xi}\,)\,\frac{d}{d\xi}\, \frac{J_2(2\pi \sqrt{\xi x}\,)}{\xi}\,d\xi= -\int_0^{2\pi \sqrt{Xx}} J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz. \end{equation*} \notag $$
Consequently,
$$ \begin{equation} \begin{aligned} \, \nonumber S_X(x)&=1-J_0(2\pi \sqrt{Xx}\,)+ P(X)J_1(2\pi \sqrt{Xx}\,)\sqrt{\frac{x}{X}} \\ &\qquad+\frac{\pi x}{X}\,J_2(2\pi \sqrt{Xx}\,)P_1(X)+\Sigma(X,x), \end{aligned} \end{equation} \tag{3.7} $$
where
$$ \begin{equation*} \Sigma(X,x)=x\sum_{j=1}^\infty \frac{r(j)}{j}\int_0^{2\pi\sqrt{Xx}} J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz. \end{equation*} \notag $$
Note that
$$ \begin{equation*} \int_0^\infty J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz=\begin{cases} j/x, & j<x, \\ 1/2, & j=x, \\ 0, & j>x. \end{cases} \end{equation*} \notag $$
Since
$$ \begin{equation*} \begin{aligned} \, \Sigma(X,x)&=x\sum_{j=1}^\infty \frac{r(j)}{j}\int_0^\infty J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz \\ &\qquad-x\sum_{j=1}^\infty \frac{r(j)}{j}\int_{2\pi \sqrt{Xx}}^\infty J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz, \end{aligned} \end{equation*} \notag $$
we have the equality
$$ \begin{equation} \begin{aligned} \, \nonumber \Sigma(X,x)&=x\sum_{j=1}^\infty \frac{r(j)}{j}\begin{cases} j/x, & j<x, \\ 1/2, & j=x, \\ 0, & j>x, \end{cases} \\ &\qquad-x\sum_{j=1}^\infty \frac{r(j)}{j} \int_{2\pi\sqrt{Xx}}^\infty J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz. \end{aligned} \end{equation} \tag{3.8} $$
Substituting (3.8) into (3.7) we find that
$$ \begin{equation} \begin{aligned} \, \nonumber \overline{A}(x)&=\pi x+P_X(x) +J_0(2\pi \sqrt{Xx}\,) \\ \nonumber &\qquad-P(X)\sqrt{\frac{x}{X}}\,J_1(2\pi \sqrt{Xx}\,)- \frac{\pi x}{X}\,P_1(X)J_2(2\pi \sqrt{Xx}\,) \\ &\qquad+x\sum_{j=1}^\infty \frac{r(j)}{j}\int_{2\pi \sqrt{Xx}}^\infty\, J_2\biggl(z\sqrt{\frac{j}{x}}\,\biggr)J_3(z)\,dz, \end{aligned} \end{equation} \tag{3.9} $$
and now the Landau–Hardy identity (3.2) follows from the definition (3.1) of $\overline{P}(x)$. This proves Theorem 3.

4. Voronoi’s formula

In this section we prove Voronoi’s formula (see (9) in the introduction); we also describe the character of convergence of the series on the right-hand side of this formula.

Theorem 4 (Voronoi’s formula). For any $x>1$,

$$ \begin{equation} \overline{P}(x)=\sqrt{x}\,\sum_{j=1}^\infty\frac{r(j)}{\sqrt{j}}\, J_1(2\pi \sqrt{jx}\,), \end{equation} \tag{4.1} $$
where $\overline{P}(x)$ is defined in (3.1) and the series on the right-hand side of (4.1) converges uniformly on any closed interval $[a,b]$ which does not contain points $x=n$ ($r(n) \ne 0$).

In addition, if

$$ \begin{equation} P(x)=O(x^\gamma),\qquad \frac{1}{4}<\gamma \leqslant \frac{1}{2}\,, \end{equation} \tag{4.2} $$
then for all $N \geqslant 3$ and $x \geqslant 3$,
$$ \begin{equation} \overline{P}(x)=\sqrt{x}\,\sum_{j=1}^N\frac{r(j)}{\sqrt{j}}\, J_1(2\pi \sqrt{jx}\,)+\Delta(N,x) \end{equation} \tag{4.3} $$
and
$$ \begin{equation} \Delta(N,x)\ll\frac{1}{N^{1/4}x^{1/4}}+\frac{x^{3/4}}{N^{3/4-\gamma}}+ \begin{cases} 0, & x=n, \\ \overline{r}(x) \min\biggl\{1,\dfrac{\sqrt{x}}{\delta\sqrt{N}}\biggr\}, & x \ne n, \end{cases} \end{equation} \tag{4.4} $$
where $\overline{r}(x)$ is defined in (4) and $\delta=\|x\|$ is the distance of $x$ to the nearest integer $n$ such that $r(n) \ne 0$.

Proof. From the Landau–Hardy identity (3.2), for $X=N$ we obtain
$$ \begin{equation} \overline{P}(x)=\sqrt{x}\sum_{1 \leqslant n \leqslant N}\frac{r(n)}{\sqrt{n}}\, J_1(2\pi \sqrt{nx}\,)+\Delta(N,x), \end{equation} \tag{4.5} $$
where
$$ \begin{equation} \begin{aligned} \, \nonumber \Delta(N,x)&=J_0(2\pi \sqrt{Nx}\,)- \sqrt{\frac{x}{N}}\,P(N)J_1(2\pi \sqrt{N x}\,)- \frac{\pi x}{N}\,P_1(N)J_2(2\pi\sqrt{Nx}\,) \\ &\qquad+x\sum_{n=1}^\infty\frac{r(n)}{n}\int_{2\pi \sqrt{Nx}}^\infty J_3(t)J_2\biggl(t\,\sqrt{\frac{n}{x}}\,\biggr)\,dt. \end{aligned} \end{equation} \tag{4.6} $$
We set
$$ \begin{equation} \operatorname{sgn}x=\begin{cases} 1, & x>0, \\ 0, & x=0, \\ -1, & x<0; \end{cases}\quad\text{and}\quad g(x) \ \text{is the integer nearest to $x$}. \end{equation} \tag{4.7} $$
For all $\lambda>1$ and $x>1$,
$$ \begin{equation} \begin{aligned} \, \biggl|J_2(\lambda x)-\sqrt{\frac{2}{\pi}}\, \frac{\cos(\lambda x-5\pi/4)}{\sqrt{\lambda x}}\biggr|&\leqslant \frac{C}{\lambda^{3/2}x^{3/2}}\,, \\ \biggl|J_3(x)-\sqrt{\frac{2}{\pi}}\, \frac{\cos(x+\pi/4)}{\sqrt{x}}\biggr|&\leqslant \frac{C}{x^{3/2}}\,. \end{aligned} \end{equation} \tag{4.8} $$
These estimates follow from the fact that for $x>1$ the asymptotic series for $J_\nu(x)$ are enveloping. Using (4.8) we have
$$ \begin{equation} \biggl|\int_\omega^\infty J_2(\lambda x)J_3(x)\,dx+ \frac{\operatorname{sgn}(\lambda-1)}{\pi\sqrt{\lambda}} \int_{|\lambda-1|\omega}^\infty\frac{\sin t}{t}\,dt\biggr|\leqslant \frac{C}{\omega\sqrt{\lambda}}\biggl(1+\frac{1}{\lambda}\biggr) \end{equation} \tag{4.9} $$
for $\lambda>1$. From Landau’s formula (2.3), under condition (4.2) we obtain
$$ \begin{equation} \biggl|\frac{x}{N}\,P_1(N)J_2(2\pi \sqrt{Nx}\,)\,\biggr|\leqslant C\frac{x^{3/4}}{N^{3/4-\gamma}}\,. \end{equation} \tag{4.10} $$
Since
$$ \begin{equation} \biggl|J_0(2\pi \sqrt{Nx}\,)-\sqrt{\frac{2}{\pi}}\, \frac{\cos(2\pi \sqrt{Nx}-\pi/4)} {(2\pi \sqrt{Nx})^{1/2}}\biggr|\leqslant \frac{C}{N^{3/4}x^{3/4}}\,, \end{equation} \tag{4.11} $$
under condition (4.2) we have
$$ \begin{equation} \biggl|\sqrt{\frac{x}{N}}\,P(N)J_1(2\pi \sqrt{Nx}\,)\biggr|\leqslant C\frac{x^{3/4}}{N^{3/4-\gamma}}\,. \end{equation} \tag{4.12} $$
Employing these estimates in (4.6) we obtain
$$ \begin{equation} \begin{aligned} \, \nonumber \Delta(N,x)&=\frac{1}{\pi x^{1/4}}\, \frac{\cos(2\pi \sqrt{Nx}-\pi/4)}{N^{1/4}} \\ \nonumber &\qquad+r(g(x))\,\frac{x}{g(x)}\operatorname{sgn} \bigl(g(x)-x\bigr)\operatorname{si}\bigl(2\pi (\sqrt{g(x)}-\sqrt{x}\,)\sqrt{N}\,\bigr) \\ &\qquad +O\biggl(\frac{x^{3/4}}{N^{3/4-\gamma}}\biggr). \end{aligned} \end{equation} \tag{4.13} $$
Here we have used the standard notation
$$ \begin{equation*} \operatorname{si}x=-\int_x^\infty \frac{\sin t}{t}\,dt, \end{equation*} \notag $$
and $g(x)$ was defined in (4.7). Formula (4.13) describes the Gibbs phenomenon for the series (4.1) at the points $x=n$. Recall that
$$ \begin{equation} |\!\operatorname{si}x|<\frac{\pi}{2}\quad\text{and}\quad \operatorname{si}x=-\frac{\cos x}{x}+O\biggl(\frac{1}{x^2}\biggr)\quad (x \to \infty). \end{equation} \tag{4.14} $$
In view of (4.14) estimate (4.4) follows directly from (4.13). Letting $N \to \infty$ we arrive at Voronoi’s formula (4.1). That the series (4.1) converges uniformly for $x \ne n$ ($r(n) \ne 0$) follows from estimate (4.4). This proves Theorem 4.
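
The non-uniform character of the convergence near the points $x=n$ with $r(n)\ne 0$ is easy to observe numerically. In the sketch below (illustrative only; NumPy and SciPy assumed, and the truncation points are arbitrary) partial sums of the series (4.1) are evaluated near $x=25$, where $r(25)=12$ and $A$ has a jump, and compared with $\overline{P}(x)$; in agreement with (4.4), the convergence is slow and deteriorates as $x$ approaches the jump.

    import math
    import numpy as np
    from scipy.special import jv            # jv(1, z) = J_1(z)

    M = 200000
    r = np.zeros(M + 1)
    for a in range(-math.isqrt(M), math.isqrt(M) + 1):
        b = np.arange(-math.isqrt(M - a * a), math.isqrt(M - a * a) + 1)
        np.add.at(r, a * a + b * b, 1)

    def S(x, N):
        # partial sum of the Voronoi series (4.1) with N terms
        jj = np.arange(1, N + 1)
        return math.sqrt(x) * float(np.sum(r[1:N + 1] / np.sqrt(jj)
                                           * jv(1, 2 * math.pi * np.sqrt(jj * x))))

    def P_bar(x):
        # (A(x+0) + A(x-0))/2 - pi*x, computed from the lattice-point count
        nf = math.floor(x)
        A_plus = float(np.sum(r[:nf + 1]))
        A_minus = A_plus - (float(r[nf]) if x == nf else 0.0)
        return (A_plus + A_minus) / 2 - math.pi * x

    for x in [24.9, 25.0, 25.1]:
        print(x, round(P_bar(x), 3),
              [round(S(x, N), 3) for N in (10**3, 10**4, 10**5)])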

Since

$$ \begin{equation} \begin{gathered} \, J_1(x)=\biggl(\frac{2}{\pi x}\biggr)^{1/2} \cos\biggl(x-\frac{3\pi}{4}\biggr)+\Delta J_1(x), \\ |\Delta J_1(x)| \leqslant \frac{3\sqrt{2}}{\sqrt{\pi}\,8}\, \frac{1}{x^{3/2}}\,, \end{gathered} \end{equation} \tag{4.15} $$
it follows from (4.3) that
$$ \begin{equation} \overline{P}(x)=-\frac{x^{1/4}}{\pi}\sum_{j=1}^N\frac{r(j)}{j^{3/4}} \cos\biggl(2\pi\sqrt{jx}+\frac{\pi}{4}\biggr)+ \Delta_N P(x), \end{equation} \tag{4.16} $$
where
$$ \begin{equation} \Delta_N P(x)\ll \frac{1}{x^{1/4}N^{1/4}}+\frac{x^{3/4}}{N^{3/4-\gamma}}+ \overline{r}(x)\min\biggl\{1,\frac{\sqrt{x}}{\delta\sqrt{N}}\biggr\}, \end{equation} \tag{4.17} $$
and for $x=n$ the third term in this formula can be dropped.

Formula (4.16) with estimate (4.17) is a variant of the truncated Voronoi formula (see the next section, § 5).

5. Truncated Voronoi formula

In this section we prove the truncated Voronoi formula — this formula is the main ingredient in the study of the quantity $P(x)$.

Theorem 5 (truncated Voronoi formula). For all $N \geqslant 3$ and $x \geqslant 3$,

$$ \begin{equation} P(x)=-\frac{x^{1/4}}{\pi}\sum_{n=1}^N \frac{r(n)}{n^{3/4}} \cos\biggl(2\pi \sqrt{nx}+\frac{\pi}{4}\biggr)+ \Delta_N P(x) \end{equation} \tag{5.1} $$
and
$$ \begin{equation} \Delta_N P(x) \ll \sqrt{\frac{x}{N}}\,\overline{r}(x)+ \overline{r}(N)\ln N, \end{equation} \tag{5.2} $$
where $\overline{r}(x)$ was defined in (4).

Corollary. For all $x \geqslant 3$,

$$ \begin{equation} |P(x)| \leqslant Cx^{1/3}\overline r^{1/3}(x). \end{equation} \tag{5.3} $$

Proof of Theorem 5. Consider the function $F(s)=\zeta_k(s)$ (see (16) and (11) in the introduction). This function extends analytically to the whole plane $s=\sigma+it$ as a meromorphic function with a single simple pole at $s=1$. It also satisfies the functional equation
$$ \begin{equation} \begin{aligned} \, F(s)=\varphi(s)F(1-s),\qquad \varphi(s)=\pi^{2s-2}\sin(\pi s)\, \bigl(\Gamma(1-s)\bigr)^2, \end{aligned} \end{equation} \tag{5.4} $$
where $\Gamma(\,\cdot\,)$ is the gamma function. By the truncated Perron formula (A.35), (A.36)
$$ \begin{equation} \begin{gathered} \, A(x)=I(x)+R_1(x), \\ \text{where } I(x)=\frac{1}{2\pi i}\int_{b-iT}^{b+iT}F(s)x^s\,\frac{ds}{s}\qquad (b>1), \end{gathered} \end{equation} \tag{5.5} $$
and
$$ \begin{equation} R_1(x) \ll \frac{x^b}{T(b-1)}+\overline{r}(x)\biggl(\frac{x}{T}+1\biggr) =:\overline{R}_1. \end{equation} \tag{5.6} $$

We assume that

$$ \begin{equation} 1<b<C_1,\quad x>3,\quad T>b. \end{equation} \tag{5.7} $$
Consider the rectangle $D$ with vertices
$$ \begin{equation*} b-iT,\quad b+iT,\quad -a+iT,\quad\text{and}\quad -a-iT\qquad (a>0) \end{equation*} \notag $$
on the plane $s=\sigma+it$. Inside this rectangle the function
$$ \begin{equation} f(s)=F(s)\,\frac{x^s}{s} \end{equation} \tag{5.8} $$
has poles at the points $s_1=0$ and $s_2=1$; in addition,
$$ \begin{equation*} \sum_i \operatorname{res}(f(s_i))=-1+\pi x. \end{equation*} \notag $$
By the residue theorem, from (5.5) we obtain
$$ \begin{equation} \begin{aligned} \, \nonumber P(x)=A(x)-\pi x&=\frac{1}{2\pi i}\int_{-a-iT}^{-a+iT}f(s)\,ds \\ &\qquad+\frac{1}{2\pi i}\int_{-a+iT}^{b+iT}f(s)\,ds- \frac{1}{2\pi i}\int_{-a-iT}^{b-iT}f(s)\,ds+R_1. \end{aligned} \end{equation} \tag{5.9} $$
Next, $F(s)=4\zeta(s)L(s|\chi_4)$ (see (16)), and so by the Phragmén–Lindelöf theorem, for $t\gg 1$ and $s=\sigma+it$ we have
$$ \begin{equation} |\zeta(s)L(s|\chi_4)| \ll \bigl(\zeta(b)+\zeta(a+1)\bigr) t^{(2a+1)(b-\sigma)/(a+b)}. \end{equation} \tag{5.10} $$
Hence the second integral on the right-hand side of (5.9) is estimated as
$$ \begin{equation} \biggl|\frac{1}{2\pi i}\int_{-a+iT}^{b+iT}f(s)\,ds\biggr|\ll \bigl(\zeta(b)+\zeta(a+1)\bigr)\biggl(\frac{x^b}{T}+\frac{T^{2a}}{x^a}\biggr) =:\overline{R}_2. \end{equation} \tag{5.11} $$
The third integral on the right-hand side of (5.9) is estimated similarly to the second, and therefore
$$ \begin{equation} \begin{gathered} \, P(x)=\frac{1}{2\pi i}\int_{-a-iT}^{-a+iT}f(s)\,ds+ R_1+R_2, \\ R_1 \ll \overline{R}_1,\quad R_2 \ll \overline{R}_2, \end{gathered} \end{equation} \tag{5.12} $$
where the quantities $\overline{R}_1$ and $\overline{R}_2$ were defined in (5.6) and (5.11).

Consider the integral on the right-hand side of (5.12). Using formula (11) from the introduction and employing the functional equation (5.4), we find that

$$ \begin{equation} \begin{gathered} \, \frac{1}{2\pi i}\int_{-a-iT}^{-a+iT}f(s)\,ds=\sum_{n=1}^\infty r(n)I_n, \\ I_n=\frac{1}{2\pi i}\int_{-a-iT}^{-a+iT} \frac{\varphi(s)}{n^{1-s}}\,x^s\,\frac{ds}{s}\,, \end{gathered} \end{equation} \tag{5.13} $$
and therefore
$$ \begin{equation} P(x)=\sum_{n=1}^N r(n)I_n+R_1+R_2+R_3, \end{equation} \tag{5.14} $$
where
$$ \begin{equation} R_3=\sum_{n=N+1}^\infty r(n)I_n. \end{equation} \tag{5.15} $$
Let us estimate $R_3$. We write the integral $I_n$ (see (5.13)) in the form
$$ \begin{equation} \begin{aligned} \, \nonumber I_n&=\frac{1}{2\pi i}\,\frac{1}{n^{1+a}x^a}\biggl[\,\int_{-1}^{+1}+ \int_{+1}^T+\int_{-T}^{-1}\biggr]\varphi(-a+it) \frac{(xn)^{it}}{-a+it}\,dt \\ &=: I_n^{(1)}+I_n^{(2)}+I_n^{(3)}. \end{aligned} \end{equation} \tag{5.16} $$
We have $|\Gamma(x+iy)| \leqslant |\Gamma(x)|$, and so, in view of the explicit form (5.4) of the function $\varphi(s)$ we have the estimate
$$ \begin{equation} |I_n^{(1)}|\leqslant \frac{C}{n^{1+a}x^a}\,|\!\ln a|. \end{equation} \tag{5.17} $$
We estimate the integral $I_n^{(2)}$. By Stirling’s formula,
$$ \begin{equation*} \bigl(\Gamma(1+a-it)\bigr)^2=Ct^{1+2a}e^{-\pi t}\, e^{-2it\ln t+2it} \biggl(1+O\biggl(\frac{1}{t}\biggr)\biggr) \end{equation*} \notag $$
and therefore (see the definition (5.4) of the function $\varphi(s)$)
$$ \begin{equation} I_n^{(2)}=\frac{C}{n^{1+a}x^a}\int_1^T t^{2a}e^{iS(t)} \biggl(1+O\biggl(\frac{1}{t}\biggr)\biggr)\,dt, \end{equation} \tag{5.18} $$
where
$$ \begin{equation*} S(t)=-2t\ln t+2t+t\ln(xn). \end{equation*} \notag $$
The integral on the right-hand side of (5.18) is estimated using the formula
$$ \begin{equation} \biggl|\int_a^b f(x)e^{iS(x)}\,dx\biggr| \leqslant C\,\frac{\overline{f}}{A}\,m[(S^{(1)})^{-1}]\,m[f], \end{equation} \tag{5.19} $$
which holds for
$$ \begin{equation*} |S^{(1)}(x)| \geqslant A\quad\text{and}\quad |f(x)|\leqslant \overline{f}\qquad (x \in [a,b]); \end{equation*} \notag $$
here $m[g]$ is the number of intervals of monotonicity of the function $g$ on the interval $[a,b]$. In the case under consideration
$$ \begin{equation*} |S^{(1)}(t)| \geqslant \ln\frac{xn}{t^2}\,. \end{equation*} \notag $$
Hence, using (5.19) we find that
$$ \begin{equation} |I_n^{(2)}|\leqslant C\frac{1}{n^{1+a}x^a}\, \frac{T^{2a}}{|\!\ln(xn/T^2)|}\,. \end{equation} \tag{5.20} $$
The integral $I_n^{(3)}$ is estimated similarly to $I_n^{(2)}$. Hence (see (5.20), (5.17), and (5.16)) we have the estimate
$$ \begin{equation} |I_n| \leqslant C\biggl[\frac{|\!\ln a|}{n^{1+a}}+\frac{T^{2a}}{n^{1+a}x^a}\, \frac{1}{|\!\ln(xn/T^2)|}\biggr]. \end{equation} \tag{5.21} $$
Let $T$ satisfy
$$ \begin{equation} \frac{T^2}{x}=N+\frac{1}{2}\,. \end{equation} \tag{5.22} $$
Then from (5.21) and (5.15) we obtain
$$ \begin{equation} |R_3| \leqslant C\biggl[\frac{|\!\ln a|}{x^aaN^a}+\frac{T^{2a}}{x^aN^a} \overline{r}(N)\ln N\biggr]=:\overline{R}_3. \end{equation} \tag{5.23} $$

Let us return to (5.14). It remains to consider the integral $I_n$ (see (5.13)) for $n \leqslant N$. We have $\Gamma(1-s)=-s\Gamma(-s)$, and so

$$ \begin{equation*} I_n=\frac{(-1)}{\pi^2 n}\,\frac{1}{2\pi i}\int_{-a-iT}^{-a+iT}(\pi^{2}nx)^s \sin(\pi s)\Gamma(1-s)\Gamma(-s)\,ds. \end{equation*} \notag $$
Changing to the variable $w=1-s$ and using the formula
$$ \begin{equation*} \sin(\pi w)\Gamma(w)=\pi\,\frac{1}{\Gamma(1-w)}\,, \end{equation*} \notag $$
we find that
$$ \begin{equation} \begin{gathered} \, I_n=(-1)\,\frac{\pi x}{2}\,\frac{1}{2\pi i} \int_{2+2a-2iT}^{2+2a+2iT} g(s)\,ds, \\ g(s)=(\pi \sqrt{nx}\,)^{-s}\, \frac{\Gamma(s/2-1)}{\Gamma(1-s/2)}\,. \end{gathered} \end{equation} \tag{5.24} $$
Note that
$$ \begin{equation*} \int_0^\infty x^{-1}J_1(x)x^s\,\frac{dx}{x}= -2^{s-2}\,\frac{\Gamma(s/2-1)}{\Gamma(1-s/2)}\,,\quad\text{if}\ \ 0<\sigma<\frac{5}{2}\,. \end{equation*} \notag $$
From the inversion formula for the Mellin transform we obtain
$$ \begin{equation} \frac{1}{2\pi i}\int_{\sigma_0-i\infty}^{\sigma_0+i\infty}(-1)\,2^{s-2}\, \frac{\Gamma(s/2-1)}{\Gamma(1-s/2)}\,x^{-s}\,ds= x^{-1}J_1(x) \end{equation} \tag{5.25} $$
provided that $0<\sigma_0<3/2$. Since the abscissa of integration in (5.24) is $2+2a>3/2$, we cannot apply (5.25) directly.

The function $g(s)$ in (5.24) is holomorphic in the rectangle $D$ with vertices

$$ \begin{equation*} s=2+2a-2iT,\quad s=2+2a+2iT,\quad s=\sigma_0+2iT,\quad s=\sigma_0-2iT, \end{equation*} \notag $$
and so, using (5.25) we find that
$$ \begin{equation} I_n=\sqrt{\frac{x}{n}}\,J_1(2\pi \sqrt{nx}\,)+\Delta I_n. \end{equation} \tag{5.26} $$
In this formula
$$ \begin{equation} \begin{aligned} \, \nonumber \Delta I_n&=\frac{\pi x}{2}\,\frac{1}{2\pi i} \biggl(\int_{2T}^\infty g(\sigma_0+it)\,dt+ \int_{-2T}^{-\infty} g(\sigma_0+it)\,dt \\ \nonumber &\qquad+\int_{\sigma_0}^{2+2a}g(\sigma+2iT)\,d\sigma- \int_{\sigma_0}^{2+2a}g(\sigma-2iT)\,d\sigma\biggr) \\ &=: \sum_{\alpha=1}^4 \Delta I_n^{(\alpha)}. \end{aligned} \end{equation} \tag{5.27} $$
From (5.14) we obtain
$$ \begin{equation} P(x)=\sum_{n=1}^N r(n)\sqrt{\frac{x}{n}}\,J_1(2\pi \sqrt{nx}\,)+ R_1+R_2+R_3+R_4. \end{equation} \tag{5.28} $$
In this formula
$$ \begin{equation} R_4=\sum_{n=1}^N r(n)\Delta I_n. \end{equation} \tag{5.29} $$
Let us estimate the quantity $R_4$. Note that, for each of the four integrals $\Delta I_n^{(\alpha)}$ in (5.27),
$$ \begin{equation*} \sigma>\sigma_0\quad\text{and}\quad |t|>2T. \end{equation*} \notag $$
Using Stirling’s formula, we find that in these integrals
$$ \begin{equation} g(s)=C(nx)^{-\sigma/2}\exp\{it\ln(\pi \sqrt{nx}\,)+f(s)\} \biggl(1+O\biggl(\frac{1}{s}\biggr)\biggr), \end{equation} \tag{5.30} $$
where the function $f(s)$ is defined by
$$ \begin{equation} f(s)=\begin{cases} (\sigma-2)\ln t-\pi t+i[t\ln t-t(1+\ln2\,)],& t>0, \\ (\sigma-2)\ln|t|+i[t\ln|t|-t(1+\ln2\,)],& t<0. \end{cases} \end{equation} \tag{5.31} $$
It immediately follows that
$$ \begin{equation} |\Delta I_n^{(1)}|\ll x(\pi x)^{-\sigma_0/2}\, \frac{e^{-2\pi T}}{T^{2-\sigma_0}}\,. \end{equation} \tag{5.32} $$
For $\Delta I_n^{(2)}$ we have
$$ \begin{equation*} |\Delta I_n^{(2)}|\ll \frac{x}{(nx)^{\sigma_0/2}} \biggl|\int_{2T}^\infty \frac{1}{t^{2-\sigma_0}}\, e^{iS(t)}\,dt\biggr|, \end{equation*} \notag $$
where
$$ \begin{equation*} S(t)=t\ln(\pi\sqrt{nx}\,)-t\ln t+t(1+\ln 2). \end{equation*} \notag $$
We have
$$ \begin{equation*} |S^{(1)}(t)|>C\ln\frac{T}{\sqrt{nx}}\,; \end{equation*} \notag $$
hence, using (5.19) we find that for $\sigma_0<1$
$$ \begin{equation} |\Delta I_n^{(2)}|\ll \frac{x}{(nx)^{\sigma_0/2}}\,\frac{1}{T^{2-\sigma_0}} \, \frac{1}{\ln(T/\sqrt{nx}\,)}\,. \end{equation} \tag{5.33} $$
From (5.30) and (5.31) it follows that
$$ \begin{equation} |\Delta I_n^{(3)}|\ll \frac{x}{(nx)^{\sigma_0/2}}\, T^{2a}e^{-2\pi T}. \end{equation} \tag{5.34} $$
To estimate $\Delta I_n^{(4)}$ we note that, in the whole range of $s$ (see (5.24))
$$ \begin{equation*} |g(s)| \leqslant Cn^{-\sigma/2}\,\frac{1}{|t|^{2-\sigma}}. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} |\Delta I_n^{(4)}| \ll \frac{x}{T^2}\int_{\sigma_0}^{2+2a} (nx)^{-\sigma/2}T^\sigma\,d\sigma. \end{equation*} \notag $$
Recalling that $T$ is defined in (5.22), we have
$$ \begin{equation} |\Delta I_n^{(4)}| \ll \frac{x}{T^2}\, N^{1+a}\frac{1}{n^{1+a}}\, \frac{1}{\ln((N+1/2)/n)}\,. \end{equation} \tag{5.35} $$
The estimates for $\Delta I_n^{(1)}$ and $\Delta I_n^{(3)}$ involve the factor $e^{-2\pi T}\sim e^{-2\pi \sqrt{Nx}\,}$. Hence
$$ \begin{equation} \begin{aligned} \, \nonumber |\Delta I_n| &\ll |\Delta I_n^{(2)}|+|\Delta I_n^{(4)}| \\ &\ll \frac{x}{(nx)^{\sigma_0/2}}\,\frac{1}{T^{2-\sigma_0}}\, \frac{1}{\ln((N+1/2)/n)}+\frac{x}{T^2}\,N^{1+a} \frac{1}{\ln((N+1/2)/n)}\,\frac{1}{n^{1+a}}\,, \end{aligned} \end{equation} \tag{5.36} $$
and now it follows from (5.29) that
$$ \begin{equation} |R_4| \ll N^a(\ln N)\overline{r}(x) =: \overline{R}_4. \end{equation} \tag{5.37} $$
We set
$$ \begin{equation} a=\frac{1}{\ln N}\quad\text{and}\quad b=1+\frac{1}{\ln x}\,. \end{equation} \tag{5.38} $$
From (5.36), (5.23), and (5.12) we see that in (5.28)
$$ \begin{equation} \biggl|\,\sum_{i=1}^4 R_i\biggr| \ll \sum_{i=1}^4 \overline{R}_i \ll \sqrt{\frac{x}{N}}\,\overline{r}(x)+\overline{r}(N)\ln N. \end{equation} \tag{5.39} $$
Hence
$$ \begin{equation} P(x)=\sum_{n=1}^N r(n)\sqrt{\frac{x}{n}}\,J_1(2\pi \sqrt{nx}\,)+ \widetilde{\Delta}_N P(x), \end{equation} \tag{5.40} $$
where
$$ \begin{equation} |\widetilde{\Delta}_N P(x)| \ll \sqrt{\frac{x}{N}}\,\overline{r}(x)+ \overline{r}(N)\ln N. \end{equation} \tag{5.41} $$
Now, it suffices to use (4.15) to complete the proof. Theorem 5 is proved.

Let us verify the corollary after Theorem 5 (that is, estimate (5.3)). According to (5.1),

$$ \begin{equation*} |P(x)| \leqslant Cx^{1/4}\sum_{n \leqslant N}\frac{r(n)}{n^{3/4}}+ |\Delta_N P(x)|. \end{equation*} \notag $$
We have
$$ \begin{equation*} \sum_{n \leqslant N}\frac{r(n)}{n^{3/4}} \leqslant CN^{1/4}. \end{equation*} \notag $$
Now, if $N$ satisfies $N^{1/4} x^{1/4}=\sqrt{x/N}\,\overline{r}(x)$, that is,
$$ \begin{equation*} N=x^{1/3}\overline{r}^{4/3}(x), \end{equation*} \notag $$
then we arrive at the required result, which proves the corollary.

In view of (5.3) we can put $\gamma=1/3+\varepsilon$ in (4.2). Now estimates (4.4) assume the form

$$ \begin{equation} \Delta(N,x) \ll \frac{x^{3/4}}{N^{5/12-\varepsilon}}+\overline{r}(x) \min\biggl\{1,\frac{\sqrt{x}}{\delta\sqrt{N}}\biggr\}. \end{equation} \tag{5.42} $$

Estimate (5.2) is stronger than (5.42) (except in the case when $x=n$ and $N>x^{1/2}$).

Comments to Chapter I

1. The regularized Voronoi formula (1.2) was proved in [15].

2. A proof of the multivariate Poisson formula can be found in [19].

3. For the required results on Bessel functions, see [1] and [20].

4. For Perron’s formula, we refer to [21] and [22].

5. The spectral interpretation of the regularized Voronoi formula and its relations to Selberg’s formula were considered in [23].

6. Our proof of Landau’s formula (2.3) follows [1]. For a different proof, see [5].

The Landau–Hardy identity was proved in [24]. Our presentation in § 3 follows [1] and [5].

7. Our proof of Voronoi’s formula in § 4 follows the approach of [1] and [5].

8. The required results from the theory of Dirichlet $L$-functions, including the derivation of the functional equation (5.4), can be found in [4], [25], and [26].

9. Our proof of the truncated Voronoi formula (5.1) follows the scheme of the proof of the truncated Voronoi formula for $\Delta(x)$ in [2] and [3].

10. The truncated Perron formula has extensively been studied (see, for example, [4], [25], and [26]). A variant of this formula suitable for our purposes is proved in Appendix A.

11. For the required properties of special functions, see the books [27]–[29]; for the evaluation of various integrals, see [29].

Chapter II. Behaviour of $P(x)$ on long intervals

Recall that an interval $I \subset [0,2T]$ ($T\gg 1$) is said to be long if its length $|I|$ satisfies $|I|>CT$. In this chapter we study some quantities characterizing the behaviour of the quantity $P(x)$ on long intervals.

6. Moments

The moments $M_k(T)$ of the quantity $P(x)$ are determined by

$$ \begin{equation} M_k(T)=\int_0^T P^k(x)\,dx,\qquad k \geqslant 1. \end{equation} \tag{6.1} $$
The first moment $M_1(T)$ was already considered in § 2. According to (2.3),
$$ \begin{equation*} M_1(T)=P_1(T)=\frac{T}{\pi}\sum_{n=1}^\infty \frac{r(n)}{n}\, J_2(2\pi \sqrt{nT}\,). \end{equation*} \notag $$
Using (4.8) we find that
$$ \begin{equation} M_1(T)=\frac{T^{3/4}}{\pi^2}\sum_{n=1}^\infty \frac{r(n)}{n^{5/4}} \cos\biggl(2\pi \sqrt{nT}-\frac{5\pi}{4}\biggr)+O(T^{1/4}), \end{equation} \tag{6.2} $$
and therefore
$$ \begin{equation*} \begin{gathered} \, \frac{1}{T}\int_0^T P(x)\,dx=\frac{1}{T}\,M_1(T) \leqslant CT^{-1/4}, \\ C=\frac{1}{\pi^2}\sum_{n=1}^\infty \frac{r(n)}{n^{5/4}}\,. \end{gathered} \end{equation*} \notag $$
So the quantity $P(x)$ changes sign quite often on the interval $[0,T]$.
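
This remark is easy to confirm empirically: on each interval $[n,n+1)$ the function $P$ is linear with slope $-\pi$, so its zero crossings inside unit intervals can be counted exactly. A small Python sketch (illustrative only; sign changes occurring at the jumps $x=n$ themselves are ignored here):

    import math

    T = 10**6
    r = [0] * (T + 1)
    for a in range(-math.isqrt(T), math.isqrt(T) + 1):
        bmax = math.isqrt(T - a * a)
        for b in range(-bmax, bmax + 1):
            r[a * a + b * b] += 1

    changes = 0
    A = 0
    for n in range(T):
        A += r[n]                      # A(x) = A(n) on [n, n+1)
        left = A - math.pi * n         # P(n + 0)
        right = A - math.pi * (n + 1)  # P((n+1) - 0)
        if left > 0 > right:           # P crosses zero inside [n, n+1)
            changes += 1
    print("sign changes of P inside unit intervals up to", T, ":", changes)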

Consider the second moment $M_2(T)$.

Theorem 6. The equality

$$ \begin{equation} M_2(T)=\int_0^T P^2(x)\,dx=BT^{3/2}+\Delta M_2(T) \end{equation} \tag{6.3} $$
holds, where
$$ \begin{equation} \Delta M_2(T)=O(T(\ln T)^2)\quad\textit{and}\quad B=\frac{1}{3\pi^2}\sum_{n=1}^\infty \frac{r^2(n)}{n^{3/2}}= 1.69396\ldots\,. \end{equation} \tag{6.4} $$

Corollary. The equality

$$ \begin{equation} P(x)=\Omega(x^{1/4}) \end{equation} \tag{6.5} $$
holds.

This equality means that there is a sequence $x_n \to \infty$ such that $|P(x_n)|>Cx_n^{1/4}$ (see § 7).

Proof of Theorem 6. We consider the quantity
$$ \begin{equation} P_N(x)=-\frac{x^{1/4}}{\pi}\sum_{j=1}^N \frac{r(j)}{j^{3/4}} \cos\biggl(2\pi\sqrt{jx}+\frac{\pi}{4}\biggr) \end{equation} \tag{6.6} $$
and write equality (4.16) as
$$ \begin{equation} P(x)=P_N(x)+\Delta_N P(x), \end{equation} \tag{6.7} $$
where the quantity $\Delta_N P(x)$ was estimated in (4.17).

Thus, we have

$$ \begin{equation} M_2(T)=\int_0^T P_N^2(x)\,dx+\Delta_1 M_2(T). \end{equation} \tag{6.8} $$
In this formula, by Cauchy’s inequality
$$ \begin{equation} \Delta_1 M_2(T) \ll \biggl[\,\int_0^T P_N^2(x)\,dx\biggr]^{1/2} \biggl[\,\int_0^T(\Delta_N P(x))^2\,dx\biggr]^{1/2}+\int_0^T(\Delta_N P(x))^2\,dx. \end{equation} \tag{6.9} $$

Let us estimate the quantity $\displaystyle\int_0^T(\Delta_N P(x))^2\,dx$. From (4.17) it follows that

$$ \begin{equation} \int_0^T(\Delta_N P(x))^2\,dx \ll T^{1/2}+\frac{T^{5/2}}{N^{3/2-2\gamma}}+ \int_0^T \overline{r}^2(x) \biggl[\min\biggl\{1,\frac{\sqrt{x}}{\|x\|\sqrt{N}}\biggr\}\biggr]^2\,dx. \end{equation} \tag{6.10} $$
We have
$$ \begin{equation*} \begin{aligned} \, &\int_0^T \overline{r}^2(x) \biggl[\min\biggl\{1,\frac{\sqrt{x}}{\|x\|\sqrt{N}}\biggr\}\biggr]^2\,dx \\ &\qquad\ll \sum_{0\leqslant n\leqslant T}\biggl[\,\int_{|x-n|<\delta_n} \overline{r}^2(x)\,dx+ \int_{|x-n|>\delta_n} \overline{r}^2(x)\,\frac{x}{\delta_n^2N}\,dx\biggr], \end{aligned} \end{equation*} \notag $$
and so, choosing $\delta_n$ to satisfy
$$ \begin{equation*} \delta_n^2N=N^{1/3}, \end{equation*} \notag $$
we find that
$$ \begin{equation*} \int_0^T \overline{r}^2(x) \biggl[\min\biggl\{1,\frac{\sqrt{x}}{\|x\|\sqrt{N}}\biggr\}\biggr]^2\,dx\ll \frac{T^2\overline{r}^2(T)}{N^{1/3}}\,. \end{equation*} \notag $$
In view of (5.3) we can take $\gamma=1/3+\varepsilon$ in (6.10), and therefore we have the estimate
$$ \begin{equation*} \int_0^T(\Delta_N P(x))^2\,dx \ll T^{1/2}+\frac{T^{5/2}}{N^{5/6}}+ \frac{T^2\overline{r}^2(N)}{N^{1/3}}\,. \end{equation*} \notag $$
Choosing $N=T^a$, where the constant $a$ is sufficiently large, we have
$$ \begin{equation} \int_0^T(\Delta_N P(x))^2\,dx \ll T^{1/2},\qquad N=T^a,\quad a \gg 1. \end{equation} \tag{6.11} $$
We write the integral $\displaystyle\int_0^T P_N^2(x)\,dx$ in the form
$$ \begin{equation} \int_0^T P^2_N(x)\,dx=\frac{2T}{\pi^2}\int_0^{\sqrt{T}}F_N(t)\,dt- \frac{4}{\pi^2}\int_0^{\sqrt{T}} s\biggl[\,\int_0^sF_N(t)\,dt\biggr]\,ds. \end{equation} \tag{6.12} $$

In this formula

$$ \begin{equation*} \begin{aligned} \, F_N(t)&=\biggl[\,\sum_{n=1}^N \frac{r(n)}{n^{3/4}} \cos\biggl(2\pi \sqrt{n}\,t+\frac{\pi}{4}\biggr)\biggr]^2 \\ &=\biggl[\,\sum_{n=1}^N \alpha_n^+ e^{i\lambda_n^+ t}+ \sum_{n=1}^N \alpha_n^- e^{i\lambda_n^- t}\biggr]^2, \end{aligned} \end{equation*} \notag $$
and $\lambda_n^\pm$ and $\alpha_n^\pm$ are given by
$$ \begin{equation*} \lambda_n^\pm=\pm 2\pi\sqrt{n}\quad\text{and}\quad \alpha_n^\pm=\frac{r(n)}{n^{3/4}}\,\frac{1}{2\sqrt2}(1\pm i). \end{equation*} \notag $$
To estimate the integral $\displaystyle\int_0^s F_N(t)\,dt$ we use the Montgomery–Vaughan theorem, according to which
$$ \begin{equation} \begin{gathered} \, \int_0^X\biggl|\,\sum_{n=1}^Na_n e^{i\lambda_n t}\biggr|^2\,dt= \sum_{n=1}^N |a_n|^2X+\Delta_X, \\ |\Delta_X| \leqslant 3\pi\sum_{n=1}^N\frac{|a_n|^2}{\delta_n}\,,\qquad \delta_n=\min_{(n,m)\colon n \ne m}|\lambda_n-\lambda_m|. \end{gathered} \end{equation} \tag{6.13} $$
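As a quick numerical sanity check of (6.13) (an illustration only, not part of the proof), one can take the frequencies $\lambda_n=2\pi\sqrt{n}$ used below together with some fixed real coefficients and compare the integral, the main term, and the admissible size of $\Delta_X$; the values of $X$ and of the coefficients in the sketch are arbitrary choices.

import numpy as np

n = np.arange(1, 51)
lam = 2 * np.pi * np.sqrt(n)                     # the frequencies used in the proof
a = 1.0 / n ** 0.75                              # arbitrary real test coefficients
X = 200.0
t = np.linspace(0.0, X, 800_001)
F = np.zeros_like(t, dtype=complex)
for freq, coef in zip(lam, a):
    F += coef * np.exp(1j * freq * t)
vals = np.abs(F) ** 2
dt = t[1] - t[0]
integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule
main_term = X * np.sum(a ** 2)
gaps = np.array([np.abs(lam[i] - np.delete(lam, i)).min() for i in range(lam.size)])
bound = 3 * np.pi * np.sum(a ** 2 / gaps)        # admissible size of Delta_X in (6.13)
print(integral, main_term, abs(integral - main_term) <= bound)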
In the case we are interested in,
$$ \begin{equation*} a_n=\alpha_n^+,\alpha_n^-,\qquad \lambda_n=\lambda_n^+,\lambda_n^-, \end{equation*} \notag $$
and
$$ \begin{equation*} |\lambda_n-\lambda_m|>\frac{C}{\sqrt{n}}\,. \end{equation*} \notag $$
Since
$$ \begin{equation} \sum_{n=1}^N \frac{r^2(n)}{n} \leqslant C(\ln N)^2, \end{equation} \tag{6.14} $$
we have the equality
$$ \begin{equation} \begin{gathered} \, \int_0^s F_N(t)\,dt=sA_N+O\bigl((\ln N)^2\bigr), \\ A_N=\frac{1}{2}\sum_{n=1}^N \frac{r^2(n)}{n^{3/2}}\,, \end{gathered} \end{equation} \tag{6.15} $$
which implies that
$$ \begin{equation} \begin{gathered} \, \int_0^{\sqrt{T}}s\biggl(\int_0^s F_N(t)\,dt\biggr)\,ds= \frac{A_N}{3}\,T^{3/2}+O\bigl(T(\ln N)^2\bigr), \\ N=T^a, \qquad 1\ll a\leqslant C,\quad C \gg 1. \end{gathered} \end{equation} \tag{6.16} $$
Using (6.12) we have
$$ \begin{equation} \begin{gathered} \, \int_0^T P_N^2(x)\,dx=\frac{2}{3\pi^2}\,T^{3/2}A_N+O\bigl(T(\ln T)^2\bigr), \\ A_N=\frac{1}{2}\sum_{n=1}^\infty\frac{r^2(n)}{n^{3/2}}+O(N^{-1/2}). \end{gathered} \end{equation} \tag{6.17} $$
The estimate
$$ \begin{equation} \Delta M_2(T)=O\bigl(T(\ln T)^2\bigr) \end{equation} \tag{6.18} $$
follows from (6.9), (6.11), and (6.17). This proves Theorem 6.

Using the truncated Voronoi formula (5.1), (5.2), it can easily be shown that

$$ \begin{equation} \int_0^T P^2(x)\,dx=BT^{3/2}+O(T^{5/4+\varepsilon}). \end{equation} \tag{6.19} $$
The above proof of Theorem 6 is the only place in this paper where the truncated Voronoi formula is used in the form (4.16), (4.17). All other results are based on Theorem 5.
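To get a feel for how well the truncated sum (6.6) tracks $P(x)$, here is a small numerical sketch (an illustration, not part of the paper): $P(x)$ is evaluated directly by counting lattice points and compared with $P_N(x)$ at a few non-integer points; the cutoff $N=20000$ and the sample points are arbitrary choices, and the two values should agree up to an error of the size suggested by (4.17).

from math import isqrt, pi, cos, sqrt

def P_exact(x):
    # P(x) = #{(a,b) in Z^2 : a^2 + b^2 <= x} - pi*x, by direct counting
    m = isqrt(int(x))
    return sum(2 * isqrt(int(x) - a * a) + 1 for a in range(-m, m + 1)) - pi * x

def r(j):
    # number of representations j = a^2 + b^2 with integer a, b
    c = 0
    for a in range(-isqrt(j), isqrt(j) + 1):
        b2 = j - a * a
        b = isqrt(b2)
        if b * b == b2:
            c += 2 if b > 0 else 1
    return c

N = 20_000
rs = [r(j) for j in range(1, N + 1)]

def P_trunc(x):
    # the truncated sum P_N(x) of (6.6)
    s = sum(rs[j - 1] / j ** 0.75 * cos(2 * pi * sqrt(j * x) + pi / 4)
            for j in range(1, N + 1))
    return -x ** 0.25 / pi * s

for x in (10000.5, 90000.5, 250000.5):
    print(x, round(P_exact(x), 2), round(P_trunc(x), 2))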

In addition to $M_2(T)$, exact formulae are available for $M_k(T)$, $k=3,4,5$. Namely, it is known that

$$ \begin{equation} M_3(T) =-A_3T^{7/4}+O(T^{3/2+\varepsilon}), \quad A_3>0, \end{equation} \tag{6.20} $$
$$ \begin{equation} M_4(T) =A_4T^2+O(T^{2-2/41+\varepsilon}), \quad A_4>0, \end{equation} \tag{6.21} $$
and
$$ \begin{equation} M_5(T)=-A_5T^{9/4}+O(T^{2-3/28+\varepsilon}),\quad A_5>0, \end{equation} \tag{6.22} $$
and the precise values of the constants $A_k$, $k=3,4,5$, are also known. Note that odd moments are negative. Regarding the absolute moments
$$ \begin{equation} \overline{M}_k(T)=\int_0^T |P(x)|^k\,dx, \end{equation} \tag{6.23} $$
it is known that
$$ \begin{equation} \overline{M}_k(T) \ll T^{1+k/4+\varepsilon},\quad k \leqslant 9, \end{equation} \tag{6.24} $$
and
$$ \begin{equation} \overline{M}_k(T) \ll T^{(35k+38+\varepsilon)/108}, \quad k \geqslant \frac{35}{4}\,. \end{equation} \tag{6.25} $$
However, these results were obtained using the non-trivial bounds
$$ \begin{equation} |P(x)| \ll x^{35/108+\varepsilon}\quad\text{and}\quad |P(x)| \ll x^{7/22+\varepsilon}. \end{equation} \tag{6.26} $$
Consider the relation between estimates for the absolute moments $\overline{M}_k(T)$ for large $k$ and estimates for the quantity $|P(x)|$ we are interested in. Note that if
$$ \begin{equation} |P(x_0)| \gg T^{1/4+\sigma}\qquad (x_0 \in [0,T]), \end{equation} \tag{6.27} $$
then
$$ \begin{equation} |P(x)| \gg T^{1/4+\sigma}\quad\text{for}\ \ |x-x_0| < \Delta(x_0),\ \ \Delta(x_0)=T^{1/4+\sigma-\varepsilon}, \end{equation} \tag{6.28} $$
and therefore
$$ \begin{equation} \overline{M}_k(T) \gg \int_{|x-x_0|<\Delta(x_0)}|P(x)|^k\,dx \gg T^{(1/4+\sigma)(k+1)-\varepsilon}. \end{equation} \tag{6.29} $$
Now the estimate
$$ \begin{equation} \overline{M}_k(T) \ll T^{1+k/4+\varepsilon} \end{equation} \tag{6.30} $$
implies that
$$ \begin{equation} |P(x)| \ll T^{1/4+\sigma},\qquad \sigma \leqslant \frac{3}{4}\,\frac{1}{k+1}+\varepsilon. \end{equation} \tag{6.31} $$
This estimate is non-trivial if
$$ \begin{equation} \frac{3}{4}\,\frac{1}{k+1}<\frac{1}{12},\quad\text{that is}, \quad k>8. \end{equation} \tag{6.32} $$
It follows from (6.31) that to solve the circle problem it suffices to prove estimate (6.30) for arbitrarily large $k$.

7. $\Omega$-estimates

Recall that the Hardy symbols $\Omega$ and $\Omega_\pm$ are defined by:

$$ \begin{equation} \begin{aligned} \, f(x)=\Omega(g(x))\quad &\Longleftrightarrow \quad \limsup_{x \to \infty} \frac{|f(x)|}{g(x)}>0, \\ f(x)=\Omega_+(g(x))\quad &\Longleftrightarrow \quad \limsup_{x \to \infty} \frac{f(x)}{g(x)}>0, \\ f(x)=\Omega_-(g(x))\quad &\Longleftrightarrow \quad \liminf_{x \to \infty} \frac{f(x)}{g(x)}<0. \end{aligned} \end{equation} \tag{7.1} $$
We need some known results. In 1916 Hardy proved that
$$ \begin{equation} P(x)=\begin{cases} \Omega_-\bigl((x\ln x)^{1/4}\ln_2 x\bigr), \\ \Omega_+(x^{1/4}). \end{cases} \end{equation} \tag{7.2} $$
In 1961 the estimate
$$ \begin{equation} P(x)=\Omega_+\bigl(x^{1/4}(\ln_2 x)^{1/4}(\ln_3 x)^{1/4}\bigr), \end{equation} \tag{7.3} $$
was proved, and in 2003 the estimate
$$ \begin{equation} P(x)=\Omega\bigl((x\ln x)^{1/4}(\ln_2 x)^{3(2^{4/3}-1)/4} (\ln_3 x)^{-5/8}\bigr) \end{equation} \tag{7.4} $$
was established. However, it is unknown whether the estimate
$$ \begin{equation*} P(x)=\Omega_+\bigl(x^{1/4}(\ln x)^\lambda\bigr),\qquad \lambda>0, \end{equation*} \notag $$
holds. The following theorem is proved in this section.

Theorem 7. The following estimates hold:

$$ \begin{equation} P(x) =\Omega_-\bigl(x^{1/4}(\ln x)^{1/4}\bigr), \end{equation} \tag{7.5} $$
$$ \begin{equation} P(x) =\Omega\bigl(x^{1/4}(\ln x)^{1/4}\bigr). \end{equation} \tag{7.6} $$

Proof. Estimate (7.6) follows from (7.5). To prove (7.5) we must construct a sequence $x_N$ ($x_N \to \infty$ as $N \to \infty$) such that
$$ \begin{equation} P(x_N) \leqslant -Cx_N^{1/4}(\ln x_N)^{1/4},\qquad C>0,\quad x_N>\overline{x}. \end{equation} \tag{7.7} $$
From Voronoi’s formula (4.1) it follows that
$$ \begin{equation} P(x)=-\frac{x^{1/4}}{\pi}\sum_{n=1}^\infty \frac{r(n)}{n^{3/4}}\cos\biggl(2\pi \sqrt{nx}+ \frac{\pi}{4}\biggr)+O(x^{-1/4}). \end{equation} \tag{7.8} $$

Hardy proved that the series on the right-hand side of (7.8) converges boundedly for $x < C$ (for any $C$).

Consider the function

$$ \begin{equation} F(s)=\sum_{n=1}^\infty\frac{r(n)}{n^{3/4}} \exp\biggl\{2\pi i\sqrt{n}\,s+\frac{\pi}{4}\,i\biggr\},\qquad s=t+i\sigma,\quad \sigma>0. \end{equation} \tag{7.9} $$
By the above result of Hardy,
$$ \begin{equation} P(x)=\lim_{\sigma \to 0+}\biggl(-\frac{x^{1/4}}{\pi} \operatorname{Re}F(\sqrt{x}+i\sigma)\biggr)+O(x^{-1/4}), \end{equation} \tag{7.10} $$
and so it suffices to show that there exist sequences $t_N$ and $\sigma_N$ ($t_N \to \infty$ and $\sigma_N \to 0$ as $N \to \infty$) such that
$$ \begin{equation} \operatorname{Re}F(s_N)\geqslant C(\ln t_N)^{1/4},\qquad s_N=\sigma_N+it_N. \end{equation} \tag{7.11} $$
We write $\operatorname{Re}F(s)$ as
$$ \begin{equation} \operatorname{Re}F(s)=S_1(s)+S_2(s), \end{equation} \tag{7.12} $$
where
$$ \begin{equation} \begin{aligned} \, S_1(s)&=\sum_{n \leqslant N}\frac{r(n)}{n^{3/4}}\, e^{-2\pi\sigma\sqrt{n}}\cos\biggl(2\pi \sqrt{n}\,t+ \frac{\pi }{4}\biggr), \\ S_2(s)&=\sum_{n > N}\frac{r(n)}{n^{3/4}}\, e^{-2\pi\sigma\sqrt{n}}\cos\biggl(2\pi \sqrt{n}\,t+ \frac{\pi }{4}\biggr). \end{aligned} \end{equation} \tag{7.13} $$
To estimate the sum $S_2(s)$ we employ Abel’s formula, which asserts that if
$$ \begin{equation*} \biggl(\sum_{a<n<\xi}a_n\biggr)\bigg/f(\xi)\to 0 \quad\text{as}\quad \xi \to \infty, \end{equation*} \notag $$
then
$$ \begin{equation} \sum_{n>a}a_n f(n)=-\int_a^\infty\biggl(\,\sum_{a<n\leqslant\xi}a_n\biggr)\, \frac{d}{d\xi}\,f(\xi)\,d\xi. \end{equation} \tag{7.14} $$
Here it is assumed that the function $f$ is continuously differentiable. Since
$$ \begin{equation} \sum_{n \leqslant \xi} r(n) \ll \xi, \end{equation} \tag{7.15} $$
we have the estimate
$$ \begin{equation} |S_2(s)|\ll \frac{1}{\sigma^{1/2}}(\sigma\sqrt{N}\,)^{-1/2} e^{-2\pi \sigma\sqrt{N}}. \end{equation} \tag{7.16} $$

If

$$ \begin{equation} \sigma=\sigma_N\quad\text{and}\quad 2\pi\sigma\sqrt{N}=H\quad (H \gg 1), \end{equation} \tag{7.17} $$
then
$$ \begin{equation} |S_2(s)| \ll \frac{1}{(\sigma_N)^{1/2}}\, H^{-1/2}e^{-H}. \end{equation} \tag{7.18} $$
We estimate the quantity $S_1(s_N)$ from below. The required estimate is obtained via Dirichlet’s theorem on Diophantine approximations. According to this theorem, for any $\eta < 1$ and any $N> 1$ there exists $t=t_N$ such that
$$ \begin{equation} \begin{gathered} \, 1< t_N < \biggl(\frac{1}{\eta}\biggr)^N, \\ \{t_N\sqrt{n}\,\}<\eta \qquad (n=1,\dots,N), \end{gathered} \end{equation} \tag{7.19} $$
where $\{x\}$ is the fractional part of $x$. Let $\eta=1/16$. From (7.19) we obtain
$$ \begin{equation} \cos\biggl(2\pi \sqrt{n}\,t_N+\frac{\pi}{4}\biggr)>C\qquad (n<N), \end{equation} \tag{7.20} $$
and thus, by Abel’s formula (3.4) we have
$$ \begin{equation} S_1(s_N) \geqslant C\sum_{n=1}^N \frac{r(n)}{n^{3/4}}\, e^{-2\pi\sigma_N\sqrt{n}}\geqslant \frac{C}{\sigma_N^{1/2}}\,. \end{equation} \tag{7.21} $$
If the constant $H$ in (7.17) is sufficiently large, then from (7.12), (7.21), and (7.16) we obtain
$$ \begin{equation} \operatorname{Re}F(s_N) \geqslant \frac{C}{\sigma_N^{1/2}}= C\frac{1}{\sqrt{H}}\,N^{1/4}. \end{equation} \tag{7.22} $$

By (7.19),

$$ \begin{equation} \ln t_N<N\biggl|\ln\frac{1}{\eta}\biggr|,\quad\text{that is}, \quad N \geqslant C\ln t_N, \end{equation} \tag{7.23} $$
which establishes (7.11) (see (7.22) and (7.23)). This proves Theorem 7.
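The Diophantine step (7.19) is easy to observe experimentally. The sketch below (an illustration under the simplifying assumption that $t_N$ is taken to be a positive integer) searches for the least integer $t$ with $\{t\sqrt{n}\}<\eta$ for all $n\leqslant N$, for $\eta=1/16$ and a few small $N$; the values found are well below the bound $(1/\eta)^N$ from (7.19).

from math import sqrt

def smallest_t(N, eta=1.0 / 16):
    # least positive integer t with frac(t*sqrt(n)) < eta for every n = 1,...,N
    t = 1
    while True:
        if all((t * sqrt(n)) % 1.0 < eta for n in range(1, N + 1)):
            return t
        t += 1

for N in (2, 3, 4, 5):
    t = smallest_t(N)
    print(N, t, [round((t * sqrt(n)) % 1.0, 3) for n in range(1, N + 1)])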

8. Sign changes with exits from the barrier

In this section we prove the following theorem.

Theorem 8. Let $f(x)=ax^{1/4}$ for $x \geqslant x_0$, where $a>0$. Then the function $P(x)\pm f(x)$ changes sign at least once on any interval $[x,x + \Delta x]$ with $\Delta x \geqslant 2b\sqrt{x}$, provided that $a$ and $b$ satisfy

$$ \begin{equation} 4a+\frac{S}{\pi^2 b^2}<\frac{2}{\pi}\,, \end{equation} \tag{8.1} $$
where
$$ \begin{equation} S=\sum_{n=2}^\infty \frac{r(n)}{n^{3/4}}\biggl(\frac{1}{n}+ \frac{1}{2}\,\frac{1}{(\sqrt{n}-1)^2}+ \frac{1}{2}\,\frac{1}{(\sqrt{n}+1)^2}\biggr). \end{equation} \tag{8.2} $$

Note that since $S=13.02876\ldots$ , condition (8.1) is met, for example, for

$$ \begin{equation*} a=0.1\quad\text{and}\quad b=2.4. \end{equation*} \notag $$
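This numerical claim is easy to verify. Below is a small Python sketch (not from the original text): it computes a partial sum of (8.2), whose tail beyond $N$ is of order $N^{-3/4}$, and checks condition (8.1) for $a=0.1$, $b=2.4$.

import numpy as np
from math import pi

N = 100_000
A = int(N ** 0.5)
k = np.arange(-A, A + 1)
# r[n] = #{(a,b) in Z^2 : a^2 + b^2 = n}
r = np.bincount(np.add.outer(k * k, k * k).ravel(), minlength=N + 1)[:N + 1]
n = np.arange(2, N + 1, dtype=float)
S = (r[2:] / n ** 0.75 * (1 / n + 0.5 / (np.sqrt(n) - 1) ** 2
                          + 0.5 / (np.sqrt(n) + 1) ** 2)).sum()
a, b = 0.1, 2.4
print(S, 4 * a + S / (pi ** 2 * b ** 2), 2 / pi)   # S ~ 13.03 and (8.1) holds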

Proof. Consider the quantity
$$ \begin{equation} F(t)=\frac{1}{\sqrt{t}}\bigl(P(t^2)\pm f(t^2)\bigr) \end{equation} \tag{8.3} $$
and the integral
$$ \begin{equation} I(t)=\int_{-1}^1 F(t+bu)K_\tau(u)\,du, \end{equation} \tag{8.4} $$
in which
$$ \begin{equation} K_\tau(u)=(1-|u|)\bigl(1+\tau\sin(2\pi bu)\bigr) \quad (\tau=\pm 1). \end{equation} \tag{8.5} $$
Since
$$ \begin{equation*} F(t+bu)=\frac{P((t+bu)^2)}{\sqrt{t+bu}}\pm a, \end{equation*} \notag $$
we have
$$ \begin{equation} \begin{gathered} \, I(t)=I_P(t)+\Delta I, \\ |\Delta I| \leqslant 4a, \end{gathered} \end{equation} \tag{8.6} $$
and the quantity $I_P(t)$ is defined by
$$ \begin{equation} I_P(t)=\int_{-1}^1\frac{P((t+bu)^2)}{\sqrt{t+bu}}\,K_\tau(u)\,du. \end{equation} \tag{8.7} $$
Now we use the truncated Voronoi formula (5.1), in which
$$ \begin{equation*} \Delta_N P(x) \ll \frac{x^{1/2+\varepsilon}}{\sqrt{N}}+N^\varepsilon, \end{equation*} \notag $$
and
$$ \begin{equation} N=[t^2]=[x] \qquad([x] \text{ is the integer part of $x$)}. \end{equation} \tag{8.8} $$
From (5.1) it follows that
$$ \begin{equation} \begin{gathered} \, I_P(t)=-\frac{1}{\pi}\sum_{j=1}^N\frac{r(j)}{j^{3/4}}\, I_j(t)+\Delta I_P, \\ |\Delta I_P| \ll \frac{t^\varepsilon}{\sqrt{t}}\,, \end{gathered} \end{equation} \tag{8.9} $$
where
$$ \begin{equation} I_j(t)=\int_{-1}^1 \cos\biggl(2\pi \sqrt{j}\,(t+bu)+\frac{\pi}{4}\biggr) (1-|u|)\bigl(1+\tau \sin(2\pi bu)\bigr)\,du. \end{equation} \tag{8.10} $$
The integral $I_j(t)$ can be evaluated explicitly. Namely, for $j=1$,
$$ \begin{equation} \begin{gathered} \, I_1(t)=-\frac{\tau}{2}\sin\biggl(2\pi t+\frac{\pi}{4}\biggr)+\Delta I_1, \\ |\Delta I_1| \leqslant \frac{1}{\pi^2 b^2}\,\frac{9}{8}\,, \end{gathered} \end{equation} \tag{8.11} $$
and for $j \geqslant 2$
$$ \begin{equation} |I_j(t)| \leqslant \frac{1}{\pi^2 b^2}\biggl(\frac{1}{j}+ \frac{1}{2(\sqrt{j}-1)^2}+\frac{1}{2(\sqrt{j}+1)^2}\biggr). \end{equation} \tag{8.12} $$
We have $r(1)=4$, and therefore from (8.6), (8.9), (8.11), and (8.12) it follows that
$$ \begin{equation} \begin{gathered} \, I(t)=I_0(t)+\Delta I, \\ I_0(t)=\frac{2}{\pi}\,\tau\sin\biggl(2\pi t+\frac{\pi}{4}\biggr). \end{gathered} \end{equation} \tag{8.13} $$
For $\Delta I$ we have the estimate
$$ \begin{equation} \Delta I \leqslant 4a+\frac{Ct^\varepsilon}{\sqrt{t}}+ \frac{1}{\pi^2 b^2}\,S, \end{equation} \tag{8.14} $$
where $S$ was defined in (8.2). By condition (8.1), we have
$$ \begin{equation} \begin{gathered} \, I(t)=\frac{2}{\pi}\biggl(\tau\sin\biggl(2\pi t+ \frac{\pi}{4}\biggr)+\varphi(t)\biggr), \\ |\varphi(t)|<1. \end{gathered} \end{equation} \tag{8.15} $$
Thus, the integral $I(t)$ (see (8.4)) changes sign on any interval of length $\Delta t>1/2$. Now the required result holds, since $K_\tau(u) \geqslant 0$ and $x=t^2$. This completes the proof of Theorem 8.

9. Distribution of values of the quantity $P(x)x^{-1/4}$

Given a set $A \subset \mathbb{R}_+$, the relative measure of $A$ is defined by

$$ \begin{equation} \mu_{\rm R}(A)=\lim_{T \to \infty}\frac{1}{T}\,\mu\{(0,T)\cap A\} \end{equation} \tag{9.1} $$
(assuming that the limit exists), where $\mu\{U\}$ is the Lebesgue measure of the set $U$. The relative measure is not countably additive, and so we cannot apply to it probabilistic results on almost everywhere convergence.

Theorem 9. The function $P(x)x^{-1/4}$ has the distribution function

$$ \begin{equation} D(s)=\int_{-\infty}^s\rho(\xi)\,d\xi \end{equation} \tag{9.2} $$
with density $\rho(\xi)$ with respect to the measure (9.1). This means that
$$ \begin{equation} \lim_{T\to\infty}\frac{1}{T}\,\mu\bigl\{x \in [1,T]\colon P(x)x^{-1/4}\in [a,b]\bigr\}=\int_a^b\rho(\xi)\,d\xi. \end{equation} \tag{9.3} $$
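A crude empirical illustration of (9.3) is given by the sketch below (an illustration only; sampling $x$ uniformly from a long finite interval is used here as a stand-in for the relative measure, and the interval and sample size are arbitrary choices): it evaluates $P(x)x^{-1/4}$ at many random points by direct lattice-point counting and prints a coarse histogram, which exhibits the skewed shape described after the theorem.

import random
from math import isqrt, pi

def P(x):
    # P(x) = #{(a,b) in Z^2 : a^2 + b^2 <= x} - pi*x
    m = isqrt(int(x))
    return sum(2 * isqrt(int(x) - a * a) + 1 for a in range(-m, m + 1)) - pi * x

random.seed(0)
xs = [random.uniform(1e4, 2e5) for _ in range(3000)]
samples = [P(x) / x ** 0.25 for x in xs]
for lo in [-3.0 + 0.5 * i for i in range(12)]:
    share = sum(lo <= s < lo + 0.5 for s in samples) / len(samples)
    print(f"[{lo:5.1f},{lo + 0.5:5.1f}): {share:.3f}")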

We do not provide a proof of this theorem (see the comments at the end of the chapter). This is because equality (9.3) does not contain any information about the local behaviour of $P(x)$, since the left-hand side of this equality does not change for $P(x)$ replaced by $P(x)+f(x)$, where

$$ \begin{equation*} f(x)=\begin{cases} x_0^{1/4+\beta}, & x \in [x_0,x_0+\Delta x], \\ 0, & x \notin [x_0,x_0+\Delta x]. \end{cases} \end{equation*} \notag $$
There is also a ‘local’ version of Theorem 9, which reads
$$ \begin{equation} \begin{gathered} \, \lim_{T \to \infty}\frac{1}{\Delta T}\,\mu\bigl\{x \in [T,T+\Delta T]\colon P(x)x^{-1/4}\in [a,b]\bigr\}=\int_a^b \rho(\xi)\,d\xi \\ (\Delta T>CT^{1/2+\varepsilon}). \end{gathered} \end{equation} \tag{9.4} $$

Let us recall some known properties of the functions $\rho(\xi)$ and $D(s)$.

The function $\rho(\xi)$ has a unique maximum at $\xi_0\approx 0.341\ldots$ , and $\rho(\xi_0) \approx 0.25$. In addition, the following estimate holds:

$$ \begin{equation} \rho(\xi) \ll \exp\{-|\xi|^{4-\varepsilon}\}\qquad (|\xi| \to \infty). \end{equation} \tag{9.5} $$
If
$$ \begin{equation} \widetilde{D}(s):=\begin{cases} 1-D(s), & s>0, \\ D(s), & s<0, \end{cases} \end{equation} \tag{9.6} $$
then the following two-sided estimate holds:
$$ \begin{equation} \begin{gathered} \, \exp\biggl\{-C_1\frac{|s|^4}{(\ln|s|)^\beta}\biggr\} \ll \widetilde{D}(s) \ll \exp\biggl\{-C_2\frac{|s|^4}{(\ln|s|)^\beta}\biggr\}, \\ \beta=3(2^{4/3}-1)=4.5595\ldots\,. \end{gathered} \end{equation} \tag{9.7} $$

Comments to Chapter II

1. Our proof of Theorem 6 mainly follows [30], which gives the stronger estimate

$$ \begin{equation*} \Delta M_2(T)=O\bigl(T(\ln T)^{3/2}\ln_2 T\bigr). \end{equation*} \notag $$
This estimate was improved in [31] as follows:
$$ \begin{equation*} \Delta M_2(T)=O(T\ln T \ln_2 T). \end{equation*} \notag $$
Equality (6.13) was established in [32].

2. Estimates for higher moments were considered in [33] and [34]. The above formulae for $M_k(T)$, $k=3,4,5$, are taken from [34].

Estimates (6.24) and (6.25) for absolute moments were obtained in [35] and [36].

3. The first $\Omega$-estimates were obtained in [37], where (7.2) was proved (also see [38]). Estimate (7.3) was verified in [39], and estimate (7.4) in [40]. Our proof of Theorem 7 follows [1].

4. The proof of Theorem 8 (actually, a less precise version of this result) was given in [41]. Our proof follows the method of that paper. Another approach to this problem was proposed by Landau (see [1]).

5. For applications of relative measure in number theory, see [42]. Theorem 9 was proved in [36], and equality (9.4) in [43]. For a proof of estimate (9.5), see [44]. The two-sided estimate (9.7) was proved in [45].

Chapter III. Behaviour of $P(x)$ on short intervals

This chapter discusses estimates for various quantities which characterize the local behaviour of the quantity $P(x)$. In what follows it is assumed that $T \leqslant x \leqslant T+H$ and $H \leqslant T/2$.

10. Local means of the quantities $P^k(x)$

In this section we estimate the quantities

$$ \begin{equation} E_k(T,H)=\frac{1}{2H}\int_{T-H}^{T+H}P^k(x)\,dx. \end{equation} \tag{10.1} $$

Theorem 10. I. The following estimate holds:

$$ \begin{equation} E_1(T,H)\leqslant \begin{cases} C_1\sqrt{\dfrac{T}{H}} &\textit{if}\ H\leqslant C\sqrt{T}\,, \vphantom{\Biggr\}} \\ C_2\dfrac{T^{3/4}}{H} &\textit{if}\ H \leqslant \dfrac{1}{2}T. \end{cases} \end{equation} \tag{10.2} $$

II. The following estimate holds:

$$ \begin{equation} E_2(T,H) \leqslant C\biggl(\sqrt{T}+\frac{T(\ln T)^2}{H}\biggr)\quad \textit{if}\ \ H \leqslant \frac{1}{2}\,T. \end{equation} \tag{10.3} $$

III. The following estimate holds:

$$ \begin{equation} E_4(T,H) \ll T^{1+\varepsilon}+\frac{T^{5/3+\varepsilon}}{H} \quad \textit{if}\ \ T^{1/2} \leqslant H \leqslant \frac{1}{2}\,T. \end{equation} \tag{10.4} $$

IV. The following estimate holds:

$$ \begin{equation} E_6(T,H) \ll \begin{cases} \dfrac{T^{2+\varepsilon}}{\sqrt{H}}&\textit{ if}\ H \geqslant T^{2/3}, \\ T^{3/2+\varepsilon}+\dfrac{T^{7/3+\varepsilon}}{H}&\textit{ if}\ T^{1/2}\leqslant H \leqslant T^{2/3}. \end{cases} \end{equation} \tag{10.5} $$

Proof. I. Let us verify assertion I. Since
$$ \begin{equation*} E_1(T,H)=\frac{1}{2H}\int_0^{T+H}P(x)\,dx-\frac{1}{2H}\int_0^{T-H}P(x)\,dx, \end{equation*} \notag $$
by Landau’s formula (2.3) we have
$$ \begin{equation*} \begin{aligned} \, E_1(T,H)&=\frac{T^{3/4}}{2H\pi^2}\sum_{j=1}^\infty\frac{r(j)}{j^{5/4}} \bigl[\cos\bigl(2\pi \sqrt{j(T-H)}\,\bigr)- \cos \bigl(2\pi \sqrt{j(T+H)}\,\bigr)\bigr] \\ &\qquad+O\biggl(\frac{T^{1/4}}{H}+\frac{1}{T^{1/4}}\biggr). \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation} \begin{aligned} \, \nonumber |E_1(T,H)|&\leqslant \frac{T^{3/4}}{H\pi^2}\sum_{j=1}^\infty \frac{r(j)}{j^{5/4}}\,\bigl|\sin\bigl(2\pi \sqrt{j}\,(\sqrt{T+H}- \sqrt{T-H}\,)\bigr)\bigr| \\ &\qquad+O\biggl(\frac{T^{1/4}}{H}+\frac{1}{T^{1/4}}\biggr). \end{aligned} \end{equation} \tag{10.6} $$
Estimate (10.2) is a direct consequence of (10.6) in view of the obvious inequalities
$$ \begin{equation*} \sqrt{T+H}-\sqrt{T-H} \leqslant \frac{4}{3}\,\frac{H}{\sqrt{T}} \quad\text{for}\ \ H<\frac{T}{2} \end{equation*} \notag $$
and
$$ \begin{equation*} |\!\sin x|<\begin{cases} x, & x \leqslant 1, \\ 1, & x \geqslant 1, \end{cases} \end{equation*} \notag $$
and since
$$ \begin{equation*} \sum_{j \leqslant x}\frac{r(j)}{j^{3/4}}=4\pi x^{1/4}+O(1) \quad\text{and}\quad \sum_{j \geqslant x}\frac{r(j)}{j^{5/4}}=4\pi x^{-1/4}+O(1). \end{equation*} \notag $$
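The first of these asymptotic relations is easy to test numerically; the following sketch (an illustration, not part of the proof, with an arbitrary cut-off $X$) compares the partial sum with the main term $4\pi x^{1/4}$.

import numpy as np
from math import pi

X = 200_000
A = int(X ** 0.5)
k = np.arange(-A, A + 1)
# r[j] = #{(a,b) in Z^2 : a^2 + b^2 = j}
r = np.bincount(np.add.outer(k * k, k * k).ravel(), minlength=X + 1)[:X + 1]
j = np.arange(1, X + 1, dtype=float)
print((r[1:] / j ** 0.75).sum(), 4 * pi * X ** 0.25)   # the two numbers differ by O(1)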

II. Assertion II (inequality (10.3)) is a corollary to Theorem 6, since

$$ \begin{equation*} E_2(T,H)=\frac{1}{2H}\,[B(T+H)^{3/2}-B(T-H)^{3/2}]+O\biggl(\frac{T(\ln T)^2}{H}\biggr) \end{equation*} \notag $$
by Theorem 6 and therefore, for $H \ll T$,
$$ \begin{equation} E_2(T,H)=\frac{3}{2}\, BT^{1/2}+O\biggl(\frac{T(\ln T)^2}{H}+ \frac{H}{\sqrt{T}}\biggr). \end{equation} \tag{10.7} $$

III. Let us prove assertion III of the theorem. In the rest of this section we put

$$ \begin{equation} E(f)=\frac{1}{2H}\int_{T-H}^{T+H}f(x)\,dx. \end{equation} \tag{10.8} $$
We write the truncated Voronoi formula (5.1) as
$$ \begin{equation*} \begin{gathered} \, P(x)=P_{N}(x)+\Delta_{N}(P(x)), \\ \Delta_{N}(P(x)) \ll \frac{x^{1/2+\varepsilon}}{\sqrt{N}}+N^\varepsilon, \end{gathered} \end{equation*} \notag $$
and take
$$ \begin{equation} N=[T]. \end{equation} \tag{10.9} $$
In this case
$$ \begin{equation} P(x) \ll |P_{N}(x)|+T^\varepsilon. \end{equation} \tag{10.10} $$
By Hölder’s inequality,
$$ \begin{equation} \biggl|\,\sum_{i=1}^k x_i\biggr|^p \leqslant k^{p-1}\sum_{i=1}^k|x_i|^p,\qquad p>1. \end{equation} \tag{10.11} $$
Hence
$$ \begin{equation} P^4(x) \ll |P_{N}(x)|^4+T^\varepsilon \end{equation} \tag{10.12} $$
and now, by (5.1) we have
$$ \begin{equation} P_{N}(x)=-\frac{x^{1/4}}{\pi}\sum_{j=1}^N \frac{r(j)}{j^{3/4}} \cos\biggl(2\pi\sqrt{jx}+\frac{\pi}{4}\biggr). \end{equation} \tag{10.13} $$
We set
$$ \begin{equation} P_{N'N''}(x)=-\frac{x^{1/4}}{\pi} \sum_{N'<j\leqslant N''} \frac{r(j)}{j^{3/4}}\cos\biggl(2\pi\sqrt{jx}+\frac{\pi}{4}\biggr), \end{equation} \tag{10.14} $$
and write $P_{N}(x)$ as
$$ \begin{equation} P_{N}(x)=P_{N_0}(x)+P_{N_0N_1}(x)+P_{N_1N}(x). \end{equation} \tag{10.15} $$
From (10.11), (10.12), and (10.15) we obtain the estimate
$$ \begin{equation} E(P^4) \ll E(P_{N_0}^4)+E(P_{N_0N_1}^4)+E(P_{N_1N}^4)+T^\varepsilon. \end{equation} \tag{10.16} $$
The quantity $N_1$ will be chosen later.

We have

$$ \begin{equation} P_{N_0}(x) \ll x^{1/4}\sum_{j=1}^{N_0} \frac{r(j)}{j^{3/4}} \ll x^{1/4} N_0^{1/4}, \end{equation} \tag{10.17} $$
and so, choosing
$$ \begin{equation} N_0=[T^\varepsilon], \end{equation} \tag{10.18} $$
we find that
$$ \begin{equation} E(P_{N_0}^4) \ll T^{1+\varepsilon}. \end{equation} \tag{10.19} $$
Therefore,
$$ \begin{equation} E(P^4) \ll T^{1+\varepsilon}+E(P_{N_0N_1}^4)+E(P_{N_1N}^4). \end{equation} \tag{10.20} $$

Let us estimate the quantity $E(P_{N_0N_1}^4)$. To do this we write $P_{N_0N_1}(x)$ as

$$ \begin{equation} \begin{aligned} \, P_{N_0N_1}(x)=\sum_{q=0}^{Q-1}P_{M(q),2M(q)}(x),\qquad M(q)=\biggl[\frac{N_1}{2^{q+1}}\biggr]. \end{aligned} \end{equation} \tag{10.21} $$

Let $Q$ be such that

$$ \begin{equation*} M(Q-1)=N_0. \end{equation*} \notag $$
Then
$$ \begin{equation} Q=\biggl[\frac{1}{\ln 2}\ln\frac{N_1}{N_0}\biggr] \ll \ln N_1. \end{equation} \tag{10.22} $$
Using (10.11) we have
$$ \begin{equation*} P_{N_0N_1}^4(x) \ll (\ln N_1)^3\sum_{q=0}^{Q-1}P^4_{M(q),2M(q)}(x), \end{equation*} \notag $$
and therefore
$$ \begin{equation} E(P_{N_0N_1}^4) \ll (\ln N_1)^4\max_{N_0<M \leqslant N_1}E(P^4_{M,2M}). \end{equation} \tag{10.23} $$
Let us estimate the quantity $E(P^4_{M,2M})$.

Consider a function $\varphi \in C_0^\infty(\mathbb{R})$ such that

$$ \begin{equation} \begin{gathered} \, 0 \leqslant \varphi(x) \leqslant 1, \\ \begin{aligned} \, \varphi(x)&=1 \quad\text{if}\ \ |x-T|\leqslant H, \\ \varphi(x)&=0 \quad\text{if}\ \ |x-T|\geqslant 2H, \end{aligned} \\ |\varphi^{(p)}(x)|\leqslant C(p)H^{-p}. \end{gathered} \end{equation} \tag{10.24} $$
Such a function exists (see the comments at the end of this chapter). We also have
$$ \begin{equation} C(p) \leqslant C\, 2^{2p^2},\qquad p \geqslant 1. \end{equation} \tag{10.25} $$
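One standard construction of such a $\varphi$ (a sketch only; any smooth partition-of-unity construction would do, and the growth of the derivative constants is what (10.25) quantifies) glues two copies of the bump $e^{-1/u}$:

from math import exp

def psi(u):
    # smooth transition: psi = 0 for u <= 0, psi = 1 for u >= 1, C^infinity in between
    f = exp(-1.0 / u) if u > 0 else 0.0
    g = exp(-1.0 / (1.0 - u)) if u < 1 else 0.0
    return f / (f + g)

def phi(x, T, H):
    # equals 1 for |x - T| <= H, vanishes for |x - T| >= 2H, smooth everywhere;
    # its p-th derivative is bounded by C(p) * H**(-p), as required in (10.24)
    return psi((2.0 * H - abs(x - T)) / H)

print([phi(x, 100.0, 10.0) for x in (85.0, 92.0, 105.0, 114.0, 121.0)])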
By the properties of the function $\varphi(x)$,
$$ \begin{equation} E(P_{M,2M}^4) \ll \frac{1}{H}\int_{T-2H}^{T+2H}\varphi(x)P^4_{M,2M}(x)\,dx. \end{equation} \tag{10.26} $$
We set
$$ \begin{equation} a_4(j)=\prod_{\alpha=1}^4 \frac{r(j_\alpha)}{j_\alpha^{3/4}}\,,\qquad j=(j_1,j_2,j_3,j_4), \end{equation} \tag{10.27} $$
and write $P^4_{M,2M}(x)$ as
$$ \begin{equation} P^4_{M,2M}(x)=\frac{x}{\pi^4}\sum_{M<j_\alpha \leqslant 2M} a_4(j)\prod_{\alpha=1}^4 \cos\biggl(2\pi\sqrt{j_\alpha x}+ \frac{\pi}{4}\biggr). \end{equation} \tag{10.28} $$
Note that
$$ \begin{equation*} \begin{aligned} \, \prod_{\alpha=1}^4 \cos\biggl(2\pi\sqrt{j_\alpha x}+\frac{\pi}{4}\biggr)&= -\frac{1}{8}\cos\bigl(2\pi\lambda_1(j)\sqrt{x}\,\bigr)- \frac{1}{2}\sin\bigl(2\pi\lambda_2(j)\sqrt{x}\,\bigr) \\ &\qquad+\frac{3}{8}\cos\bigl(2\pi\lambda_3(j)\sqrt{x}\,\bigr), \end{aligned} \end{equation*} \notag $$
where the quantities $\lambda_k(j)$, $k=1,2,3$, are defined by
$$ \begin{equation} \begin{aligned} \, \lambda_1(j)&=\sqrt{j_1}+\sqrt{j_2}+\sqrt{j_3}+\sqrt{j_4}\,, \\ \lambda_2(j)&=\sqrt{j_1}+\sqrt{j_2}+\sqrt{j_3}-\sqrt{j_4}\,, \\ \lambda_3(j)&=\sqrt{j_1}+\sqrt{j_2}-\sqrt{j_3}-\sqrt{j_4}\,. \end{aligned} \end{equation} \tag{10.29} $$
Now from (10.26) we obtain
$$ \begin{equation} E(P_{M,2M}^4) \ll |E^{(1)}(M)|+|E^{(2)}(M)|+|E^{(3)}(M)|, \end{equation} \tag{10.30} $$
where
$$ \begin{equation} E^{(k)}(M)=\sum_{M<j_\alpha \leqslant 2M}\frac{a_4(j)}{H} \int_{T-2H}^{T+2H}x\varphi(x)e^{2\pi i\lambda_k(j)\sqrt{x}}\,dx,\quad k=1,2,3. \end{equation} \tag{10.31} $$
Consider the integral
$$ \begin{equation*} J(\lambda)=\int_{T-2H}^{T+2H}x\varphi(x)e^{i\lambda\sqrt{x}}\,dx. \end{equation*} \notag $$
Integrating by parts, for all $p \geqslant 1$ we have
$$ \begin{equation} |J(\lambda)|\leqslant \frac{2^p}{|\lambda|^p}\int_{T-2H}^{T+2H} |\varphi_p(x)|\,dx, \end{equation} \tag{10.32} $$
where
$$ \begin{equation*} \varphi_p(x)=\frac{d}{dx}\bigl(\sqrt{x}\,\varphi_{p-1}(x)\bigr)\quad (p \geqslant 2)\quad\text{and}\quad \varphi_1(x)=\frac{d}{dx}\bigl(x^{3/2}\varphi(x)\bigr). \end{equation*} \notag $$
Using (10.24) we obtain the estimate
$$ \begin{equation} |\varphi_p(x)|\leqslant CT\cdot\biggl(\frac{T^{1/2}}{H}\biggr)^p C(p) p, \end{equation} \tag{10.33} $$
and therefore, for each $p \geqslant 1$,
$$ \begin{equation} |J(\lambda)|\leqslant CH\cdot\biggl(\frac{2}{\lambda}\biggr)^p p\, C(p)\, T\cdot\biggl(\frac{T^{1/2}}{H}\biggr)^p. \end{equation} \tag{10.34} $$
We have
$$ \begin{equation} \lambda_k(j)\geqslant \sqrt{M}\,,\qquad k=1,2, \end{equation} \tag{10.35} $$
and so, using (10.34), for $k=1,2$ we have
$$ \begin{equation} E^{(k)}(M)\leqslant \sum_{M<j_\alpha \leqslant 2M}a_4(j) C\,\frac{pC(p)}{\pi^p}\,T\cdot \biggl(\frac{T^{1/2}}{H\sqrt{M}}\biggr)^p \end{equation} \tag{10.36} $$
for each $p \geqslant 1$. By (10.27)
$$ \begin{equation} a_4(j) \ll M^{-3+\varepsilon}. \end{equation} \tag{10.37} $$
In the case we are interested in
$$ \begin{equation} M \geqslant N_0=T^{\varepsilon}. \end{equation} \tag{10.38} $$
Assume that
$$ \begin{equation} H \geqslant T^{1/2}. \end{equation} \tag{10.39} $$
Now from (10.36) we obtain
$$ \begin{equation} E^{(k)}(M)\ll \sum_{M<j_\alpha \leqslant 2M}\frac{M^{\varepsilon}}{M^3}\, \frac{pC(p)}{\pi^p}\,T\cdot T^{-\varepsilon p},\qquad k=1,2. \end{equation} \tag{10.40} $$
Setting $p=q\varepsilon^{-1}$, where $q$ is sufficiently large, we have
$$ \begin{equation} E^{(k)}(M) \ll T^{1+\varepsilon},\qquad k=1,2. \end{equation} \tag{10.41} $$
Thus, in (10.30) it remains to estimate $E^{(3)}(M)$.

We write this quantity as

$$ \begin{equation} E^{(3)}(M)=E_0^{(3)}(M)+E_1^{(3)}(M), \end{equation} \tag{10.42} $$
where (see (10.31))
$$ \begin{equation} E_0^{(3)}(M)=\sum_{\substack{M<j_\alpha\leqslant 2M \\ |\lambda_3(j)|\leqslant\Delta}}\frac{a_4(j)}{H}\int_{T-2H}^{T+2H} x\varphi(x)e^{2\pi i\lambda_3(j)\sqrt{x}}\,dx \end{equation} \tag{10.43} $$
and
$$ \begin{equation} E_1^{(3)}(M)=\sum_{\substack{M<j_\alpha\leqslant 2M \\ |\lambda_3(j)|>\Delta}}\frac{a_4(j)}{H}\int_{T-2H}^{T+2H} x\varphi(x)e^{2\pi i\lambda_3(j)\sqrt{x}}\,dx. \end{equation} \tag{10.44} $$
The quantity $E_1^{(3)}(M)$ is estimated similarly to $E^{(k)}(M)$, $k=1,2$. Indeed, choosing
$$ \begin{equation} \Delta=\frac{T^{1/2+\varepsilon}}{H} \end{equation} \tag{10.45} $$
we have
$$ \begin{equation} E_1^{(3)}(M) \ll T^{1+\varepsilon}. \end{equation} \tag{10.46} $$

Let us estimate $E_0^{(3)}(M)$. The integral on the right-hand side of (10.43) is trivially estimated as follows:

$$ \begin{equation} \biggl|\int_{T-2H}^{T+2H}x\varphi(x)e^{2\pi i\lambda_3(j)\sqrt{x}}\,dx\biggr| \leqslant CTH. \end{equation} \tag{10.47} $$
Using estimate (10.37) we find that
$$ \begin{equation} E_0^{(3)}(M) \ll \frac{TM^\varepsilon}{M^3}\mathcal{N}_4(M,\Delta), \end{equation} \tag{10.48} $$
where $\mathcal{N}_4(M,\Delta)$ is the number of solutions of the inequality
$$ \begin{equation} \bigl|\sqrt{j_1}+\sqrt{j_2}-\sqrt{j_3}-\sqrt{j_4}\,\bigr|\leqslant\Delta \end{equation} \tag{10.49} $$
such that $M<j_\alpha\leqslant 2M$, $j_\alpha \in \mathbb{Z}$. It is known that
$$ \begin{equation} \mathcal{N}_4(M,\Delta) \ll M^{2+\varepsilon}+\Delta M^{7/2+\varepsilon}. \end{equation} \tag{10.50} $$
This estimate is sharp, and therefore
$$ \begin{equation} \mathcal{N}_4(M,\Delta) \ll \Delta M^{7/2+\varepsilon}\qquad (\Delta \gg M^{-3/2}). \end{equation} \tag{10.51} $$
For $\Delta \geqslant M^{-1/2}$ the proof of this estimate is easy. Let
$$ \begin{equation*} A=A(j_1,j_2,j_3)=\sqrt{j_1}+\sqrt{j_2}-\sqrt{j_3}. \end{equation*} \notag $$
Then
$$ \begin{equation*} (2-\sqrt{2}\,)\sqrt{M}<A<(2\sqrt{2}-1)\sqrt{M}\,. \end{equation*} \notag $$
So, if $|\lambda_3(j)|=|A-\sqrt{j_4}\,|\leqslant\Delta$, then
$$ \begin{equation*} A^2-2A\Delta+\Delta^2 \leqslant j_4 \leqslant A^2+2A\Delta+\Delta^2. \end{equation*} \notag $$
Therefore, for $A\Delta \gg 1$ and fixed $j_1$, $j_2$, and $j_3$, the inequality $|\lambda_3(j)| \leqslant \Delta$ has $\mathcal{N}_1 \asymp \Delta\sqrt{M}$ solutions. The number $\mathcal{N}_2$ of possible values of $A$ is majorized by $CM^3$. Hence
$$ \begin{equation} \mathcal{N}_4(M,\Delta) \leqslant C\mathcal{N}_1\mathcal{N}_2 \leqslant C\Delta M^{7/2}\qquad (\Delta \geqslant CM^{-1/2}). \end{equation} \tag{10.52} $$
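The order of magnitude in (10.52) can be checked directly for small $M$. The sketch below (an illustration only; the values of $M$ and $\Delta$ are arbitrary choices) counts the quadruples by sorting the pair sums $\sqrt{j_1}+\sqrt{j_2}$ and compares the count with $\Delta M^{7/2}$ and $M^2$.

from math import sqrt
from bisect import bisect_left, bisect_right

def N4(M, Delta):
    # number of quadruples M < j_a <= 2M with |sqrt(j1)+sqrt(j2)-sqrt(j3)-sqrt(j4)| <= Delta
    sums = sorted(sqrt(j1) + sqrt(j2)
                  for j1 in range(M + 1, 2 * M + 1)
                  for j2 in range(M + 1, 2 * M + 1))
    return sum(bisect_right(sums, s + Delta) - bisect_left(sums, s - Delta) for s in sums)

M = 50
Delta = M ** -0.5
print(N4(M, Delta), Delta * M ** 3.5, M ** 2)   # same order as Delta * M^{7/2}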
Now from (10.45) and (10.48) we obtain
$$ \begin{equation} E_0^{(3)}(M) \ll TM^{1/2+\varepsilon}\Delta \ll \frac{T^{3/2+\varepsilon}}{H}\,M^{1/2}. \end{equation} \tag{10.53} $$

Thus,

$$ \begin{equation} E^{(3)}(M) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H}\,M^{1/2}, \end{equation} \tag{10.54} $$
and now by (10.30)
$$ \begin{equation} E(P_{M,2M}^4) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H}\,M^{1/2}\qquad (H \geqslant T^{1/2}). \end{equation} \tag{10.55} $$
Using this estimate in (10.23), and since $M \leqslant N_1$, we have
$$ \begin{equation} E(P_{N_0N_1}^4) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H}\,N_1^{1/2}. \end{equation} \tag{10.56} $$
In order to estimate $E(P^4)$ it remains (see (10.20)) to estimate $E(P_{N_1N}^4)$ and choose $N_1$.

Let us estimate the quantity $E(P_{N_1N}^4)$. First we note that

$$ \begin{equation} P_{N_1N}(x)=P_N(x)-P_{N_1}(x)=\Delta_{N_1}(P(x))-\Delta_{N}(P(x)) \end{equation} \tag{10.57} $$
and therefore
$$ \begin{equation} P_{N_1N}(x) \ll \frac{x^{1/2+\varepsilon}}{\sqrt{N_1}}\qquad (N_1<N). \end{equation} \tag{10.58} $$
We use the estimate
$$ \begin{equation} E(P_{N_1N}^4) \ll \frac{T^{1+\varepsilon}}{N_1}\,E(P_{N_1N}^2). \end{equation} \tag{10.59} $$
Since
$$ \begin{equation} E(P_{N_1N}^2) \ll (\ln N)^2\max_{N_1<M \leqslant N} E(P_{M,2M}^2), \end{equation} \tag{10.60} $$
we need to estimate $E(P_{M,2M}^2)$. We write this quantity in the form
$$ \begin{equation} E(P_{M,2M}^2)=E_{\rm d}(M)+E_{\rm nd}(M), \end{equation} \tag{10.61} $$
where
$$ \begin{equation*} E_{\rm d}(M)=\frac{1}{\pi^2}\sum_{M < j \leqslant 2M}\frac{r^2(j)}{j^{3/2}}\, \frac{1}{2H}\int_{T-H}^{T+H}x^{1/2}\cos^2\biggl(2\pi\sqrt{jx}+ \frac{\pi}{4}\biggr)\,dx. \end{equation*} \notag $$
The last quantity is easily estimated as follows:
$$ \begin{equation} E_{\rm d}(M) \leqslant C\,\frac{\overline{r}^2(2M)}{M^{3/2}}\, T^{1/2}M\leqslant C\overline{r}^2(M)\frac{T^{1/2}}{M^{1/2}}\,. \end{equation} \tag{10.62} $$
On the other hand
$$ \begin{equation*} E_{\rm nd}(M)=\frac{2}{H\pi^2}\sum_{M < j \leqslant 2M} \frac{r(j)}{j^{3/4}}\sum_{\substack{M < l \leqslant 2M \\ l>j}} \frac{r(l)}{l^{3/4}}\,J_{jl}, \end{equation*} \notag $$
where
$$ \begin{equation*} J_{jl}=\int_{\sqrt{T-H}}^{\sqrt{T+H}} t^2\bigl[\cos\bigl(2\pi(\sqrt{l}-\sqrt{j}\,)t\bigr)- \sin\bigl(2\pi(\sqrt{l}+\sqrt{j}\,)t\bigr)\bigr]\,dt. \end{equation*} \notag $$
Using the second mean-value theorem we find that
$$ \begin{equation*} |J_{jl}| \leqslant C\frac{T}{\sqrt{l}-\sqrt{j}}\,. \end{equation*} \notag $$
Therefore,
$$ \begin{equation} E_\mathrm{nd}(M)\leqslant C\,\frac{T}{H}\,\overline{r}^2(M)\sum_{M < j \leqslant 2M} \frac{1}{j^{3/4}}\,\frac{\ln j}{j^{1/4}} \leqslant C\,\frac{T}{H}\,\overline{r}^2(M)(\ln M)^2. \end{equation} \tag{10.63} $$
So we have
$$ \begin{equation} E(P_{M,2M}^2)\leqslant C\biggl(\overline{r}^2(M)\frac{T^{1/2}}{M^{1/2}}+ \overline{r}^2(M)(\ln M)^2\frac{T}{H}\biggr) \end{equation} \tag{10.64} $$
and now an appeal to (10.59) shows that
$$ \begin{equation} E(P_{N_1N}^4) \ll \frac{T^{1+\varepsilon}}{N_1} \biggl(\frac{T^{1/2}}{N_1^{1/2}}+\frac{T}{H}\biggr). \end{equation} \tag{10.65} $$
Using this estimate, (10.56), and (10.20) we obtain
$$ \begin{equation} E(P^4) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H}\,N_1^{1/2}+ \frac{T^{1+\varepsilon}}{N_1}\biggl(\frac{T^{1/2}}{N_1^{1/2}}+ \frac{T}{H}\biggr) \end{equation} \tag{10.66} $$
for $H \geqslant T^{1/2}$. Let $TH^{-1} \geqslant T^{1/2}N_1^{-1/2}$, that is, let
$$ \begin{equation*} N_1 \geqslant \frac{H^2}{T}\,. \end{equation*} \notag $$
Then
$$ \begin{equation*} E(P^4) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H}\,N_1^{1/2}+ \frac{T^{2+\varepsilon}}{N_1H}\,. \end{equation*} \notag $$
Now, setting $N_1=T^{1/3}$ we obtain the estimate
$$ \begin{equation} E(P^4) \ll T^{1+\varepsilon}+\frac{T^{5/3+\varepsilon}}{H}\qquad (H \leqslant T^{2/3}). \end{equation} \tag{10.67} $$
If $N_1 \leqslant H^2 T^{-1}$, then
$$ \begin{equation*} E(P^4) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H}\,N_1^{1/2}+ \frac{T^{3/2+\varepsilon}}{N_1^{3/2}}\,. \end{equation*} \notag $$
Choosing $N_1=\sqrt{H}$ , we find that
$$ \begin{equation} E(P^4) \ll T^{1+\varepsilon}+\frac{T^{3/2+\varepsilon}}{H^{3/4}}\ll T^{1+\varepsilon}\qquad (H \geqslant T^{2/3}). \end{equation} \tag{10.68} $$
Recall that $H \geqslant T^{1/2}$ (see (10.39)). Now the required result (10.4) follows from (10.68) and (10.67).

IV. Let us prove assertion IV. We argue as in the proof of assertion III. Since $E(P_{N_0}^6) \ll T^{3/2+\varepsilon}$, we have the following analogue of estimate (10.20):

$$ \begin{equation} E(P^6) \ll T^{3/2+\varepsilon}+E(P_{N_0N_1}^6)+E(P_{N_1N}^6); \end{equation} \tag{10.69} $$
in addition, we have (see (10.23))
$$ \begin{equation} E(P_{N_0N_1}^6) \ll (\ln N_1)^6\max_{N_0<M\leqslant N_1}E(P_{M,2M}^6). \end{equation} \tag{10.70} $$
Note that
$$ \begin{equation*} P_{M,2M}^6(x)=\frac{x^{3/2}}{\pi^6}\sum_{M<j_\alpha \leqslant 2M} a_6(j)\prod_{\alpha=1}^6 \cos\biggl(2\pi\sqrt{j_\alpha x}+\frac{\pi}{4}\biggr), \end{equation*} \notag $$
and, by analogy with (10.30),
$$ \begin{equation} E(P_{M,2M}^6) \ll \sum_{k=1}^4 |E^{(k)}(M)|. \end{equation} \tag{10.71} $$
In addition,
$$ \begin{equation} E^{(k)}(M)=\sum_{M<j_\alpha \leqslant 2M}\frac{a_6(j)}{H}\int_{T-2H}^{T+2H} x^{3/2}\varphi(x)e^{2\pi i \lambda_k(j)\sqrt{x}}\,dx, \end{equation} \tag{10.72} $$
where
$$ \begin{equation} \begin{aligned} \, \lambda_1(j)&=\sqrt{j_1}+\sqrt{j_2}+\sqrt{j_3}+\sqrt{j_4}+\sqrt{j_5}+ \sqrt{j_6}\,, \\ \lambda_2(j)&=\sqrt{j_1}+\sqrt{j_2}+\sqrt{j_3}+\sqrt{j_4}+\sqrt{j_5}- \sqrt{j_6}\,, \\ \lambda_3(j)&=\sqrt{j_1}+\sqrt{j_2}+\sqrt{j_3}+\sqrt{j_4}-\sqrt{j_5}- \sqrt{j_6}\,, \\ \lambda_4(j)&=\sqrt{j_1}+\sqrt{j_2}+\sqrt{j_3}-\sqrt{j_4}-\sqrt{j_5}- \sqrt{j_6} \end{aligned} \end{equation} \tag{10.73} $$
and
$$ \begin{equation} a_6(j)=\prod_{\alpha=1}^6 \frac{r(j_\alpha)}{j_\alpha^{3/4}}\,,\qquad j=(j_1,j_2,\dots,j_6). \end{equation} \tag{10.74} $$
Since
$$ \begin{equation*} \lambda_k(j) \geqslant C\sqrt{M}\,,\qquad k=1,2,3, \end{equation*} \notag $$
the quantities $E^{(k)}(M)$, $k=1,2,3$, are estimated similarly to their analogues above (see (10.41)). So, for $H \geqslant T^{1/2}$ (see (10.39))
$$ \begin{equation} E(P_{M,2M}^6) \ll T^{3/2+\varepsilon}+|E^{(4)}(M)|\quad\text{and}\quad E^{(4)}(M) = E_0^{(4)}(M)+E_1^{(4)}(M). \end{equation} \tag{10.75} $$
Similarly to (10.42), the quantity $E_0^{(4)}(M)$ is the part of the sum $E^{(4)}(M)$ (see (10.72)) corresponding to the $j_\alpha$ such that $|\lambda_4(j)| \leqslant \Delta$, while $E_1^{(4)}(M)$ corresponds to the $j_\alpha$ such that $|\lambda_4(j)|> \Delta$. For $\Delta=T^{1/2+\varepsilon}H^{-1}$ (see (10.45)) we have the following analogue of estimate (10.46):
$$ \begin{equation} E_1^{(4)}(M) \ll T^{3/2+\varepsilon}. \end{equation} \tag{10.76} $$
On the other hand (see (10.48))
$$ \begin{equation} E_0^{(4)}(M) \ll \frac{T^{3/2+\varepsilon}}{M^{9/2}}\,\mathcal{N}_6(M,\Delta), \end{equation} \tag{10.77} $$
where $\mathcal{N}_6(M,\Delta)$ is the number of solutions of the inequality
$$ \begin{equation*} |\lambda_4(j)| \leqslant \Delta,\quad\text{where}\ \ \Delta=T^{1/2+\varepsilon}H^{-1},\ \ H \geqslant T^{1/2}. \end{equation*} \notag $$
An analogue of estimate (10.52) reads $\mathcal{N}_6(M,\Delta) \leqslant C\Delta M^{11/2}$. Hence (see (10.53))
$$ \begin{equation} E_0^{(4)}(M) \ll \frac{T^{2+\varepsilon}M}{H} \end{equation} \tag{10.78} $$
and, correspondingly (see (10.56)),
$$ \begin{equation} E(P_{N_0N_1}^6) \ll T^{3/2+\varepsilon}+\frac{T^{2+\varepsilon}N_1}{H}\,. \end{equation} \tag{10.79} $$
It remains to estimate $E(P_{N_1N}^6)$ in (10.69). We start with the estimate
$$ \begin{equation} E(P_{N_1N}^6) \ll \frac{T^{1+\varepsilon}}{N_1}\,E_{N_1N}(P^4). \end{equation} \tag{10.80} $$
Note that
$$ \begin{equation*} E_{N_1N}(P^4) \ll T^{1+\varepsilon}+\frac{T^{5/3+\varepsilon}}{H} \end{equation*} \notag $$
and so
$$ \begin{equation} E(P_{N_1N}^6) \ll \frac{T^{2+\varepsilon}}{N_1}+ \frac{T^{8/3+\varepsilon}}{N_1H}\,. \end{equation} \tag{10.81} $$
In view of (10.81) and (10.79), estimate (10.69) assumes the form
$$ \begin{equation} E(P^6) \ll T^{3/2+\varepsilon}+\frac{T^{2+\varepsilon}N_1}{H}+ \frac{T^{2+\varepsilon}}{N_1}+\frac{T^{8/3+\varepsilon}}{N_1H}\,. \end{equation} \tag{10.82} $$
If $H \leqslant T^{2/3}$, then choosing $N_1=T^{1/3}$ we find that
$$ \begin{equation} E(P^6) \ll T^{3/2+\varepsilon}+\frac{T^{7/3+\varepsilon}}{H}\,. \end{equation} \tag{10.83} $$
For $H \geqslant T^{2/3}$, letting $N_1=\sqrt{H}$ we have
$$ \begin{equation} E(P^6) \ll T^{3/2+\varepsilon}+\frac{T^{2+\varepsilon}}{\sqrt{H}} \ll \frac{T^{2+\varepsilon}}{\sqrt{H}}\,. \end{equation} \tag{10.84} $$

This proves estimate (10.5) and completes the proof of Theorem 10.

Note that the estimate $E(P^6) \ll T^{3/2+\varepsilon}$ for $H \geqslant T^{1-\beta}$, $\beta>0$, is still unproved.

11. Local means of $|P(x)|^m$ and estimates for $|P(x)|$

The absolute local means of the function $|f(x)|^m$ are defined by

$$ \begin{equation} E(|f|^m):=\frac{1}{2H}\int_{T-H}^{T+H} |f(x)|^m\,dx. \end{equation} \tag{11.1} $$

In this section we find a relationship between estimates for the quantities $E(|P|^m)$ and estimates for $|P(x)|$ for $|x-T|<H$ ($H<T$).

We need some definitions. Consider the set

$$ \begin{equation*} \mathcal{K}=\{n \in \mathbb{Z}_+\colon n \geqslant n_0,\, r(n) \ne 0\}. \end{equation*} \notag $$
We assume by definition that
$$ \begin{equation*} P(n)=P(n+0), \quad n\in\mathcal{K} \quad\text{if}\ \ P(n+0)>0, \end{equation*} \notag $$
and
$$ \begin{equation*} |P(n)|=\max \bigl\{ |P(n-0)|, |P(n+0)|\bigr\}, \quad n\in\mathcal{K}, \quad\text{if}\ \ P(n+0)<0. \end{equation*} \notag $$
For $x\not\in\mathcal{K}$ we have $P^{(1)}(x)=-\pi$, and so the maxima of $|P(x)|$ lie in $\mathcal{K}$. Let (see (6.6) and (6.7))
$$ \begin{equation} \begin{gathered} \, P(n)=P_N(n)+\Delta_N P(n), \\ |\Delta_N P(n)| \ll \Delta_N(n),\qquad \Delta_N(n)=\sqrt{\frac{n}{N}}+\overline{r}(N)\ln N. \end{gathered} \end{equation} \tag{11.2} $$

We say that a quantity $f(x)$ changes slowly for $x \geqslant x_0$ if

$$ \begin{equation*} \biggl|f\biggl(x+\frac{1}{2}\,x\biggr)\biggr|<2|f(x)|. \end{equation*} \notag $$
Note that the quantities $\overline{r}(x)$ and $\Delta_N(n)$ change slowly.

Theorem 11. Let $P(T) \gg \overline{r}(T)$,

$$ \begin{equation} E(|P|^m) \leqslant F_m \equiv F_m(T,H), \qquad m \geqslant 1, \end{equation} \tag{11.3} $$
and let
$$ \begin{equation} F_m \leqslant C_m\bigl(H\overline{r}(T)\bigr)^m \end{equation} \tag{11.4} $$
and
$$ \begin{equation} |P(T)| \leqslant C_m^1 H\overline{r}(T). \end{equation} \tag{11.5} $$
Then
$$ \begin{equation} |P(T)| \leqslant C_m^2\bigl(F_m H\overline{r}(T)\bigr)^{1/(m+1)} \end{equation} \tag{11.6} $$
and the constants $C_m$, $C_m^1$, and $C_m^2$ can be explicitly specified.

Proof. Let us introduce the quantity $s(T)$, which characterizes the behaviour of $P(x)$ near $x=T$. By definition,
$$ \begin{equation} s(T)=\min\{s_0,2\}, \end{equation} \tag{11.7} $$
where $s_0=s_0(T)$ is the greatest positive number such that
$$ \begin{equation} |P(x)-P(T)| \leqslant \frac{1}{2}\,|P(T)|, \end{equation} \tag{11.8} $$
provided that
$$ \begin{equation} |x-T| \leqslant C_1\,\frac{|P(T)|^{s_0(T)}}{\overline{r}(T)}=:\delta. \end{equation} \tag{11.9} $$
From Theorem B.1 (see Appendix B) for $A(x)=P(x)$, $\lambda(T)=1/2$, and $a(T)=\overline{r}(T)$ it follows that if
$$ \begin{equation} F_m \leqslant C_2(m,s)\bigl(H\overline{r}(T)\bigr)^{m/s} \end{equation} \tag{11.10} $$
and
$$ \begin{equation} |P(T)| \leqslant C_3(m,s)\bigl(H\overline{r}(T)\bigr)^{1/s}, \end{equation} \tag{11.11} $$
then for $0<s\leqslant s(T)$
$$ \begin{equation} |P(T)|\leqslant C_4(m,s)\bigl(F_m H\overline{r}(T)\bigr)^{1/(m+s)}. \end{equation} \tag{11.12} $$
We have $P(x)=\sum_{n \leqslant x}r(n)-\pi x$ and $\delta \gg 1$, so that
$$ \begin{equation} |P(x)-P(T)| \leqslant \bigl(\pi+\overline{r}(T+\delta)\bigr)\delta. \end{equation} \tag{11.13} $$
By (11.9) we have the inequality $\delta \leqslant C_1|P(T)|^2/\overline{r}(T)$, and so, using the estimate $|P(T)|\leqslant CT^{1/3}\overline{r}^{1/3}(T)$, we find that $\delta \leqslant T$. The quantity $\overline{r}(T)$ varies slowly, and $\pi+\overline{r}(T+\delta) \leqslant 2\overline{r}(T)$. From (11.13) we have
$$ \begin{equation} |P(x)-P(T)| \leqslant 2\overline{r}(T)\delta, \end{equation} \tag{11.14} $$
and therefore
$$ \begin{equation} |P(x)-P(T)| \leqslant \frac{1}{2}\,|P(T)|,\quad\text{if}\ \ |x-T|<\delta, \end{equation} \tag{11.15} $$
where
$$ \begin{equation} \delta=C_1\,\frac{|P(T)|}{\overline{r}(T)}\,,\qquad C_1 \leqslant \frac{1}{4}\,. \end{equation} \tag{11.16} $$

So we have

$$ \begin{equation} s(T) \geqslant 1. \end{equation} \tag{11.17} $$
The required result now follows from (11.10)–(11.12) for $s=1$. This completes the proof of Theorem 11.

From the results of § 10 it follows that

$$ \begin{equation} F_4\leqslant T^{1+\varepsilon}+\frac{T^{5/3+\varepsilon}}{H} \qquad (H \geqslant T^{1/2}) \end{equation} \tag{11.18} $$
and
$$ \begin{equation} F_6\leqslant T^{3/2+\varepsilon}+\frac{T^{7/3+\varepsilon}}{H} \qquad (T^{1/2} \leqslant H \leqslant T^{2/3}). \end{equation} \tag{11.19} $$
In this case (11.6) yields only the trivial estimate $|P(T)|\ll T^{1/3+\varepsilon}$. On the other hand, any refinement of estimates (11.18) and (11.19) would lead to a non-trivial estimate. In particular, if we conjecture that
$$ \begin{equation*} E_4(T,H) \ll T^{1+\varepsilon}\qquad (H \geqslant T^{1/2}), \end{equation*} \notag $$
then it would follow from (11.6) for $H=T^{1/2+\varepsilon}$ that
$$ \begin{equation*} |P(T)| \ll T^{3/10+\varepsilon}. \end{equation*} \notag $$
This estimate is stronger than any estimate available at present.

If we conjecture that $|P(x)-P(T)|\leqslant C\overline{r}(T)\sqrt{\delta}$ , then we could put $s=2$ in (11.10)–(11.12).

If we conjecture that $s(T)$ is 2 for $|P(T)|>CT^{1/4}$, and use estimate (11.12) for $m=2$ and $s=2$, then we would obtain a solution to the circle problem.

12. Jutila integral

The Jutila integral is the second local moment of $P(x+U)-P(x)$. Thus, in this section we consider the quantity

$$ \begin{equation} Q \equiv Q(T,U,H)=\int_T^{T+H}[P(x+U)-P(x)]^2\,dx. \end{equation} \tag{12.1} $$
We assume that
$$ \begin{equation} H \leqslant T,\quad 1 \ll U \ll T,\quad T\gg 1. \end{equation} \tag{12.2} $$
Consider the quantities
$$ \begin{equation} \widetilde{Q}_0=\frac{1}{2\pi^2}\sum_{1 \leqslant n \leqslant N} \frac{r^2(n)}{n^{3/2}}\int_T^{T+H}\sqrt{x}\, \bigl|\exp\{2\pi i\sqrt{n}\,(\sqrt{x+U}-\sqrt{x}\,)\}-1\bigr|^2\,dx \end{equation} \tag{12.3} $$
and
$$ \begin{equation} Q_0=\frac{1}{2\pi^2}\sum_{1 \leqslant n \leqslant N} \frac{r^2(n)}{n^{3/2}}\int_T^{T+H}\sqrt{x}\, \biggl|\exp\biggl\{\pi iU\sqrt{\frac{n}{x}}\,\biggr\}-1\biggr|^2\,dx, \end{equation} \tag{12.4} $$
where $N$ is defined by
$$ \begin{equation} N=\frac{T}{4U}\,. \end{equation} \tag{12.5} $$

Theorem 12. I. If $H \gg T^{1/2}(\ln T)^2$, then

$$ \begin{equation} Q=\widetilde{Q}_0+O\biggl(T(\ln T)^2+H\sqrt{U}\,\ln\frac{T}{U}+ HT^{1/4}\varphi(T)\biggr), \end{equation} \tag{12.6} $$
and if $H \ll T^{1/2}(\ln T)^2$ and $U \ll T(\ln T)^{-2}$, then
$$ \begin{equation} Q=\widetilde{Q}_0+O\bigl(T(\ln T)^2+\sqrt{HT}\,\varphi(T)\ln T\bigr); \end{equation} \tag{12.7} $$
where $\varphi(T)$ is defined by
$$ \begin{equation*} \varphi(T)=\overline{r}(T)\ln T. \end{equation*} \notag $$

II. Assume that $U \ll T^{1/2}$. Then for $H \gg \sqrt{T}\,(\ln T)^2$,

$$ \begin{equation} Q=Q_0+O\bigl(T\varphi(T)\ln T+HU^{1/2}\varphi(T)\ln T\bigr), \end{equation} \tag{12.8} $$
and for $H \ll T^{1/2}(\ln T)^2$,
$$ \begin{equation} Q=Q_0+O\bigl(T(\ln T)^2+ \sqrt{T H}\,\varphi(T)\ln T\bigr). \end{equation} \tag{12.9} $$

III. Assume that one of the two sets of conditions

$$ \begin{equation} H \gg \sqrt{T}\,(\ln T)^2,\quad HU \gg T\varphi(T)\ln T,\quad U \gg \varphi^2(T)(\ln T)^2 \end{equation} \tag{12.10} $$
and
$$ \begin{equation} H \ll \sqrt{T}\,(\ln T)^2,\quad HU \gg T(\ln T)^2,\quad \sqrt{H}\,U \gg \sqrt{T}\,\varphi(T) \ln T, \end{equation} \tag{12.11} $$
is met for $U \ll T^{1/2}$. Then
$$ \begin{equation} Q \asymp HU\ln\frac{\sqrt{T}}{U}\,. \end{equation} \tag{12.12} $$

Proof. We start from the truncated Voronoi formula (5.1), according to which, for $T \leqslant x \leqslant T+H$,
$$ \begin{equation*} P(x)=-\frac{x^{1/4}}{\pi}\sum_{1 \leqslant n \leqslant T}\frac{r(n)}{n^{3/4}} \cos\biggl(2\pi \sqrt{nx}+\frac{\pi}{4}\biggr)+ O(\varphi(T)). \end{equation*} \notag $$
Using this formula we write the quantity $P(x+U)-P(x)$ as
$$ \begin{equation*} P(x+U)-P(x)=-\bigl(\mathcal{A}(x)+\mathcal{B}(x)+\mathcal{C}(x)\bigr), \end{equation*} \notag $$
where $\mathcal{A}(x)$, $\mathcal{B}(x)$, and $\mathcal{C}(x)$ are defined by
$$ \begin{equation} \begin{aligned} \, \mathcal{A}(x)&=\frac{x^{1/4}}{\pi} \sum_{1 \leqslant n \leqslant T}\frac{r(n)}{n^{3/4}} \biggl[\cos\biggl(2\pi \sqrt{n}\,\sqrt{x+U}+ \frac{\pi}{4}\biggr)-\cos\biggl(2\pi \sqrt{nx}+ \frac{\pi}{4}\biggr)\biggr], \\ \mathcal{B}(x)&=\frac{1}{\pi}\bigl((x+U)^{1/4}-x^{1/4}\bigr) \sum_{1 \leqslant n \leqslant T}\frac{r(n)}{n^{3/4}} \cos\biggl(2\pi \sqrt{n}\,\sqrt{x+U}+ \frac{\pi}{4}\biggr), \end{aligned} \end{equation} \tag{12.13} $$
and
$$ \begin{equation*} \mathcal{C}(x)=C\varphi(T). \end{equation*} \notag $$
So we have
$$ \begin{equation} Q=\sum_{i=1}^6 Q^{(i)}, \end{equation} \tag{12.14} $$
where
$$ \begin{equation} \begin{alignedat}{2} Q^{(1)}&=\int_T^{T+H}\mathcal{A}^2(x)\,dx,&\qquad Q^{(4)}&=2\int_T^{T+H}\mathcal{A}(x)\mathcal{B}(x)\,dx, \\ Q^{(2)}&=\int_T^{T+H}\mathcal{B}^2(x)\,dx,&\qquad Q^{(5)}&=2\int_T^{T+H}\mathcal{A}(x)\mathcal{C}(x)\,dx, \\ Q^{(3)}&=\int_T^{T+H}\mathcal{C}^2(x)\,dx,&\qquad Q^{(6)}&=2\int_T^{T+H}\mathcal{B}(x)\mathcal{C}(x)\,dx. \end{alignedat} \end{equation} \tag{12.15} $$
By Cauchy’s inequality,
$$ \begin{equation} Q^{(4)} \ll \sqrt{|Q^{(1)}Q^{(2)}|}\,,\quad Q^{(5)} \ll \sqrt{|Q^{(1)}Q^{(3)}|}\,,\quad\text{and}\quad Q^{(6)} \ll \sqrt{|Q^{(2)}Q^{(3)}|}\,. \end{equation} \tag{12.16} $$

Now we present without proofs a number of technical results required below (see the comments at the end of this chapter). In view of (12.16), to estimate the quantities $|Q^{(i)}|$, $i=4,5,6$, it suffices to estimate $Q^{(1)}$, $Q^{(2)}$, and $Q^{(3)}$.

The integrals involved in $Q^{(i)}$ can be expressed in terms of the functions $k_1(\lambda)$, $k_2(\lambda)$, $\omega_1(\lambda,\mu)$, and $\omega_2(\lambda,\mu)$, where

$$ \begin{equation*} \begin{gathered} \, k_1(\lambda)=\int_T^{T+H}\!\!\sqrt{x}\, \exp\{2\pi i\lambda\sqrt{x}\,\}\,dx,\qquad k_2(\lambda)=\int_T^{T+H}\!\!\sqrt{x}\, \exp\bigl\{2\pi i\lambda\sqrt{x+U}\bigr\}\,dx, \\ \omega_1(\lambda,\mu)=\int_T^{T+H}\sqrt{x}\, \exp\bigl\{2\pi i(\lambda\sqrt{x+U}+\mu\sqrt{x}\,)\bigr\}\,dx, \end{gathered} \end{equation*} \notag $$
and
$$ \begin{equation*} \omega_2(\lambda,\mu)=\int_T^{T+H}\sqrt{x}\, \exp\bigl\{2\pi i(\lambda\sqrt{x+U}-\mu\sqrt{x}\,)\bigr\}\,dx \qquad (\lambda,\mu>0). \end{equation*} \notag $$

Below we require the following estimates:

$$ \begin{equation} \begin{gathered} \, |k_i(\lambda)| \leqslant \frac{4}{5}\,\frac{T}{\lambda}\quad (i=1,2);\qquad |\omega_1(\lambda,\mu)| \leqslant \frac{4}{5}\,\frac{T}{\lambda+\mu}\,; \\ \text{if } m \leqslant N, \, 1 \leqslant m < n\ \text{and}\ (\lambda,\mu)=(\sqrt{m}\,,\sqrt{n}\,)\text{ or} \\ (\lambda,\mu)=(\sqrt{n}\,,\sqrt{m}\,),\ \ \text{then}\ \ \omega_2(\lambda,\mu) \leqslant \frac{8T}{\sqrt{n}-\sqrt{m}}\,. \end{gathered} \end{equation} \tag{12.17} $$
In addition, we also use the relations
$$ \begin{equation} \begin{gathered} \, \sum_{n \leqslant X}r^2(n)=4X\ln X+O(X),\qquad \sum_{n\leqslant X}\frac{r^2(n)}{n} \ll (\ln X)^2, \\ \sum_{n>X}\frac{r^2(n)}{n^{3/2}} \ll \frac{\ln X}{\sqrt{X}}\,,\qquad \sum_{n \leqslant X}\frac{r^2(n)}{\sqrt{n}} \ll X^{1/2}\ln X, \\ \sum_{1 \leqslant n,m \leqslant X}\frac{r(n)r(m)}{n^{3/4}m^{3/4}}\, \frac{1}{\sqrt{n}+\sqrt{m}} \ll \ln X,\qquad \sum_{1 \leqslant m<n \leqslant X} \frac{r(n)r(m)}{n^{3/4}m^{3/4}}\,\frac{1}{\sqrt{n}-\sqrt{m}}\ll (\ln X)^2. \end{gathered} \end{equation} \tag{12.18} $$
The quantity $Q^{(3)}$ is estimated trivially as follows:
$$ \begin{equation} Q^{(3)} \ll H\varphi^2(T)=:\overline{Q}^{(3)}. \end{equation} \tag{12.19} $$
Let us estimate $Q^{(2)}$. We have $(x+U)^{1/4}-x^{1/4} \ll UT^{-3/4}$, and so
$$ \begin{equation} \begin{aligned} \, \nonumber Q^{(2)} &\ll \frac{U^2}{T^{3/2}}\int_T^{T+H} \biggl|\,\sum_{n \leqslant T}\frac{r(n)}{n^{3/4}} \cos\biggl(2\pi \sqrt{n}\,\sqrt{x+U}+ \frac{\pi}{4}\biggr)\biggr|^2\,dx \\ &\ll \frac{U^2}{T^2}\,M(T+U,H), \end{aligned} \end{equation} \tag{12.20} $$
where
$$ \begin{equation} M(T,H)=\int_T^{T+H}\sqrt{x}\,\biggl|\,\sum_{n \leqslant T} \frac{r(n)}{n^{3/4}}\,e^{2\pi i\sqrt{nx}}\biggr|^2\,dx. \end{equation} \tag{12.21} $$
From (12.17) and (12.18) we obtain
$$ \begin{equation} M(T,H) \ll H\sqrt{T}+T(\ln T)^2, \end{equation} \tag{12.22} $$
and therefore
$$ \begin{equation} Q^{(2)} \ll \frac{U^2}{T^2}\bigl(H\sqrt{T}+T(\ln T)^2\bigr)=: \overline{Q}^{(2)}. \end{equation} \tag{12.23} $$
To estimate $Q^{(i)}$, $i=4,5,6$, we need a rough estimate of $Q^{(1)}$. Since
$$ \begin{equation*} Q^{(1)} \ll M(T+U,H)+M(T,H), \end{equation*} \notag $$
we have (see (12.22))
$$ \begin{equation} Q^{(1)}\ll H\sqrt{T}+T(\ln T)^2=:\overline{Q}^{(1)}. \end{equation} \tag{12.24} $$
The quantity $Q^{(1)}$ makes the main contribution to the right-hand side of the two-sided estimate (12.12). Let us show that
$$ \begin{equation} Q^{(1)}=\widetilde{Q}_0+O\biggl(T(\ln T)^2+H\sqrt{U}\,\ln\frac{T}{U}\biggr). \end{equation} \tag{12.25} $$
By definition (12.15),
$$ \begin{equation} \begin{aligned} \, Q^{(1)}&=\int_T^{T+H}\sqrt{x}\,\bigl(\operatorname{Re} W(x)\bigr)^2\,dx, \\ W(x)&=\pi^{-1}e^{i\pi/4}\sum_{n \leqslant T} \frac{r(n)}{n^{3/4}}\bigl(\exp\{2\pi i\sqrt{n(x+U)}\,\}- \exp\{2\pi i\sqrt{nx}\,\}\bigr), \end{aligned} \end{equation} \tag{12.26} $$
and therefore
$$ \begin{equation*} \begin{aligned} \, W(x)&=W_1(x)+W_2(x), \\ W_1(x)&=\pi^{-1}e^{i\pi/4}\sum_{1 \leqslant n \leqslant N} \frac{r(n)}{n^{3/4}}\bigl(\exp\{2\pi i\sqrt{n(x+U)}\,\}- \exp\{2\pi i\sqrt{nx}\,\}\bigr),\end{aligned} \end{equation*} \notag $$
and
$$ \begin{equation*} W_2(x)=\pi^{-1}e^{i\pi/4}\sum_{N < n \leqslant T} \frac{r(n)}{n^{3/4}}\bigl(\exp\{2\pi i\sqrt{n(x+U)}\,\}- \exp\{2\pi i\sqrt{nx}\,\}\bigr). \end{equation*} \notag $$
Since
$$ \begin{equation} \begin{aligned} \, \nonumber (\operatorname{Re}W)^2&=\frac{1}{2}\,|W_1|^2+ \frac{1}{2}\operatorname{Re}(W_1^2)+\operatorname{Re}(W_1W_2) \\ &\qquad+\operatorname{Re}(W_1\overline{W_2}\,)+(\operatorname{Re} W_2)^2, \end{aligned} \end{equation} \tag{12.27} $$
we have
$$ \begin{equation} \begin{aligned} \, \nonumber Q^{(1)}&=\frac{1}{2}\int_T^{T+H}\sqrt{x}\,|W_1|^2\,dx+ \frac{1}{2}\int_T^{T+H}\sqrt{x}\,\operatorname{Re}(W_1^2)\,dx \\ \nonumber &\qquad+\int_T^{T+H}\sqrt{x}\,\operatorname{Re}(W_1W_2)\,dx+ \int_T^{T+H}\sqrt{x}\,\operatorname{Re}(W_1\overline{W_2}\,)\,dx \\ &\qquad+\int_T^{T+H}\sqrt{x}\,(\operatorname{Re} W_2)^2\,dx. \end{aligned} \end{equation} \tag{12.28} $$
Using (12.17) and (12.18) we find that
$$ \begin{equation} \frac{1}{2}\int_T^{T+H}\sqrt{x}\,|W_1|^2\,dx=\widetilde{Q}_0+ O\bigl(T(\ln T)^2\bigr). \end{equation} \tag{12.29} $$
The remaining terms on the right of (12.28) are estimated as follows:
$$ \begin{equation} \begin{gathered} \, \frac{1}{2}\int_T^{T+H}\sqrt{x}\,\operatorname{Re}(W_1^2)\,dx \ll T\ln T,\qquad \int_T^{T+H}\sqrt{x}\,\operatorname{Re}(W_1W_2)\,dx \ll T\ln T, \\ \int_T^{T+H}\sqrt{x}\,\operatorname{Re}(W_1\overline{W_2})\,dx \ll T(\ln T)^2, \\ \int_T^{T+H}\sqrt{x}\,(\operatorname{Re}W_2)^2\,dx \ll H\sqrt{U}\,\ln \frac{T}{U}+T(\ln T)^2. \end{gathered} \end{equation} \tag{12.30} $$
Equality (12.25) follows from (12.30) and (12.29).

By the definitions (12.3) and (12.4),

$$ \begin{equation} \widetilde{Q}_0 \ll H\sqrt{T}\quad\text{and}\quad Q_0 \ll H\sqrt{T}\,. \end{equation} \tag{12.31} $$
We have $\sqrt{x+U}-\sqrt{x}=\dfrac{1}{2}\,\dfrac{U}{\sqrt{x}}+ O\biggl(\dfrac{U^2}{x^{3/2}}\biggr)$, and so, for $U \ll T^{2/3}$,
$$ \begin{equation*} \bigl|\exp\{2\pi i \sqrt{n}\,(\sqrt{x+U}-\sqrt{x}\,)\}-1\bigr|^2= \biggl|\exp\biggl\{\pi iU\sqrt{\frac{n}{x}}\,\biggr\}-1\biggr|^2+ O\biggl(\frac{\sqrt{n}\,U^2}{x^{3/2}}\biggr). \end{equation*} \notag $$
Therefore,
$$ \begin{equation} \widetilde{Q}_0=Q_0+O\biggl(\frac{HU^2}{T}(\ln N)^2\biggr)\qquad (U \ll T^{2/3}). \end{equation} \tag{12.32} $$

Consider the case $U \ll T^{1/2}$. We write $Q_0$ (12.4) as

$$ \begin{equation*} Q_0=\Sigma_1+\Sigma_2, \end{equation*} \notag $$
where
$$ \begin{equation*} \Sigma_1=\frac{1}{2\pi^2}\sum_{n \leqslant T/(4U^2)} \frac{r^2(n)}{n^{3/2}}\int_T^{T+H}\sqrt{x}\, \biggl|\exp\biggl\{\pi iU\sqrt{\frac{n}{x}}\,\biggr\}-1\biggr|^2\,dx, \end{equation*} \notag $$
and
$$ \begin{equation*} \Sigma_2=\frac{1}{2\pi^2}\sum_{T/(4U^2) < n \leqslant N} \frac{r^2(n)}{n^{3/2}}\int_T^{T+H}\sqrt{x}\, \biggl|\exp\biggl\{\pi iU\sqrt{\frac{n}{x}}\,\biggr\}-1\biggr|^2\,dx. \end{equation*} \notag $$
Since
$$ \begin{equation*} \biggl|\exp\biggl\{\pi iU\sqrt{\frac{n}{x}}\,\biggr\}-1\biggr|^2= 4\sin^2\biggl(\frac{\pi}{2}\,U\sqrt{\frac{n}{x}}\,\biggr)\asymp U^2\,\frac{n}{x}\qquad (n< T(4U^2)^{-1}), \end{equation*} \notag $$
we have $\Sigma_1 \asymp HU\ln\dfrac{\sqrt{T}}{U}$ . On the other hand
$$ \begin{equation*} \Sigma_2 \ll \sum_{n> T/U^2}\frac{r^2(n)}{n^{3/2}}\,H\sqrt{T} \ll UH\ln\frac{\sqrt{T}}{U}\,. \end{equation*} \notag $$
Therefore,
$$ \begin{equation} Q_0 \asymp HU\ln\frac{\sqrt{T}}{U}\qquad (U \ll T^{1/2}). \end{equation} \tag{12.33} $$
Now we are ready to complete the proof of the theorem. From (12.14) and (12.25) it follows that
$$ \begin{equation*} Q=\widetilde{Q}_0+O\biggl(T(\ln T)^2+H\sqrt{U}\,\ln\frac{T}{U}\biggr)+ \sum_{i=2}^6Q^{(i)}. \end{equation*} \notag $$
Using (12.16) we obtain
$$ \begin{equation} Q=\widetilde{Q}_0+\Delta Q, \end{equation} \tag{12.34} $$
where
$$ \begin{equation} \begin{aligned} \, \nonumber \Delta Q &\ll T(\ln T)^2+H\sqrt{U}\,\ln\frac{\sqrt{T}}{U}+ |Q^{(2)}|+|Q^{(3)}| \\ &\qquad+|Q^{(1)}Q^{(2)}|^{1/2}+|Q^{(1)}Q^{(3)}|^{1/2}+|Q^{(2)}Q^{(3)}|^{1/2}. \end{aligned} \end{equation} \tag{12.35} $$
To obtain (12.6) and (12.7) it suffices to use (12.19), (12.23), and (12.24). This proves assertion I of the theorem.

For $U \ll T^{1/2}$ an appeal to (12.32) shows that

$$ \begin{equation} \widetilde{Q}_0=Q_0+O\bigl(H(\ln T)^2\bigr). \end{equation} \tag{12.36} $$
So, for $U \ll T^{1/2}$,
$$ \begin{equation} Q=Q_0+\Delta Q. \end{equation} \tag{12.37} $$
Hence $\Delta Q$ obeys estimate (12.35). Equalities (12.8) and (12.9) follow from the above estimates for $Q^{(i)}$, $i=1,2,3$. The two-sided estimate (12.12) is secured by equalities (12.8) and (12.9) and the two-sided estimate (12.33). This proves Theorem 12.

For $H=T$, $U \ll T^{1/2}$, the quantity $Q(T,U) := Q(T,U,T)$ can be evaluated explicitly, namely

$$ \begin{equation} Q(T,U)=\frac{24}{\pi^2}\,UT\ln\frac{\sqrt{T}}{U}+ O\biggl(U^2\sqrt{T}\,(\ln T)^4\ln\frac{\sqrt{T}}{U} +T\sqrt{U}\,\biggl(\ln\frac{\sqrt{T}}{U}\biggr)^2\varphi(T)\biggr). \end{equation} \tag{12.38} $$
From (12.38) it follows that
$$ \begin{equation} |P(T+U)-P(T)|=\Omega\biggl(U\ln\frac{\sqrt{T}}{U}\biggr)\qquad \bigl((\ln T)^2 \ll U \ll T^{1/2}\bigr). \end{equation} \tag{12.39} $$
In parallel with the second moment $Q=2HE_2(T,H)$ it is natural to consider the correlation function
$$ \begin{equation} \mathscr{K} \equiv \mathscr{K}(T,U,H)=\int_T^{T+H}P(x+U)P(x)\,dx. \end{equation} \tag{12.40} $$
Note that (see (12.1))
$$ \begin{equation} Q=\int_T^{T+H}P^2(x)\,dx+\int_T^{T+H}P^2(x+U)\,dx-2\mathscr{K}. \end{equation} \tag{12.41} $$
From (10.7) we obtain
$$ \begin{equation} \int_T^{T+H}P^2(x)\,dx=3BH\sqrt{T}+O\biggl(T\bigl(\ln T\bigr)^2+ \frac{H^2}{\sqrt{T}}\biggr), \end{equation} \tag{12.42} $$
and so
$$ \begin{equation} Q=6BH\sqrt{T}-2\mathscr{K}+O\biggl(T\bigl(\ln T\bigr)^2+ \frac{(H+U)^2}{\sqrt{T}}\biggr). \end{equation} \tag{12.43} $$

For $U \ll T^{1/2}$ we have the two-sided estimate (12.12). Since $Q=o(H\sqrt{T}\,)$ for $U \ll \sqrt{T}\,\biggl(\ln\dfrac{\sqrt{T}}{U}\biggr)^{-1}$, we have

$$ \begin{equation} \mathscr{K}=\int_T^{T+H}P^2(x)\,dx\,\bigl(1+o(1)\bigr). \end{equation} \tag{12.44} $$
This means that for $U \ll T^{1/2}$ the quantities $P(x)$ and $P(x+U)$ are highly correlated.

For $U \gg T^{1/2}$ this correlation ceases to hold. Consider the quantity $k$ defined by

$$ \begin{equation} \mathscr{K}=\frac{1}{2\pi^2}\,U\sqrt{T}\,k. \end{equation} \tag{12.45} $$
From (12.42) it follows that
$$ \begin{equation} -\frac{1}{4}\,A+o(1) \leqslant k \leqslant A+o(1),\qquad A=\sum_{n=1}^\infty\frac{r^2(n)}{n^{3/2}}=50.156\ldots\,. \end{equation} \tag{12.46} $$
According to the above, for $U \ll T^{1/2}$ we have $k\approx A$. For $U \gg T^{1/2}$ the behaviour of $k=k(T,U,H)$ is more involved. In particular, if $V \gg T^{1/2}$ and $H \gg \sqrt{T}\,(\ln T)^2$, then for each $\varphi \in \{\varepsilon\}$, for
$$ \begin{equation} HV \leqslant \frac{T^{3/2}}{2\varphi(T)}\quad\text{and}\quad V \leqslant \frac{T}{8\varphi^2(T)} \end{equation} \tag{12.47} $$
there exist a constant $c=c(\varepsilon)$ and a set $E \subset [V,2V]$ of measure $\mu(E) \geqslant c(\varepsilon)V$ such that for $U \in E$,
$$ \begin{equation} \bigl|k(T,U,H)\bigr|<\varepsilon \end{equation} \tag{12.48} $$
(the condition $\varphi\in\{\varepsilon\}$ means that $\varphi$ is a non-decreasing positive function, and $\varphi(x)=O(x^\varepsilon)$). On the other hand, there exist sets $E_1 \subset [V,2V]$ and $E_2 \subset [V,2V]$ of measure $\mu\{E_i\}>c(\varepsilon)V$, $i=1,2$, such that for all $U_1 \in E_1$ and $U_2 \in E_2$,
$$ \begin{equation} A-\varepsilon \leqslant k(T,U_1,H) \leqslant A+o(1) \end{equation} \tag{12.49} $$
and
$$ \begin{equation} -\frac{3}{4}\,A+o(1) \leqslant k(T,U_2,H) \leqslant -\frac{3}{4}\,A+\varepsilon. \end{equation} \tag{12.50} $$
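The constant $A$ in (12.46) can be approximated directly from its definition; since the tail of the series beyond $N$ is of order $(\ln N)/\sqrt{N}$, convergence is slow. A short Python check (illustrative only):

from math import isqrt

def A_partial(N):
    # partial sum of r^2(n) / n^{3/2}, where r(n) = #{(a, b) in Z^2 : a^2 + b^2 = n}
    r = [0] * (N + 1)
    for a in range(-isqrt(N), isqrt(N) + 1):
        bmax = isqrt(N - a * a)
        for b in range(-bmax, bmax + 1):
            r[a * a + b * b] += 1
    return sum(r[m] ** 2 / m ** 1.5 for m in range(1, N + 1))

for N in (10 ** 3, 10 ** 4, 10 ** 5):
    print(N, A_partial(N))
# the partial sums increase slowly towards the value 50.156... quoted in (12.46)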

13. Modified Jutila integral

The modified Jutila integral is the second local moment of

$$ \begin{equation*} \max_{v \leqslant U}|P(x+v)-P(x)|. \end{equation*} \notag $$
So below we consider the quantity
$$ \begin{equation} Q_{\rm M} \equiv Q_{\rm M}(T,U,H)= \int_T^{T+H}\max_{0 \leqslant v \leqslant U}|P(x+v)-P(x)|^2\,dx. \end{equation} \tag{13.1} $$
As in § 12, we assume that
$$ \begin{equation*} H \leqslant T,\quad 1 \ll U \ll T,\quad T \gg 1,\quad\text{and}\quad T<x<T+U. \end{equation*} \notag $$

Theorem 13. Let $U \ll T^{1/2}$ and $U \gg T^{1/4}(\ln T)^{-1}$ for $H \gg T^{1/2}(\ln T)^2$. Then for all such $T$, $U$, and $H$

$$ \begin{equation} Q_{\rm M} \ll HU(\ln T)^3+\overline{r}^2(T)H^{1/3}(TU)^{2/3} \end{equation} \tag{13.2} $$
and for $H=T$ and $U \gg \overline{r}^6(T)(\ln T)^{-9}$,
$$ \begin{equation} Q_{\rm M} \leqslant C_{\rm M} TU(\ln T)^3. \end{equation} \tag{13.3} $$

Proof. Assume that
$$ \begin{equation} \max_{0 \leqslant v \leqslant U}|P(x+v)-P(x)|=|P(x+v_0(x))-P(x)|. \end{equation} \tag{13.4} $$
Let $\lambda$ and $b$ be such that
$$ \begin{equation} U=2^\lambda b,\quad b \geqslant 1,\quad \lambda \in \mathbb{Z}_+. \end{equation} \tag{13.5} $$
Let $j_0 \equiv j_0(x) \in \mathbb{Z}$ be such that
$$ \begin{equation} j_0 b \leqslant v_0 \leqslant (j_0+1)b,\qquad v_0 \equiv v_0(x) \in \mathbb{Z}. \end{equation} \tag{13.6} $$
We express $j_0$ in the form
$$ \begin{equation} \begin{gathered} \, j_0=2^{\lambda-\mu_1}+2^{\lambda-\mu_2}+\cdots+2^{\lambda-\mu_l}, \\ 0 \leqslant \mu_1 < \mu_2 <\cdots< \mu_l \leqslant \lambda,\qquad \mu_k \equiv\mu_k(x). \end{gathered} \end{equation} \tag{13.7} $$
Since $\overline{r}(x)$ is slowly varying, we have
$$ \begin{equation} \max_{0 \leqslant v \leqslant U}|P(x+v)-P(x)|^2\ll |P(x+j_0b)-P(x)|^2+\overline{r}^2(T)b^2. \end{equation} \tag{13.8} $$
We set
$$ \begin{equation} \sum_{\mu \in S}g(\mu) := g(\mu_1)+g(\mu_2)+\cdots+g(\mu_l) \end{equation} \tag{13.9} $$
and define
$$ \begin{equation} \begin{gathered} \, n_{\mu_1}=0,\quad n_{\mu_2}=2^{\mu_2-\mu_1},\quad n_{\mu_3}=2^{\mu_3-\mu_1}+2^{\mu_3-\mu_2},\quad\ldots\,, \\ n_{\mu_l}=2^{\mu_l-\mu_1}+2^{\mu_l-\mu_2}+\cdots+2^{\mu_l-\mu_{l-1}}. \end{gathered} \end{equation} \tag{13.10} $$
Then for any function $g$ we have the expansion
$$ \begin{equation} g(x+j_0b)-g(x)=\sum_{\mu \in S} \bigl[g\bigl(x+(n_\mu+1)\,2^{\lambda-\mu}b\bigr)- g(x+n_\mu\,2^{\lambda-\mu}b)\bigr]. \end{equation} \tag{13.11} $$
To prove (13.11) it suffices to note that by (13.7) and (13.10) we have
$$ \begin{equation*} j_0=2^{\lambda-\mu_l}(n_{\mu_l}+1) \end{equation*} \notag $$
and
$$ \begin{equation*} 2^{\lambda-\mu_l}n_{\mu_l}=2^{\lambda-\mu_{l-1}}(n_{\mu_{l-1}}+1) \end{equation*} \notag $$
and to write $g(j_0)$ as
$$ \begin{equation*} g(j_0)=\bigl[g\bigl(2^{\lambda-\mu_l}(n_{\mu_l}+1)\bigr)- g(2^{\lambda-\mu_l}n_{\mu_l})\bigr]+g(2^{\lambda-\mu_l}n_{\mu_l}). \end{equation*} \notag $$
In view of this representation, equality (13.11) can be proved by induction on $l$. Applying (13.11) we have
$$ \begin{equation} |P(x+j_0b)-P(x)|\leqslant\sum_{\mu \in S} \bigl|P\bigl(x+(n_\mu+1)\, 2^{\lambda-\mu}b\bigr)- P(x+n_\mu\, 2^{\lambda-\mu}b)\bigr|. \end{equation} \tag{13.12} $$
Using Cauchy’s inequality
$$ \begin{equation*} \sum_{j=1}^N x_j \leqslant \biggl(\,\sum_{i=1}^N x_i^2\biggr)^{1/2}N^{1/2}, \end{equation*} \notag $$
we find that
$$ \begin{equation} |P(x+j_0b)-P(x)|^2 \leqslant \lambda\sum_{\mu \in S} \bigl|P\bigl(x+(n_\mu+1)\, 2^{\lambda-\mu}b\bigr)- P(x+n_\mu\, 2^{\lambda-\mu}b)\bigr|^2. \end{equation} \tag{13.13} $$
Hence
$$ \begin{equation} Q_{\rm M}< \lambda\sum_{\mu \in S}\int_T^{T+H} \bigl|P\bigl(x+(n_\mu+1)\,2^{\lambda-\mu}b\bigr)- P(x+n_\mu\,2^{\lambda-\mu}b)\bigr|^2\,dx+Hb^2\overline{r}^2(T). \end{equation} \tag{13.14} $$
From this estimate we obtain
$$ \begin{equation} Q_{\rm M} \ll \lambda\sum_{0 \leqslant \mu \leqslant \lambda}\, \sum_{n \leqslant 2^{\mu+1}}Q_{n\mu}+H b^2\overline{r}^2(T), \end{equation} \tag{13.15} $$
where
$$ \begin{equation} \begin{gathered} \, Q_{n\mu}=Q(T_1,U_1,H), \\ T_1=T+n\, 2^{\lambda-\mu}b,\qquad U_1=2^{\lambda-\mu}b, \\ Q(T_1,U_1,H)=\int_T^{T+H}[P(x+U_1)-P(x)]^2\,dx. \end{gathered} \end{equation} \tag{13.16} $$
We have $T_1 \leqslant T+2U$ and $U_1 \leqslant U$, and therefore estimate (12.8) applies.

Under the conditions

$$ \begin{equation} b>\varphi^2(T)\quad\text{and}\quad H \ll T^{1/2}(\ln T)^2, \end{equation} \tag{13.17} $$
we have
$$ \begin{equation} Q_{n\mu} \ll HU_1 \ln T+T(\ln T)^2. \end{equation} \tag{13.18} $$
Since
$$ \begin{equation*} \sum_{0\leqslant \mu \leqslant \lambda}\, \sum_{0 \leqslant n \leqslant 2^{\mu+1}}1 \ll \frac{U}{b} \end{equation*} \notag $$
and
$$ \begin{equation*} \sum_{0\leqslant \mu \leqslant \lambda}\, \sum_{0 \leqslant n \leqslant 2^{\mu+1}}U_1 \ll \lambda U, \end{equation*} \notag $$
it follows from (13.16) and (13.18) that
$$ \begin{equation} Q_{\rm M} \ll \lambda^2 (\ln T)HU+T(\ln T)^2\,\frac{U}{b}+ Hb^2\overline{r}^2(T). \end{equation} \tag{13.19} $$
Let $b$ satisfy the equality $TUb^{-1}=Hb^2$, that is, let
$$ \begin{equation} b=\biggl(\frac{TU}{H}\biggr)^{1/3}. \end{equation} \tag{13.20} $$
Since $U \gg T^{1/4}(\ln T)^{-1}$, we have $b>\varphi^2(T)$ and $Ub^{-1} \gg 1$; hence
$$ \begin{equation} Q_{\rm M} \ll \ln T(\ln U)^2HU+\overline{r}^2(T)H^{1/3}(TU)^{2/3} \end{equation} \tag{13.21} $$
if $T^{1/4}(\ln T)^{-1} \ll U \ll T^{1/2}$ and $H \gg T^{1/2}(\ln T)^2$. If $H \ll T^{1/2}(\ln T)^2$, then by choosing $b$ from condition (13.17) and using (12.9) we arrive at the same estimate (13.21). This proves Theorem 13.
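The dyadic decomposition (13.7)–(13.11), on which the above proof rests, is a purely combinatorial telescoping identity and can be checked mechanically. A small Python verification (with an integer $b$, so that the arithmetic is exact; illustrative only):

import random

def dyadic_terms(j0, lam):
    # the expansion (13.7): j0 = sum of 2^{lam - mu_k}, 0 <= mu_1 < ... < mu_l <= lam,
    # together with the numbers n_{mu_k} defined in (13.10)
    mus = sorted(lam - e for e in range(lam + 1) if (j0 >> e) & 1)
    return [(sum(2 ** (mu - mus[i]) for i in range(k)), mu) for k, mu in enumerate(mus)]

def identity_13_11(g, x, j0, lam, b):
    lhs = g(x + j0 * b) - g(x)
    rhs = sum(g(x + (n_mu + 1) * 2 ** (lam - mu) * b) - g(x + n_mu * 2 ** (lam - mu) * b)
              for n_mu, mu in dyadic_terms(j0, lam))
    return lhs == rhs

g = lambda t: t ** 3 - 7 * t                    # any function of an integer argument will do
for _ in range(1000):
    lam, b = random.randint(1, 12), random.randint(1, 5)
    j0, x = random.randint(1, 2 ** lam), random.randint(0, 10 ** 6)
    assert identity_13_11(g, x, j0, lam, b)
print('identity (13.11) holds in all random instances tested')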

14. Estimates for the quantities $|P(T)-P(x)|$ and $|P(x)|$

A relation between these quantities was already considered in § 11. Our aim in the present section is to prove the following result.

Theorem 14. I. Assume that for $T \leqslant x \leqslant T+U$ ($U \ll T$)

$$ \begin{equation} |P(T)-P(x)| \ll T^\alpha U^\beta, \qquad \alpha,\beta>0. \end{equation} \tag{14.1} $$
Then
$$ \begin{equation} |P(T)| \ll T^{(\alpha+3\beta/4)/(1+\beta)}\quad\textit{for } \alpha<\frac{3}{4} \end{equation} \tag{14.2} $$
and
$$ \begin{equation} |P(T)| \ll T^{(\alpha+\beta)/(1+2\beta)}\quad\textit{for } 4\alpha+2\beta>1\quad (U \ll T^{1/2}). \end{equation} \tag{14.3} $$

II. Assume that for $T \leqslant x \leqslant T+U$ and $U \geqslant T^{1/2-\gamma}$ ($\gamma=\gamma(T)>0$)

$$ \begin{equation} |P(x)-P(T)|<B|P(T)|,\qquad B<1\quad (B=B(T)). \end{equation} \tag{14.4} $$
Then
$$ \begin{equation} |P(T)| \leqslant \frac{C}{1-B}\,T^{1/4+\gamma/2}. \end{equation} \tag{14.5} $$
In addition, if condition (14.4) holds for $U=T^{1/2}/\psi(T)$ ($\psi \in \{\varepsilon\}$), then
$$ \begin{equation} |P(T)| \leqslant \frac{C}{1-B}\,T^{1/4}\psi^{1/2}(T). \end{equation} \tag{14.6} $$

Proof. Consider the identity
$$ \begin{equation} P(T)=\frac{1}{U}\int_T^{T+U}P(x)\,dx+\frac{1}{U}\int_T^{T+U}[P(T)-P(x)]\,dx. \end{equation} \tag{14.7} $$
From (10.2) and (14.1) it follows that
$$ \begin{equation} |P(T)| \leqslant C_1\sqrt{\frac{T}{U}}+T^\alpha U^\beta \qquad (U \ll T^{1/2}) \end{equation} \tag{14.8} $$
and
$$ \begin{equation} |P(T)|\leqslant C_2\,\frac{T^{3/4}}{U}+T^\alpha U^\beta \qquad (U \ll T). \end{equation} \tag{14.9} $$
Let $U$ satisfy $T^{3/4}U^{-1}=T^\alpha U^\beta$, that is, let
$$ \begin{equation*} U=T^{(3/4-\alpha)/(1+\beta)}\qquad \Bigl(\alpha<\frac{3}{4}\Bigr). \end{equation*} \notag $$
Then estimate (14.2) follows from (14.9). It is non-trivial if
$$ \begin{equation} \frac{\alpha+3\beta/4}{1+\beta}<\frac{1}{3}\,,\quad\text{that is}, \quad \alpha+\frac{5}{12}\,\beta<\frac{1}{3}\,. \end{equation} \tag{14.10} $$
Consider the case $U \ll T^{1/2}$. Choosing $U$ from the condition $T^{1/2}U^{-1/2}=T^\alpha U^\beta$, we have
$$ \begin{equation} U=T^{(1-2\alpha)/(1+2\beta)}. \end{equation} \tag{14.11} $$
Note that $(1-2\alpha)/(1+2\beta)< 1/2$ since $4\alpha+2\beta>1$. Now estimate (14.3) follows from (14.8); this estimate is non-trivial if
$$ \begin{equation} \frac{\alpha+\beta}{1+2\beta}<\frac{1}{3}\,,\quad\text{that is}, \quad 3\alpha+\beta<1. \end{equation} \tag{14.12} $$
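The exponent arithmetic behind (14.2), (14.3), (14.10), and (14.12) is elementary but easy to get wrong; a short symbolic check in Python (illustration only):

import sympy as sp

a, b = sp.symbols('alpha beta', positive=True)

# balancing T^{3/4}/U = T^alpha U^beta gives U = T^{(3/4-alpha)/(1+beta)} and the exponent in (14.2)
u1 = (sp.Rational(3, 4) - a) / (1 + b)
assert sp.simplify(a + b * u1 - (a + sp.Rational(3, 4) * b) / (1 + b)) == 0

# balancing (T/U)^{1/2} = T^alpha U^beta gives U = T^{(1-2alpha)/(1+2beta)} and the exponent in (14.3)
u2 = (1 - 2 * a) / (1 + 2 * b)
assert sp.simplify(a + b * u2 - (a + b) / (1 + 2 * b)) == 0
assert sp.simplify(sp.Rational(1, 2) - u2 / 2 - (a + b) / (1 + 2 * b)) == 0

# (alpha + 3*beta/4)/(1+beta) < 1/3 is equivalent to alpha + 5*beta/12 < 1/3, as stated in (14.10)
assert sp.simplify((3 * (a + sp.Rational(3, 4) * b) - (1 + b)) - (3 * (a + sp.Rational(5, 12) * b) - 1)) == 0
print('exponent identities behind (14.2), (14.3), and (14.10) confirmed')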
According to Jutila’s conjecture, if $T<x<T+U$, then
$$ \begin{equation} |P(T)-P(x)| \ll T^\varepsilon U^{1/2}\quad\text{for } T^\varepsilon<U< T^{1/2-\varepsilon}. \end{equation} \tag{14.13} $$
It follows from (14.3) that in this case $|P(T)|\ll T^{1/4+\varepsilon}$, that is, the solution of the circle problem follows from Jutila’s conjecture.

Let us prove assertion II. We have $U \ll T^{1/2}$, and now from (14.7) and (10.2) it follows that

$$ \begin{equation} |P(T)| \ll \sqrt{\frac{T}{U}}+\frac{1}{U}\int_T^{T+U}|P(x)-P(T)|\,dx. \end{equation} \tag{14.14} $$
Hence by (14.4)
$$ \begin{equation} |P(T)| \ll \sqrt{\frac{T}{U}}+B|P(T)|. \end{equation} \tag{14.15} $$
Estimates (14.5) and (14.6) are direct corollaries of (14.15). This proves Theorem 14.14.

At present, there is no proof of estimate (14.1) for $\alpha$ and $\beta$ satisfying condition (14.10) or (14.12). Let us show that for $U \ll T^{3/5}$,

$$ \begin{equation} |P(T+U)-P(T)| \ll U^{1/4}T^{1/4}\psi(T),\qquad \psi \in \{\varepsilon\}. \end{equation} \tag{14.16} $$
Note that in this case we have the inequality $3\alpha+\beta \geqslant 1+\varepsilon$, and the estimate for $|P(x)|$, which follows from (14.16), is trivial.

From the truncated Voronoi formula (5.1) it follows that

$$ \begin{equation} \begin{aligned} \, P(T+U)-P(T)&=-\frac{1}{\pi}\sum_{j=1}^N\frac{r(j)}{j^{3/4}} \int_T^{T+U}\frac{d}{dy}\biggl(y^{1/4}\cos\biggl(2\pi \sqrt{jy}+ \frac{\pi}{4}\biggr)\biggr)\,dy \\ &\qquad+\Delta_NP(T,U), \\ \Delta_NP(T,U)&=\Delta_NP(T+U)-\Delta_NP(T). \end{aligned} \end{equation} \tag{14.17} $$
Since
$$ \begin{equation*} \begin{aligned} \, &\frac{d}{dy}\biggl(y^{1/4}\cos\biggl(2\pi \sqrt{jy}+ \frac{\pi}{4}\biggr)\biggr)=\frac{1}{4}\,y^{-3/4} \cos\biggl(2\pi \sqrt{jy}+\frac{\pi}{4}\biggr) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\pi y^{-1/4}\sqrt{j}\,\sin\biggl(2\pi \sqrt{jy}+ \frac{\pi}{4}\biggr) \end{aligned} \end{equation*} \notag $$
and
$$ \begin{equation*} \sum_{j \leqslant N}\frac{r(j)}{j^{3/4}}\int_T^{T+U}y^{-3/4} \cos\biggl(2\pi \sqrt{jy}+\frac{\pi }{4}\biggr)\,dy\ll \frac{U}{T^{3/4}}\,N^{1/4}, \end{equation*} \notag $$
we have
$$ \begin{equation} \begin{gathered} \, P(T+U)-P(T)=\int_T^{T+U}y^{-1/4}f(y)\,dy+ O\biggl(\frac{U}{T^{3/4}}\,N^{1/4}\biggr)+\Delta_N P(T,U), \\ f(y)=\sum_{j=1}^N\frac{r(j)}{j^{1/4}}\sin \biggl(2\pi\sqrt{jy}+ \frac{\pi }{4}\biggr). \end{gathered} \end{equation} \tag{14.18} $$
By Cauchy’s inequality, from (14.18) we obtain
$$ \begin{equation} \begin{gathered} \, |P(T+U)-P(T)| \ll \frac{U^{1/2}}{T^{1/4}}\,R_1^{1/2}+ \frac{UN^{1/4}}{T^{3/4}}+\Delta_N P(T,U), \\ R_1=\int_T^{T+U} f^2(y)\,dy. \end{gathered} \end{equation} \tag{14.19} $$
We write $R_1$ in the form
$$ \begin{equation} \begin{aligned} \, \nonumber R_1&=\sum_{j=1}^N\frac{r^2(j)}{\sqrt{j}}\int_T^{T+U} \sin^2\biggl(2\pi \sqrt{jy}+\frac{\pi }{4}\biggr)\,dy \\ &\qquad+2\sum_{1 \leqslant j<l \leqslant N}\frac{r(j)r(l)}{j^{1/4}\,l^{1/4}} \int_T^{T+U}\sin\biggl(2\pi \sqrt{ly}+\frac{\pi }{4}\biggr) \sin\biggl(2\pi \sqrt{jy}+\frac{\pi }{4}\biggr)\,dy. \end{aligned} \end{equation} \tag{14.20} $$
Since
$$ \begin{equation} \int_T^{T+U}\sin\biggl(2\pi \sqrt{jy}+\frac{\pi }{4}\biggr) \sin\biggl(2\pi \sqrt{ly}+\frac{\pi }{4}\biggr)\,dy\ll \frac{\sqrt{T}}{\sqrt{l}-\sqrt{j}}\,, \end{equation} \tag{14.21} $$
we have
$$ \begin{equation} R_1 \ll U\sum_{j \leqslant N}\frac{r^2(j)}{\sqrt{j}}+\sqrt{T}\, \sum_{1 \leqslant j<l \leqslant N} \frac{r(j)r(l)}{(\sqrt{l}-\sqrt{j}\,)j^{1/4}\,l^{1/4}}\,. \end{equation} \tag{14.22} $$
The sums on the right-hand side of (14.22) can easily be estimated. We have
$$ \begin{equation*} \sum_{j \leqslant N}\frac{r^2(j)}{\sqrt{j}} \ll N^{1/2}\ln N \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, \sum_{1 \leqslant j<l \leqslant N} \frac{r(j)r(l)}{(\sqrt{l}-\sqrt{j}\,)j^{1/4}\,l^{1/4}} &=\sum_{1 \leqslant j<l \leqslant N} \frac{r(j)r(l)j^{1/2}\,l^{1/2}}{(\sqrt{l}-\sqrt{j}\,)j^{3/4}\,l^{3/4}} \\ &\ll N\sum_{1 \leqslant j<l \leqslant N} \frac{r(j)r(l)}{(\sqrt{l}-\sqrt{j}\,)j^{3/4}l^{3/4}}\ll N(\ln N)^2 \end{aligned} \end{equation*} \notag $$
(in the last estimate we used the last relation in (12.18)). Using these results we have
$$ \begin{equation} R_1 \ll UN^{1/2}\ln N+\sqrt{T}\,N(\ln N)^2. \end{equation} \tag{14.23} $$
Therefore,
$$ \begin{equation} R_1 \ll \sqrt{T}\,N(\ln N)^2 \end{equation} \tag{14.24} $$
under the condition that
$$ \begin{equation} U<\sqrt{TN}\,. \end{equation} \tag{14.25} $$
An application of estimate (14.24) to (14.19) shows that
$$ \begin{equation} |P(T+U)-P(T)| \ll U^{1/2}N^{1/2}\ln N+\frac{UN^{1/4}}{T^{3/4}}+ \Delta_NP(T,U). \end{equation} \tag{14.26} $$
For $U \ll T$ (see (14.17)) we have
$$ \begin{equation} \Delta_NP(T,U) \ll \sqrt{\frac{T}{N}}\,\overline{r}(N), \end{equation} \tag{14.27} $$
and so by (14.25)
$$ \begin{equation} |P(T+U)-P(T)| \ll U^{1/2}N^{1/2}\ln N+\sqrt{\frac{T}{N}}\,\overline{r}(N). \end{equation} \tag{14.28} $$
Choosing $N$ from the condition $U^{1/2}N^{1/2}=\sqrt{T/N}$ we find that
$$ \begin{equation} N=\sqrt{\frac{T}{U}}\,. \end{equation} \tag{14.29} $$
From (14.25) it follows that
$$ \begin{equation} U<T^{3/5}, \end{equation} \tag{14.30} $$
and by (14.28), for $U < T^{3/5}$ we have
$$ \begin{equation} |P(T+U)-P(T)| \ll U^{1/4}\,T^{1/4}\,\overline{r}(T). \end{equation} \tag{14.31} $$
This proves estimate (14.16).
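For moderate $T$, estimate (14.31) can also be tested by counting lattice points directly; the Python sketch below (parameters chosen ad hoc, purely illustrative) does this.

from math import isqrt, pi

def P(x):
    # P(x) = #{(a, b) in Z^2 : a^2 + b^2 <= x} - pi*x, for integer x
    return sum(2 * isqrt(x - a * a) + 1 for a in range(-isqrt(x), isqrt(x) + 1)) - pi * x

T = 10 ** 7                         # here T^{3/5} is about 1.6 * 10^4
for U in (10 ** 2, 10 ** 3, 10 ** 4):
    diff = abs(P(T + U) - P(T))
    print(U, round(diff, 2), round(diff / (U * T) ** 0.25, 3))
# the last column stays of moderate size, in accordance with (14.31)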

15. Behaviour of $P(x)$ outside the strip where $|P(x)|<Cx^{1/4}$

In this section we study the behaviour of $P(x)$ on the set $S \subset [T,2T]$ such that

$$ \begin{equation} |P(x)|>Cx^{1/4}\qquad (x \in S). \end{equation} \tag{15.1} $$
Let $x_0 \in S$ be a local maximum point of $|P(x)|$. This maximum is said to be high if $|P(x_0)|> \eta T^{1/4}$ ($\eta>1$), and it is broad if the inequality
$$ \begin{equation} |P(x)-P(x_0)|<B|P(x_0)|\qquad (B < 1) \end{equation} \tag{15.2} $$
holds for
$$ \begin{equation} |x-x_0|=T^{1/2-\varepsilon}. \end{equation} \tag{15.3} $$
We showed above (see Theorem 14) that in this case $|P(x)|<CT^{1/4+\varepsilon}$. In what follows $|U|$ denotes the length of the interval $U$.

Theorem 15. There exist sets $V^\pm \subset S$ and quantities $\delta_0>0$ and $\lambda>1$ such that

$$ \begin{equation} V^\pm=\bigcup_\alpha U_\alpha^\pm,\qquad U_\alpha^\pm=[x_\alpha^\pm,x_\alpha^\pm+|U_\alpha^\pm|]. \end{equation} \tag{15.4} $$
In addition, the intervals $U_\alpha^\pm$ are disjoint,
$$ \begin{equation} |U_\alpha^\pm| \leqslant \frac{C_1}{\lambda}\,T^{1/2}(\ln T)^{-3} \end{equation} \tag{15.5} $$
and for all $\delta<\delta_0$,
$$ \begin{equation} P(x)>C_2\sqrt{\delta}\,T^{1/4}\quad (x \in U_\alpha^+),\qquad P(x)<-C_2\sqrt{\delta}\,T^{1/4}\quad (x \in U_\alpha^-), \end{equation} \tag{15.6} $$
$$ \begin{equation} |P(x_\alpha^\pm+v)-P(x_\alpha^\pm)|<\lambda^{-1/2}|P(x_\alpha^\pm)|\quad (v \leqslant |U_\alpha^\pm|), \end{equation} \tag{15.7} $$
$$ \begin{equation} \mu\{V^\pm\}>C_3T\quad ( \mu\{V^\pm\}\textit{ is the Lebesgue measure}), \end{equation} \tag{15.8} $$
and
$$ \begin{equation} |P(x)|<C_4\,\frac{\sqrt{\lambda}+1}{\sqrt{\lambda}-1}\, T^{1/4}(\ln T)^{3/2}\quad (x \in U_\alpha^\pm). \end{equation} \tag{15.9} $$
All constants $C_i$ are absolute. These constants, as well as $\lambda$ and $\delta_0$, can be indicated explicitly.

Proof. Let us construct the set $V^+$. Consider the quantity
$$ \begin{equation} P_+(x)=\begin{cases} P(x), & P(x)>0, \\ 0, & P(x)<0. \end{cases} \end{equation} \tag{15.10} $$
We show that
$$ \begin{equation} \int_T^{2T}P_+^2(x)\,dx \geqslant C_+ T^{3/2}. \end{equation} \tag{15.11} $$
According to § 6,
$$ \begin{equation} C'_2 T^{3/2} \leqslant \int_T^{2T} P^2(x)\,dx \leqslant C''_2 T^{3/2}\quad\text{and}\quad \int_T^{2T} P^4(x)\,dx \leqslant C_4 T^2. \end{equation} \tag{15.12} $$
Since
$$ \begin{equation*} \int_T^{2T} P^2(x)\,dx \leqslant \biggl(\int_T^{2T}|P(x)|^3\,dx\biggr)^{1/2} \biggl(\int_T^{2T}|P(x)|\,dx\biggr)^{1/2} \end{equation*} \tag{15.13} $$
and
$$ \begin{equation*} \int_T^{2T}|P(x)|^3\,dx \leqslant \biggl(\int_T^{2T}P^4(x)\,dx\biggr)^{1/2} \biggl(\int_T^{2T}P^2(x)\,dx\biggr)^{1/2}, \end{equation*} \tag{15.14} $$
we have the estimate
$$ \begin{equation} \int_T^{2T}|P(x)|\,dx \gg T^{5/4}. \end{equation} \tag{15.15} $$
By (6.2)
$$ \begin{equation*} \biggl|\int_T^{2T}P(x)\,dx\biggr| \ll T^{3/4}. \end{equation*} \notag $$
On the other hand
$$ \begin{equation} \int_T^{2T}|P(x)|\,dx=2\int_T^{2T}P_+(x)\,dx-\int_T^{2T}P(x)\,dx \end{equation} \tag{15.16} $$
and therefore
$$ \begin{equation*} \int_T^{2T}P_+(x)\,dx \gg \int_T^{2T}|P(x)|\,dx \gg T^{5/4}. \end{equation*} \notag $$
We have
$$ \begin{equation*} \int_T^{2T}|P_+(x)|\,dx \leqslant T^{1/2}\biggl(\int_T^{2T}P_+^2(x)\,dx\biggr)^{1/2}, \end{equation*} \notag $$
and now estimate (15.11) follows from (15.15).
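The chain of Cauchy–Schwarz inequalities leading from (15.12) to (15.15) holds for arbitrary integrable functions; a short numerical illustration in Python (with arbitrary sample values in place of $P$):

import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(size=10 ** 5) * (1.0 + rng.random(10 ** 5))      # stand-in values for P(x)
I1, I2, I3, I4 = (np.mean(np.abs(sample) ** k) for k in (1, 2, 3, 4))
print(I2 <= np.sqrt(I1 * I3), I3 <= np.sqrt(I2 * I4))                # the two Cauchy-Schwarz steps above
print(I1 >= I2 ** 1.5 / np.sqrt(I4))                                  # the lower bound behind (15.15)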

Consider the quantity

$$ \begin{equation} W_+(x)=P_+^2(x)-\lambda\max_{0 \leqslant v \leqslant U}|P(x+v)-P(x)|^2- \delta x^{1/2},\qquad \lambda>0, \end{equation} \tag{15.17} $$
and set
$$ \begin{equation} S_+=\{x \in [T,2T]\colon W_+(x)>0\}. \end{equation} \tag{15.18} $$
If $x \in S_+$, then
$$ \begin{equation*} P_+(x)>\sqrt{\delta}\,x^{1/4}. \end{equation*} \notag $$
Let us estimate the quantity $|S_+|=\mu\{S_+\}$. Note that
$$ \begin{equation*} \begin{aligned} \, \int_T^{2T}W_+(x)\,dx&<\int_{S_+}W_+(x)\,dx<\int_{S_+}P_+^2(x)\,dx \\ &<|S_+|^{1/2}\biggl(\,\int_{S_+}P_+^4(x)\,dx\biggr)^{1/2}< |S_+|^{1/2}\biggl(\,\int_T^{2T}P^4(x)\,dx \biggr)^{1/2}. \end{aligned} \end{equation*} \notag $$
Since $\displaystyle\int_T^{2T}P^4(x)\,dx<C_4 T^2$, we have
$$ \begin{equation} \int_T^{2T}W_+(x)\,dx\leqslant |S_+|^{1/2}C_4^{1/2}T. \end{equation} \tag{15.19} $$
By the definition (15.17) of $W_+(x)$,
$$ \begin{equation*} \begin{gathered} \, \int_T^{2T}W_+(x)\,dx=\int_T^{2T}P_+^2(x)\,dx- \lambda\int_T^{2T}\max_{0\leqslant v\leqslant U}|P(x+v)-P(x)|^2\,dx- a\delta T^{3/2}, \\ a=\frac{2}{3}(2^{3/2}-1). \end{gathered} \end{equation*} \notag $$
Using estimates (13.3) and (15.11), we have (for $U<T^{1/2}$)
$$ \begin{equation} \int_T^{2T}W_+(x)\,dx>C_+T^{3/2}-\lambda C_{\rm M} TU(\ln T)^3- \delta aT^{3/2}. \end{equation} \tag{15.20} $$
Let $\delta$ and $\lambda$ satisfy the conditions
$$ \begin{equation} \begin{gathered} \, \lambda C_{\rm M} TU(\ln T)^3 < \frac{C_+}{4}\,T^{3/2}, \\ \delta \leqslant \delta_0=\frac{1}{4a}\,C_+,\quad\text{where } \lambda>1\quad\text{and}\quad U<T^{1/2}. \end{gathered} \end{equation} \tag{15.21} $$
Then we have
$$ \begin{equation} U \leqslant \frac{C_+}{C_{\rm M}}\,\frac{1}{4\lambda}\, \frac{\sqrt{T}}{(\ln T)^3}\,. \end{equation} \tag{15.22} $$
For this choice of $\delta$ and $\lambda$, from (15.20) and (15.21) we obtain
$$ \begin{equation} \int_T^{2T}W_+(x)\,dx>\frac{1}{2}\,C_+ T^{3/2}, \end{equation} \tag{15.23} $$
and now the required estimate
$$ \begin{equation} |S_+|>\frac{1}{4}\,\frac{C_+^2}{C_4}\,T \end{equation} \tag{15.24} $$
follows from (15.19) and (15.23). The set $V^+$ is constructed as follows. Let $x_\alpha^+$ be a point in $S_+$ such that $x_\alpha^+-0 \notin S_+$ and $|U_\alpha^+|=U$ (see (15.22)). We claim that the set
$$ \begin{equation*} V^+=\bigcup_\alpha U_\alpha^+,\qquad U_\alpha^+=\bigl[x_\alpha^+,x_\alpha^++|U_\alpha^+|\bigr], \end{equation*} \notag $$
satisfies the conditions of Theorem 15. Indeed, from (15.17) and (15.18) we have
$$ \begin{equation} P(x_\alpha^+)>\sqrt{\lambda}\,|P(x_\alpha^++v)-P(x_\alpha^+)|,\qquad 0 \leqslant v \leqslant |U_\alpha^+|. \end{equation} \tag{15.25} $$
Consequently,
$$ \begin{equation} |P(x_\alpha^++v)-P(x_\alpha^+)|<\lambda^{-1/2}P(x_\alpha^+),\qquad 0 \leqslant v \leqslant |U_\alpha^+|, \end{equation} \tag{15.26} $$
and therefore $x=x_\alpha^++v \in S_+$.

It follows from (15.17) and (15.18) that $P(x)>\sqrt{\delta}\,x^{1/4}$ for $x \in S_+$. We have

$$ \begin{equation} (1-\lambda^{-1/2})P(x_\alpha^+)<P(x)<(1+\lambda^{-1/2})P(x_\alpha^+),\qquad x \in V^+, \end{equation} \tag{15.27} $$
and now the estimate
$$ \begin{equation} P(x)<C\,\frac{\sqrt{\lambda}+1}{\sqrt{\lambda}-1}\,T^{1/4}(\ln T)^{3/2},\qquad x \in V^+, \end{equation} \tag{15.28} $$
follows from (14.6). Thus, the set $V^+$ is constructed. To construct $V^-$ one proceeds similarly and considers $\widetilde{P}(x)=-P(x)$. This proves Theorem 15.

By (15.7) all local maxima of $|P(x)|$ for $x \in V^+ \cup V^-$ are broad, and for all $x \in V^+ \cup V^-$ estimate (14.6) holds, that is, the circle problem is solved on the set $V^+ \cup V^-$. In addition, estimate (15.9) holds on this set.

Theorems 8 and 15 suggest the following conjectures on the behaviour of the quantity $P(x)$ for $x \in [T,2T]$. Let $S$ and $\overline{S}$ be sets such that

$$ \begin{equation} P(x)\geqslant CT^{1/4}\quad (x \in S),\quad |P(x)|< CT^{1/4}\quad (x \in \overline{S}),\quad\text{and}\quad S \cup \overline{S}=[T,2T]. \end{equation} \tag{15.29} $$
For definiteness we assume that $P(T) \geqslant CT^{1/4}$.

Conjecture. The set $S$ can be written in the form

$$ \begin{equation} S=S_+^{(1)} \cup S_-^{(1)} \cup S_+^{(2)} \cup S_-^{(2)} \cup\cdots\cup S_\pm^{(N)}, \end{equation} \tag{15.30} $$
where the $S_\pm^{(i)}$, $i=1,2,\dots,N$, are disjoint intervals arranged in the increasing order of $x$, and
$$ \begin{equation} \begin{gathered} \, P(x)<CT^{1/4}(\ln T)^{3/2}\qquad (x \in S_+^{(i)}), \\ |P(x)|<CT^{1/4}(\ln T)^{3/2},\quad P(x)<0 \qquad (x \in S_-^{(i)}), \\ d \asymp \sqrt{T}\,, \end{gathered} \end{equation} \tag{15.31} $$
where $d$ is the distance between the centres of neighbouring intervals in the expansion (15.30) and $N=C\sqrt{T}$.

Comments to Chapter III

1. Estimate (10.2) was proved in [46].

2. The fourth local moment was considered in [47], where estimate (10.4) was proved. It was also shown in [47] that $E_4(T,H) \ll T^{1+\varepsilon}$ for $H \geqslant T^{2/3}$. Our proof of (10.4) follows mainly the arguments in [47]. The existence of a function $\varphi(x)$ (see (10.24)) was verified in [48].

Estimate (10.50) was proved in [49].

3. In § 11 we presented the result of [46].

4. The integral $\displaystyle\int_T^{T+H}[\Delta(x+U)-\Delta(x)]^2\,dx$ was considered in [49], where it was shown that for $1 \ll U \ll T^{1/2} \ll H \leqslant T$,

$$ \begin{equation*} \begin{aligned} \, \int_T^{T+H}[\Delta(x+U)-\Delta(x)]^2\,dx&=\frac{1}{4\pi^2} \sum_{n \leqslant T/(2U)}\frac{d^2(n)}{n^{3/2}}\int_T^{T+H}x^{1/2} \biggl|\exp\biggl\{2\pi iU\sqrt{\frac{n}{x}}\,\biggr\}-1\biggr|^2\,dx \\ &\qquad+O(T^{1+\varepsilon}+H\sqrt{U}\,T^\varepsilon), \end{aligned} \end{equation*} \notag $$
and for $HU \gg T^{1+\varepsilon}$, $T^\varepsilon \ll U \leqslant \sqrt{T}/2$, the two-sided estimate
$$ \begin{equation*} \int_T^{T+H}[\Delta(x+U)-\Delta(x)]^2\,dx \asymp HU\ln\frac{\sqrt{T}}{U} \end{equation*} \notag $$
holds. Our proof of Theorem 12 follows the arguments of [50]. Estimate (15.19) and relation (15.20) were proved in [50].

5. Equality (12.38) (although with a less sharp estimate for the remainder term) was established in [51].

6. The behaviour of the correlation function for $U \gg T^{1/2}$ was studied in [52].

7. The second local moment of the quantity $\max_{v \leqslant U}|\Delta(x+v)-\Delta(x)|$ was considered in [41], where it was shown that

$$ \begin{equation*} \int_T^{2T}\max_{v \leqslant U}|\Delta(x+v)-\Delta(x)|^2\,dx \ll TU(\ln T)^5. \end{equation*} \notag $$
Our proof of Theorem 13 follows [41] and [51].

8. Theorem 14 is essentially contained in [46] (see also [7]).

9. The estimate $|\Delta(x+U)-\Delta(x)| \ll x^{1/4+\varepsilon}U^{1/4}$ ($1 \ll U \ll x^{3/5}$) was proved in [53]. Our proof of estimates (14.16) and (14.31) follows the approach of [53].

10. It was shown in [41] that there exist disjoint intervals $U_\alpha \subset [T,2T]$ of width $|U_\alpha| \sim \sqrt{T}\,(\ln T)^{-5}$ such that for $x \in U_\alpha$,

$$ \begin{equation*} |\Delta(x)|>CT^{1/4}. \end{equation*} \notag $$
Our proof of Theorem 15 follows the arguments of [41] and [53]. Note that no upper estimate of $|\Delta(x)|$ for $x \in U_\alpha$ was proved in those papers. A possibility of such an estimate was mentioned in [7] and [46].

11. The paper [54] presents the results of a numerical experiment aimed at verifying the conjecture that all sufficiently high maxima are broad. Let $x_\alpha \in [T,2T]$ be a local maximum of $|P(x)|$, and let $|P(x_\alpha)|=:h_\alpha$. The width $u_\alpha$ of this maximum is defined from the condition that

$$ \begin{equation*} |P(x_\alpha)|>\dfrac{1}{2}h_\alpha\quad\text{for}\ \ |x-x_\alpha|<u_\alpha. \end{equation*} \notag $$
By Theorem 14 we have
$$ \begin{equation*} h_\alpha < CT^{1/4}(\ln T)^{\rho/2}, \quad\text{if}\ \ u_\alpha=T^{1/2}(\ln T)^{-\rho}\ \ (\rho>0). \end{equation*} \notag $$
In this case we call the maximum $x_\alpha$ $\rho$-broad. In [54] the conjecture that all local maxima $x_\alpha$ such that $|P(x_\alpha)|>\eta T^{1/4}$ ($\eta>1$) are $\rho$-broad was considered. This conjecture was verified for $x \in I=I_1 \cup I_2$, where $I_1=[10^7,3.2\cdot 10^8]$ and $I_2=[10^{12},10^{12}+10^8]$, which is a sufficiently representative set. The numerical experiment showed that for $x \in I$ and $\eta=3$ all local maxima are 2-broad. Hence, for $x_\alpha \in I$,
$$ \begin{equation*} |P(x_\alpha)|<CT^{1/4}\ln T. \end{equation*} \notag $$
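A very small-scale version of such an experiment (far more modest than the computations in [54], and intended only as an illustration; the window and the threshold are chosen ad hoc) can be run in a few lines of Python: it evaluates $P(n)$ at the jump points of a short window, finds the local maxima of $|P(n)|$ exceeding $3T^{1/4}$, and measures their widths at half height.

from math import isqrt, pi

T, W = 10 ** 7, 2 * 10 ** 5

def lattice_count(x):
    # number of integer points in the disc a^2 + b^2 <= x
    return sum(2 * isqrt(x - a * a) + 1 for a in range(-isqrt(x), isqrt(x) + 1))

# r(n) for T < n <= T + W, obtained by scanning the lattice points of the annulus
r = [0] * (W + 1)
for a in range(-isqrt(T + W), isqrt(T + W) + 1):
    lo = T + 1 - a * a
    b0 = isqrt(lo - 1) + 1 if lo > 0 else 0          # smallest b >= 0 with a^2 + b^2 > T
    for b in range(b0, isqrt(T + W - a * a) + 1):
        r[a * a + b * b - T] += 2 if b > 0 else 1

# P just after the jump at each integer point of the window
P, count = [0.0] * (W + 1), lattice_count(T)
P[0] = count - pi * T
for k in range(1, W + 1):
    count += r[k]
    P[k] = count - pi * (T + k)

eta, peaks = 3 * T ** 0.25, []
for k in range(1, W):
    if abs(P[k]) > eta and abs(P[k]) >= abs(P[k - 1]) and abs(P[k]) >= abs(P[k + 1]):
        h, i, j = abs(P[k]), k, k
        while i > 0 and abs(P[i]) > h / 2:
            i -= 1
        while j < W and abs(P[j]) > h / 2:
            j += 1
        peaks.append((T + k, round(h, 1), j - i))
print(len(peaks), 'maxima above 3*T^{1/4}; (position, height, width at half height):', peaks[:5])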

In conclusion, we formulate some conjectures and unsolved problems on the behaviour of $|P(x)|$ for large $x$.

Conjectures

The strongest conjecture on the behaviour of $P(x)$ on the interval $[T,2T]$ was formulated at the end of § 15.

The proof of any of conjectures (1)–(4) that follow would allow one to solve the circle problem.

(1) There exists an arbitrarily large $k$ such that

$$ \begin{equation*} \overline{M}_k(T) \ll T^{1+k/4+\varepsilon} \end{equation*} \notag $$
(the quantity $\overline{M}_k(T)$ was defined in § 6).

(2) Let $|P(T)|>CT^{1/4}$. Then $s(T)=2$ (the quantity $s(T)$ was defined in § 11).

(3) If $|x-T|\ll T^{1/2-\varepsilon}$, then $|P(x)-P(T)| \ll T^\varepsilon |x-T|^{1/2}$ (Jutila's conjecture).

(4) If $x_0 \in [T,2T]$ is a local maximum of $|P(x)|$ and $|P(x_0)|>CT^{1/4}$, then

$$ \begin{equation*} |P(x)-P(x_0)| \leqslant \frac{1}{2}\,|P(x_0)|\quad\textit{for}\ \ |x-x_0| \ll T^{1/2-\varepsilon}, \end{equation*} \notag $$
that is, all sufficiently high maxima are broad.

Problems

Solving any of problems (1)–(4) that follow would allow one to obtain a non-trivial estimate for $|P(x)|$.

(1) Prove that $\overline{M}_{k}(T) \ll T^{1+k/4+\varepsilon}$ for some $k>9$.

(2) Prove that if $|P(T)|>CT^{1/4}$, then $s(T)>1$.

(3) Improve the estimates in Theorem 10 for the quantities $E_k(T,H)$, $k=4,6$. In particular, show that $E_4(T,H)\ll T^{1+\varepsilon}$ for $H \geqslant T^{1/2}$.

(4) Prove the estimate

$$ \begin{equation*} |P(x+U)-P(x)| \ll T^\alpha U^\beta\qquad (U \ll T) \end{equation*} \notag $$
for $\alpha<3/4$ and $\alpha+5\beta/12 <1/3$ or for $\alpha<1/2$ and $3\alpha+\beta<1$ (see § 14).

Appendix A. Truncated Perron formula

Theorem A.1. Assume that a Dirichlet series

$$ \begin{equation} F(s)=\sum_{n=1}^\infty \frac{a_n}{n^s},\qquad s=\sigma+it, \end{equation} \tag{A.1} $$
converges absolutely for $\sigma>1$ and
$$ \begin{equation} \sum_{n=1}^\infty \frac{|a_n|}{n^b}\leqslant A(b)\qquad (b>1). \end{equation} \tag{A.2} $$
Let $b$, $\beta$, $x$, and $T$ be such that
$$ \begin{equation} b>1,\quad 0<C<\beta<1,\quad T>b,\quad\textit{and}\quad x>1. \end{equation} \tag{A.3} $$
Then
$$ \begin{equation} \sum_{1\leqslant n<x-1}a_n=\frac{1}{2\pi i}\int_{b-iT}^{b+iT} F(s)x^s\,\frac{ds}{s}+\Delta(x), \end{equation} \tag{A.4} $$
where for each $n \in \mathbb{Z}_+$,
$$ \begin{equation} \begin{aligned} \, \nonumber |\Delta(x)|&\ll \max_{|x-n|\leqslant 1}|a_n|\,2^b+ \frac{x^b}{T\ln(1+\beta)}\,A(b) \\ &\qquad+\frac{x}{T}\,\frac{\beta}{(1-\beta)^b\ln(1+\beta)} \max_{|x-n|<\beta x}|a_n|\cdot \begin{cases} 1, & \beta x>1, \\ 0, & \beta x<1. \end{cases} \end{aligned} \end{equation} \tag{A.5} $$

Proof. Consider the integral
$$ \begin{equation} I(x)=\frac{1}{2\pi i}\int_{b-iT}^{b+iT}F(s)x^s\,\frac{ds}{s}\,. \end{equation} \tag{A.6} $$
By (A.2) the series $F(s)$ (see (A.1)) can be integrated termwise. Therefore,
$$ \begin{equation} I(x)=\sum_{n=1}^\infty a_n I_n(x), \end{equation} \tag{A.7} $$
where
$$ \begin{equation} I_n(x)=\frac{1}{2\pi i}\int_{b-iT}^{b+iT} \biggl(\frac{x}{n}\biggr)^s\,\frac{ds}{s}\,. \end{equation} \tag{A.8} $$
We set
$$ \begin{equation} a=\frac{x}{n}\,,\qquad \lambda=\ln a=\ln\frac{x}{n} \end{equation} \tag{A.9} $$
and write the integral $I_n(x)$ in the form
$$ \begin{equation*} \begin{aligned} \, I_n(x)&=\frac{a^b}{2\pi }\int_0^T e^{i\lambda t}\,\frac{dt}{b+it}+ \frac{a^b}{2\pi }\int_0^T e^{-i\lambda t}\,\frac{dt}{b-it} \\ &=\frac{a^b b}{\pi}\int_0^T \frac{\cos \lambda t}{b^2+t^2}\,dt+ \frac{a^b}{\pi}\int_0^T \frac{t\sin \lambda t}{b^2+t^2}\,dt. \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation} I_n(x)=\frac{a^b b}{\pi}\int_0^\infty \frac{\cos \lambda t}{b^2+t^2}\,dt+ \frac{a^b}{\pi}\int_0^\infty \frac{t\sin \lambda t}{b^2+t^2}\,dt- \Delta_1I_n(x), \end{equation} \tag{A.10} $$
where
$$ \begin{equation} \Delta_1I_n(x)=\frac{a^b b}{\pi}\int_T^\infty \frac{\cos\lambda t}{b^2+t^2}\,dt+\frac{a^b}{\pi}\int_T^\infty \frac{t\sin \lambda t}{b^2+t^2}\,dt. \end{equation} \tag{A.11} $$
We have
$$ \begin{equation*} \int_0^\infty \frac{\cos \lambda t}{b^2+t^2}\,dt= \frac{\pi }{2b}\,e^{-|\lambda|b}\quad\text{and}\quad \int_0^\infty \frac{t\sin \lambda t}{b^2+t^2}\,dt= \frac{\pi }{2}\,e^{-|\lambda|b}\operatorname{sgn}\lambda, \end{equation*} \notag $$
where
$$ \begin{equation*} \operatorname{sgn}\lambda=\begin{cases} 1, & \lambda>0, \\ 0, & \lambda=0, \\ -1, & \lambda<0, \end{cases} \end{equation*} \notag $$
and now it follows from (A.10) that
$$ \begin{equation} I_n(x)=\frac{a^b}{2}\,e^{-|\lambda|b}(1+\operatorname{sgn}\lambda)- \Delta_1I_n(x). \end{equation} \tag{A.12} $$
Consider the quantity $\Delta_1I_n(x)$ defined by (A.11). The first integral on the right-hand side of (A.11) can easily be estimated:
$$ \begin{equation*} \biggl|\int_T^\infty \frac{\cos\lambda t}{b^2+t^2}\,dt\biggr|< \int_T^\infty\frac{dt}{t^2}=\frac{C}{T}\,. \end{equation*} \notag $$
We write the second integral in the form
$$ \begin{equation*} \int_T^\infty \frac{t\sin\lambda t}{b^2+t^2}\,dt= \int_T^\infty \frac{\sin\lambda t}{t}\,dt- b^2\int_T^\infty \frac{\sin\lambda t}{t(b^2+t^2)}\,dt. \end{equation*} \notag $$
We use the standard notation
$$ \begin{equation} \begin{aligned} \, \operatorname{si}x&=-\displaystyle\int_x^\infty \dfrac{\sin t}{t}\,dt= \operatorname{Si}x-\frac{\pi }{2}\,, \\ \operatorname{Si}x&=\displaystyle\int_0^x\dfrac{\sin t}{t}\,dt. \end{aligned} \end{equation} \tag{A.13} $$
Since
$$ \begin{equation*} \biggl|\int_T^\infty\frac{\sin\lambda t}{t(b^2+t^2)}\,dt\biggr|\leqslant \int_T^\infty\frac{dt}{t^3}=\frac{1}{2T^2}\,, \end{equation*} \notag $$
we have
$$ \begin{equation} \Delta_1I_n=-\frac{a^b}{\pi}\operatorname{si}(|\lambda T|) \operatorname{sgn}\lambda+\Delta_2I_n, \end{equation} \tag{A.14} $$
$$ \begin{equation} |\Delta_2I_n|\leqslant \frac{a^bb}{\pi}\,\frac{1}{T}+ \frac{a^b}{\pi}\,\frac{b^2}{2T^2}, \end{equation} \tag{A.15} $$
and (see (A.12))
$$ \begin{equation} I_n(x)=\frac{a^b}{2}\,e^{-|\lambda|b}(1+\operatorname{sgn}\lambda)+ \frac{a^b}{\pi}\operatorname{si}(|\lambda|T) \operatorname{sgn}\lambda-\Delta_2I_n. \end{equation} \tag{A.16} $$
We can write this equality as
$$ \begin{equation} I_n(x)=\begin{cases} 1, & x>n, \\ \dfrac{1}{2}\,, & x=n, \\ 0, & x<n \end{cases}\quad+\Delta I_n, \end{equation} \tag{A.17} $$
where
$$ \begin{equation} \Delta I_n=\frac{a^b}{\pi } \operatorname{si}\biggl(T\biggl|\ln\frac{x}{n}\biggr|\biggr) \operatorname{sgn}\biggl(\ln\frac{x}{n}\biggr)-\Delta_2I_n. \end{equation} \tag{A.18} $$
Using (A.17) and (A.18) we have
$$ \begin{equation} I_n(x)=1+R_1, \qquad n<x-1, \end{equation} \tag{A.19} $$
and
$$ \begin{equation} |I_n(x)|\leqslant R_2, \qquad n>x+1. \end{equation} \tag{A.20} $$
In these formulae
$$ \begin{equation} |R_i| \leqslant \frac{1}{\pi T}\biggl(\frac{x}{n}\biggr)^b \frac{1}{|\!\ln(x/n)|}\,,\qquad i=1,2. \end{equation} \tag{A.21} $$
We write the quantity $I(x)$ (see (A.7)) as
$$ \begin{equation} I(x)=\sum_{n<x-1}a_nI_n+\sum_{n>x+1}a_nI_n+ \sum_{|n-x|\leqslant 1}a_nI_n. \end{equation} \tag{A.22} $$
The first and second sums on the right of this equality can be estimated using (A.19)–(A.21).

It follows from (A.17) and (A.18) that for $0<a<2$

$$ \begin{equation} I_n(x) \leqslant C\,2^b \quad\text{if}\ \ |x-n|<1. \end{equation} \tag{A.23} $$
Hence
$$ \begin{equation} I(x)=\sum_{n<x-1}a_n+\Delta I(x), \end{equation} \tag{A.24} $$
where
$$ \begin{equation} \Delta I(x) \ll \max_{|x-n|<1}|a_n|\,2^b+S, \end{equation} \tag{A.25} $$
$$ \begin{equation} S=\sum_{|x-n|>1}|a_n|\,\frac{1}{\pi T}\biggl(\frac{x}{n}\biggr)^b \frac{1}{|\!\ln(x/n)|}\,. \end{equation} \tag{A.26} $$
Consider the quantity $S$, which we write as
$$ \begin{equation} S=S_1+S_2, \end{equation} \tag{A.27} $$
where
$$ \begin{equation} S_1=\begin{cases} \dfrac{x^b}{\pi T}\displaystyle\sum_{|x-n|<\beta x} \dfrac{|a_n|}{n^b}\,\dfrac{1}{|\!\ln(x/n)|}\,,& \beta x > 1, \\ 0,& \beta x \leqslant 1, \end{cases} \end{equation} \tag{A.28} $$
and
$$ \begin{equation} S_2=\frac{x^b}{\pi T}\sum_{|x-n|>\beta x}\frac{|a_n|}{n^b}\, \frac{1}{|\!\ln(x/n)|}\,. \end{equation} \tag{A.29} $$
In the sum $S_1$ the numbers $n$ are such that $(1-\beta)x<n<(1+\beta)x$, and in the sum $S_2$ the numbers $n$ are such that $n<(1-\beta)x$ or $n>(1+\beta)x$. Therefore,
$$ \begin{equation} |S_1| \ll \frac{\beta x}{T(1-\beta)^b} \max_{|x-n|<\beta x}|a_n|\frac{1}{\ln(1+\beta)}\,, \quad\text{if}\ \ \beta <1, \end{equation} \tag{A.30} $$
and
$$ \begin{equation} |S_2| \ll \frac{x^b}{T}\,\frac{1}{\ln(1+\beta)}\,A(b). \end{equation} \tag{A.31} $$
The required result (A.4), (A.5) now follows from (A.24), (A.27), (A.30), and (A.31). This proves Theorem A.1.
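The key computation (A.8)–(A.17), namely the truncated Perron integral for a single term, is easy to check numerically. The Python sketch below (illustrative only; the parameters $x$, $b$, $T$ are arbitrary) compares a direct quadrature of (A.8) with the main term of (A.17) corrected by the $\operatorname{si}$-term of (A.16); the two values agree up to the small quantity $\Delta_2I_n$ bounded in (A.15).

import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def I_n_quadrature(x, n, b, T):
    # (A.8) with s = b + it; by symmetry it equals (1/pi) * int_0^T Re[(x/n)^{b+it}/(b+it)] dt
    a = x / n
    f = lambda t: np.real(a ** b * np.exp(1j * t * np.log(a)) / (b + 1j * t))
    val, _ = quad(f, 0.0, T, limit=500)
    return val / np.pi

def I_n_formula(x, n, b, T):
    # main term of (A.17) plus the si-correction from (A.16); here si(y) = Si(y) - pi/2, see (A.13)
    a, lam = x / n, np.log(x / n)
    step = 1.0 if x > n else (0.5 if x == n else 0.0)
    if lam == 0.0:
        return step
    si = sici(abs(lam) * T)[0] - np.pi / 2
    return step + (a ** b / np.pi) * si * np.sign(lam)

x, b, T = 100.5, 1.2, 50.0
for n in (3, 50, 100, 101, 400):
    print(n, round(I_n_quadrature(x, n, b, T), 6), round(I_n_formula(x, n, b, T), 6))
# the two columns agree up to the error Delta_2 I_n, which by (A.15) is O(a^b * b / (pi * T))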

Consider the example of interest to us, when

$$ \begin{equation} a_n=r(n)\quad\text{and}\quad F(s)=4\zeta(s)L(s|\chi_4). \end{equation} \tag{A.32} $$
We have
$$ \begin{equation} \begin{gathered} \, L(b|\chi_4) \leqslant C_1(b), \\ \zeta(b) \leqslant \frac{C_2}{b-1}\qquad (b>1), \end{gathered} \end{equation} \tag{A.33} $$
and therefore
$$ \begin{equation} A(b)=C(b)\,\frac{1}{b-1}\,. \end{equation} \tag{A.34} $$
Assume that $1<b<c$ and $c<\beta<1$. For example, let $\beta=1/2$. We have $\overline{r}(\gamma x) \leqslant C(\gamma)\overline{r}(x)$, and now it follows from (A.4) that
$$ \begin{equation} A(x)=\sum_{n \leqslant x}r(n)=\frac{1}{2\pi i}\int_{b-iT}^{b+iT} F(s)x^s\,\frac{ds}{s}+R_1(x), \end{equation} \tag{A.35} $$
$$ \begin{equation} |R_1(x)| \ll \frac{x^b}{T}\,\frac{1}{b-1}+\overline{r}(x) \biggl(\frac{x}{T}+1\biggr). \end{equation} \tag{A.36} $$
Note that by (5.37) we are interested in the case when
$$ \begin{equation*} b=1+\frac{1}{\ln x}; \end{equation*} \notag $$
thus, $x^b \ll x$.

Appendix B. One general theorem

In this appendix we prove a theorem from which (under certain conditions) one can obtain a pointwise estimate for the quantity $|A(x)|$ from available estimates of its local moments

$$ \begin{equation} E(|A|^p)=\frac{1}{2H}\int_{T-H}^{T+H}|A(x)|^p\,dx. \end{equation} \tag{B.1} $$
It is assumed that this quantity exists. Fix $T$, $H$, $p$, $a(T)$, $\lambda(T)$, and $s(T)$ satisfying
$$ \begin{equation*} T \gg 1, \quad 0<H \leqslant T,\quad p \geqslant 1, \quad a(T) \geqslant 1, \quad 0< \lambda(T) <1,\quad\text{and}\quad s(T)>0. \end{equation*} \notag $$

Theorem B.1. For $x \geqslant 0$ let $A(x)$ be a quantity such that

$$ \begin{equation} |A(T)-A(x)| \leqslant \lambda(T)|A(T)| \end{equation} \tag{B.2} $$
for $x \in \Omega := \{x \geqslant 0\colon |x-T| \leqslant H\}$, provided that
$$ \begin{equation} |x-T| \leqslant C_1\,\frac{|A(T)|^s}{a(T)}\,. \end{equation} \tag{B.3} $$
Also assume that
$$ \begin{equation} E(|A|^p) \leqslant F_p \equiv F_p(T,H), \end{equation} \tag{B.4} $$
$$ \begin{equation} F_p \leqslant C_2\bigl(Ha(T)\bigr)^{p/s}, \end{equation} \tag{B.5} $$
and
$$ \begin{equation} |A(T)| \leqslant C_3\bigl(Ha(T)\bigr)^{1/s}. \end{equation} \tag{B.6} $$
Then for any such $T$
$$ \begin{equation} |A(T)| \leqslant \frac{C_4}{(1-\lambda)^\alpha} \bigl(F_pHa(T)\bigr)^{1/(p+s)}=: G,\quad\textit{where}\ \ \alpha=\frac{p}{s+p}<1, \end{equation} \tag{B.7} $$
and the constants $C_i=C_i(C_1,p,s)$, $i=2,3,4$, can be indicated explicitly.

Proof. Consider a neighbourhood $V$ of the point $x=T$:
$$ \begin{equation} V=\{x>0 \colon |x-T|<\delta\},\qquad \delta=C_1\,\frac{|A(T)|^s}{a(T)}\,. \end{equation} \tag{B.8} $$
By condition (B.6),
$$ \begin{equation} \delta \leqslant H \quad (C_1C_3^s \leqslant 1) \end{equation} \tag{B.9} $$
and therefore $V \subset \Omega$.

By condition (B.2),

$$ \begin{equation} |A(x)| \geqslant \bigl(1-\lambda(T)\bigr)|A(T)|,\qquad x \in V. \end{equation} \tag{B.10} $$
We consider the set
$$ \begin{equation} L_b=\{x \in \Omega \colon |A(x)| \leqslant b\} \end{equation} \tag{B.11} $$
and choose $b$ from the condition
$$ \begin{equation*} \frac{F_p}{b^p}=\frac{\delta}{4H}, \end{equation*} \notag $$
that is,
$$ \begin{equation} b=\biggl(\frac{4}{C_1}\biggr)^{1/p}(F_pH)^{1/p}|A(T)|^{-s/p}a^{1/p}(T). \end{equation} \tag{B.12} $$
Consider the set
$$ \begin{equation} \overline{L}_b=\Omega\setminus L_b=\{x \in \Omega\colon |A(x)|>b\}. \end{equation} \tag{B.13} $$
On the set $\mathbb{R}_+$ consider the probability measure $\mathcal{P}=\mathcal{P}(T,H)$ defined by the probability density
$$ \begin{equation} \rho(x)=\begin{cases} \dfrac{1}{2H}\,, & |x-T| \leqslant H, \\ 0, & |x-T|>H. \end{cases} \end{equation} \tag{B.14} $$
We have
$$ \begin{equation} \mathcal{P}\{V\}=\frac{\delta}{H}\,. \end{equation} \tag{B.15} $$
By Chebyshev’s inequality
$$ \begin{equation} \mathcal{P}\{\overline{L}_b\} \leqslant \frac{E(|A|^p)}{b^p} \leqslant \frac{F_p}{b^p}=\frac{\delta}{4H}. \end{equation} \tag{B.16} $$
Therefore,
$$ \begin{equation} \mathcal{P}\{L_b\} \geqslant 1-\frac{\delta}{4H}. \end{equation} \tag{B.17} $$
Hence
$$ \begin{equation} \mathcal{P}\{V\}+\mathcal{P}\{L_b\} \geqslant 1+\frac{3}{4}\,\frac{\delta}{H}\,. \end{equation} \tag{B.18} $$
This inequality implies that
$$ \begin{equation} \mathcal{P}\{V\cap L_b\}\geqslant \frac{3}{4}\,\frac{\delta}{H}\,. \end{equation} \tag{B.19} $$
Hence the set $V\cap L_b$ is non-empty. Let $\widetilde{x} \in V\cap L_b$. We have (see (B.10) and (B.11))
$$ \begin{equation} |A(\widetilde{x})| \geqslant (1-\lambda(T))|A(T)|\quad\text{and}\quad |A(\widetilde{x})| \leqslant b. \end{equation} \tag{B.20} $$
Therefore,
$$ \begin{equation} (1-\lambda(T))|A(T)| \leqslant b=\biggl(\frac{4}{C_1}\biggr)^{1/p} (F_pH)^{1/p}|A(T)|^{-s/p}a^{1/p}(T). \end{equation} \tag{B.21} $$
The required estimate (B.7) now follows from (B.21) for $C_4=(4/C_1)^{1/(p+s)}$. This proves Theorem B.1.
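The passage from (B.21) to (B.7) is just a rearrangement of exponents; a numerical spot check in Python (with arbitrarily chosen positive parameters, illustrative only):

C1, p, s, lam = 0.7, 2.0, 3.0, 0.4
M = 123.0                                     # stands for the product F_p * H * a(T)
c = (4.0 / C1) ** (1.0 / p)                   # the constant appearing in (B.12)
alpha = p / (p + s)

X = (c / (1.0 - lam)) ** alpha * M ** (1.0 / (p + s))    # the value at which (B.21) becomes an equality
lhs = (1.0 - lam) * X
rhs = c * X ** (-s / p) * M ** (1.0 / p)
print(abs(lhs - rhs) < 1e-9 * rhs)                        # True: X solves (B.21) with equality
G = (4.0 / C1) ** (1.0 / (p + s)) / (1.0 - lam) ** alpha * M ** (1.0 / (p + s))
print(abs(X - G) < 1e-9 * G)                              # True: X has the form (B.7) with C_4 = (4/C_1)^{1/(p+s)}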

The above estimate (B.7) is meaningful (see (B.6)) for $G<C_3(Ha(T))^{1/s}$, that is, for $C_2 \leqslant C_1^{-1}(C_3/4)^{p+s}$.

All the ideas of the proof of Theorem B.1 can be found in [46], where the case $A(x)=P(x)$ was considered and a more general method for defining probability measures on $\mathbb{R}_+$ was discussed.


Bibliography

1. E. Landau, Vorlesungen über Zahlentheorie, v. 2, Hierzel, Leipzig, 1927, viii+308 pp.  mathscinet  zmath
2. E. C. Titchmarsh, The theory of the Riemann zeta function, Clarendon Press, Oxford, 1951, vi+346 pp.  mathscinet  zmath
3. A. Ivić, The Riemann zeta-function. The theory of the Riemann zeta-function with applications, Wiley-Intersci. Publ., John Wiley & Sons, Inc., New York, 1985, xvi+517 pp.  mathscinet  zmath
4. A. A. Karatsuba, Basic analytic number theory, Springer-Verlag, Berlin, 1993, xiv+222 pp.  crossref  mathscinet  zmath
5. E. Krätzel, Lattice points, Math. Appl. (East European Ser.), 33, Kluwer Acad. Publ., Dordrecht, 1988, 320 pp.  mathscinet  zmath
6. Kai-Man Tsang, “Recent progress on the Dirichlet divisor problem and the mean square of Riemann zeta function”, Sci. China Math., 53:9 (2010), 2561–2572  crossref  mathscinet  zmath
7. D. A. Popov, “Circle problem and the spectrum of the Laplace operator on closed 2-manifolds”, Russian Math. Surveys, 74:5 (2019), 909–925  mathnet  crossref  mathscinet  zmath  adsnasa
8. A. G. Postnikov, Introduction to analytic number theory, Transl. Math. Monogr., 68, Amer. Math. Soc., Providence, RI, 1988, vi+320 pp.  crossref  mathscinet  zmath
9. E. Bombieri and H. Iwaniec, “On the order of $\zeta(\frac{1}{2}+it)$”, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 13:3 (1986), 449–472  mathscinet  zmath
10. S. W. Graham and G. Kolesnik, Van der Corput's method of exponential sums, London Math. Soc. Lecture Note Ser., 126, Cambridge Univ. Press, Cambridge, 1991, vi+120 pp.  crossref  mathscinet  zmath
11. M. N. Huxley, Area, lattice points, and exponential sums, London Math. Soc. Monogr. (N. S.), 13, The Clarendon Press, Oxford Univ. Press, New York, 1996, xii+494 pp.  mathscinet  zmath
12. G. Kolesnik, “On the method of exponential pairs”, Acta Arith., 45:2 (1985), 115–143  crossref  mathscinet  zmath
13. H. Iwaniec and C. J. Mozzochi, “On the divisor and circle problems”, J. Number Theory, 29:1 (1988), 60–93  crossref  mathscinet  zmath
14. Xiaochun Li and Xuerui Yang, An improvement on Gauss's circle problem and Dirichlet's divisor problem, 2023, 32 pp., arXiv: 2308.14859v1
15. G. Voronoï, “Sur le développement, à l'aide des fonctions cylindriques, des sommes doubles $\sum f(pm^2+2qmn+rn^2)$, où $pm^2+2qmn+rn^2$ est une forme positive à coefficients entiers”, Verhandlungen des dritten internationalen Mathematiker-Kongresses (Heidelberg 1904), Teubner, Leipzig, 1905, 241–245  zmath
16. G. H. Hardy, “On the expression of a number as the sum of two squares”, Quart. J. Pure Appl. Math., 46 (1915), 263–283  zmath
17. K. F. Ireland and M. I. Rosen, A classical introduction to modern number theory, Grad. Texts in Math., 84, Springer-Verlag, New York–Berlin, 1982, xiii+341 pp.  mathscinet  zmath
18. E. Hecke, Vorlesungen über die Theorie der algebraischen Zahlen, Akad. Verlagsges., Leipzig, 1923, viii+265 pp.  mathscinet  zmath
19. S. Bochner, Lectures on Fourier integrals, With an author's supplement on monotonic functions, Stieltjes integrals, and harmonic analysis, Ann. of Math. Stud., 42, Princeton Univ. Press, Princeton, NJ, 1959, viii+333 pp.  crossref  mathscinet  zmath
20. G. N. Watson, A treatise on the theory of Bessel functions, 2nd ed., Cambridge Univ. Press, Cambridge, England; The Macmillan Co., New York, 1944, vi+804 pp.  mathscinet  zmath
21. E. C. Titchmarsh, Introduction to the theory of Fourier integrals, Oxford, Clarendon Press, 1937, x+390 pp.  mathscinet  zmath
22. G. H. Hardy and M. Riesz, The general theory of Dirichlet's series, Cambridge Tracts in Math. and Math. Phys., 18, Cambridge Univ. Press, Cambridge, 1964, vii+78 pp.  mathscinet  zmath
23. D. A. Popov, “Spectrum of the Laplace operator on closed surfaces”, Russian Math. Surveys, 77:1 (2022), 81–97  mathnet  crossref  mathscinet  zmath  adsnasa
24. G. H. Hardy and E. Landau, “The lattice points of a circle”, Proc. Roy. Soc. London Ser. A, 105:731 (1924), 244–258  crossref  zmath
25. K. Prachar, Primzahlverteilung, Springer-Verlag, Berlin–Göttingen–Heidelberg, 1957, x+415 pp.  mathscinet  zmath
26. K. Chandrasekharan, Arithmetical functions, Grundlehren Math. Wiss., 167, Springer-Verlag, New York–Berlin, 1970, xi+231 pp.  crossref  mathscinet  zmath
27. M. Abramowitz and I. A. Stegun (eds.), Handbook of mathematical functions with formulas, graphs and mathematical tables, National Bureau of Standards Applied Mathematics Series, 55, Superintendent of Documents, U.S. Government Printing Office, Washington, DC, 1964, xiv+1046 pp.  mathscinet  zmath
28. A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Higher transcendental functions, Based, in part, on notes left by H. Bateman, v. 2, McGraw-Hill Book Company, Inc., New York–Toronto–London, 1953, xvii+396 pp.  mathscinet  zmath  adsnasa
29. I. S. Gradshteyn and I. M. Ryzhik, Table of integrals, series, and products, 7th ed., Elsevier/Academic Press, Amsterdam, 2007, xlviii+1171 pp.  mathscinet  zmath
30. W. G. Nowak, “Lattice points of a circle: an improved mean-square asymptotics”, Acta Arith., 113:3 (2004), 259–272  crossref  mathscinet  zmath
31. Yuk-Kam Lau and Kai-Man Tsang, “On the mean square formula of the error term in the Dirichlet divisor problem”, Math. Proc. Cambridge Philos. Soc., 146:2 (2009), 277–287  crossref  mathscinet  zmath  adsnasa
32. H. L. Montgomery and R. C. Vaughan, “Hilbert's inequality”, J. London Math. Soc. (2), 8 (1974), 73–82  crossref  mathscinet  zmath
33. Kai-Man Tsang, “Higher-power moments of $\Delta(x)$, $E(t)$ and $P(x)$”, Proc. London Math. Soc. (3), 65:1 (1992), 65–84  crossref  mathscinet  zmath
34. Wenguang Zhai, “On higher-power moments of $\Delta(x)$”, Acta Arith., 112:4 (2004), 367–395  crossref  mathscinet  zmath; II, 114:1 (2004), 35–54  crossref  mathscinet  zmath; III, 118:3 (2005), 263–281  crossref  mathscinet  zmath
35. A. Ivić, “Large values of the error term in the divisor problem”, Invent. Math., 71:3 (1983), 513–520  crossref  mathscinet  zmath  adsnasa
36. D. R. Heath-Brown, “The distribution and moments of the error term in the Dirichlet divisor problem”, Acta Arith., 60:4 (1992), 389–415  crossref  mathscinet  zmath
37. G. H. Hardy, “On Dirichlet's divisor problem”, Proc. London Math. Soc. (2), 15 (1916), 1–25  crossref  mathscinet  zmath
38. G. H. Hardy, “The average order of the arithmetical functions $P(x)$ and $\Delta(x)$”, Proc. London Math. Soc. (2), 15 (1916), 192–213  crossref  mathscinet  zmath
39. K. S. Gangadharan, “Two classical lattice point problems”, Proc. Cambridge Philos. Soc., 57:4 (1961), 699–721  crossref  mathscinet  zmath
40. K. Soundararajan, “Omega results for the divisor and circle problems”, Int. Math. Res. Not., 2003:36 (2003), 1987–1998  crossref  mathscinet  zmath
41. D. R. Heath-Brown and K. Tsang, “Sign changes of $E(t)$, $\Delta(x)$, and $P(x)$”, J. Number Theory, 49:1 (1994), 73–83  crossref  mathscinet  zmath
42. M. Kac, Statistical independence in probability, analysis and number theory, Carus Math. Monogr., 12, John Wiley and Sons, Inc., New York, 1959, xiv+93 pp.  mathscinet  zmath
43. Yuk-Kam Lau and Kai-Man Tsang, “Moments over short intervals”, Arch. Math. (Basel), 84:3 (2005), 249–257  crossref  mathscinet  zmath
44. P. M. Bleher, Zheming Cheng, F. J. Dyson, and J. L. Lebowitz, “Distribution of the error term for the number of lattice points inside a shifted circle”, Comm. Math. Phys., 154:3 (1993), 433–469  crossref  mathscinet  zmath  adsnasa
45. Yuk-Kam Lau, “On the tails of the limiting distribution function of the error term in the Dirichlet divisor problem”, Acta Arith., 100:4 (2001), 329–337  crossref  mathscinet  zmath
46. D. A. Popov, “Bounds and behaviour of the quantities $P(x)$ and $\Delta(x)$ on short intervals”, Izv. Math., 80:6 (2016), 1213–1230  mathnet  crossref  mathscinet  zmath  adsnasa
47. A. Ivić and P. Sargos, “On the higher moments of the error term in the divisor problem”, Illinois J. Math., 51:2 (2007), 353–377  crossref  mathscinet  zmath
48. L. Hörmander, The analysis of linear partial differential operators, v. I, Grundlehren Math. Wiss., 256, Distribution theory and Fourier analysis, Springer-Verlag, Berlin, 1983, ix+391 pp.  crossref  mathscinet  zmath
49. O. Robert and P. Sargos, “Three-dimensional exponential sums with monomials”, J. Reine Angew. Math., 2006:591 (2006), 1–20  crossref  mathscinet  zmath
50. M. Jutila, “On the divisor problem for short intervals”, Ann. Univ. Turku. Ser. A I, 1984, no. 186, 23–30  mathscinet  zmath
51. A. Ivić and Wenguang Zhai, “On the Dirichlet divisor problem in short intervals”, Ramanujan J., 33:3 (2014), 447–465  crossref  mathscinet  zmath
52. M. A. Korolev and D. A. Popov, “On Jutila's integral in the circle problem”, Izv. Math., 86:3 (2022), 413–455  mathnet  crossref  mathscinet  zmath  adsnasa
53. A. Ivić, “On the divisor function and the Riemann zeta-function in short intervals”, Ramanujan J., 19:2 (2009), 207–224  crossref  mathscinet  zmath
54. D. A. Popov and D. V. Sushko, “Numerical investigation of the properties of remainder in Gauss's circle problem”, Comput. Math. Math. Phys., 62:12 (2022), 2008–2022  mathnet  crossref  mathscinet  zmath  adsnasa
