Sbornik: Mathematics
Sbornik: Mathematics, 2025, Volume 216, Issue 2, Pages 239–256
DOI: https://doi.org/10.4213/sm10116e
(Mi sm10116)
 

This article is cited in 1 scientific paper.

Uniform rational approximation of the odd and even Cauchy transforms

T. S. Mardvilko

Faculty of Mechanics and Mathematics, Belarusian State University, Minsk, Belarus
Abstract: Best uniform rational approximations of the odd and even Cauchy transforms are considered. The results obtained form a basis for finding the weak asymptotics of best uniform rational approximations of the odd extension of the function $x^{\alpha}$, $x\in[0,1]$, to $[-1,1]$ for all $\alpha\in(0,+\infty)\setminus(2\mathbb N-1)$, which complements some results due to Vyacheslavov. The strong asymptotics of the best rational approximations of this function on $[0,1]$ and its even extension to $[-1,1]$ were found by Stahl. It follows from these results that for $\alpha\in(0,+\infty)\setminus\mathbb N$ the best rational approximations of the even and odd extensions of the above function show the same weak asymptotic behaviour.
Bibliography: 29 titles.
Keywords: best rational approximations, power function, Cauchy transform, even and odd extensions of a function, Padé approximations.
Funding: this research was carried out with the support of the National Academy of Sciences of Belarus in the framework of the State Programme of Scientific Research “Convergence — 2025” (project no. ГР 20211888).
Received: 11.05.2024 and 06.09.2024
Published: 16.04.2025
Document Type: Article
MSC: Primary 41A20, 41A25, 41A50; Secondary 30E20
Language: English
Original paper language: Russian

§ 1. Introduction

1.1.

We use the following notation: ${\mathcal P}_n$ is the set of real algebraic polynomials of degree at most $n$, $n\in\mathbb{N}_0=\mathbb{N}\cup\{0\}$, and

$$ \begin{equation*} {\mathcal R}_{n,m}=\biggl\{\frac{p}{q}\colon p\in{\mathcal P}_n,q\in{\mathcal P}_m, q\not\equiv0\biggr\} \end{equation*} \notag $$
is the set of real rational functions whose numerator and denominator have degree at most $n$ and $m$, respectively.

Let $C[a,b]$ denote the space of continuous real-valued functions on the closed interval $[a,b]$. For $f\in C[a,b]$ set

$$ \begin{equation*} \|f\|_{[a,b]}=\max\{|f(x)|\colon x\in [a,b]\}. \end{equation*} \notag $$
We define the best rational approximation of $f\in C[a,b]$ by the set ${\mathcal R}_{n,m}$:
$$ \begin{equation} E_{n,m}(f;[a,b])=\inf\bigl\{\|f-r\|_{[a,b]}\colon r\in{\mathcal R}_{n,m}\bigr\}. \end{equation} \tag{1.1} $$
In particular, $R_n(f;[a,b])=E_{n,n}(f;[a,b])$ is the best approximation of $f$ on $[a,b]$ by rational functions of degree at most $n$, and $E_n(f;[a,b])=E_{n,0}(f;[a,b])$ is the best polynomial approximation of $f$ on $[a,b]$.

The best rational approximations of the functions $x^{\alpha}$, $x\in[0,1]$, and $|x|^{\alpha}$, $x\in[-1,1]$, were the subject of papers by Newman [1], Gonchar [2], Bulanov [3], Tzimbalario [4], Vyacheslavov [5]–[7], Stahl [8], [9] and other authors. In the recent paper [10] Rovba and Potseiko used the classical method of interpolation at the zeros of Chebyshev–Markov rational functions to investigate the rate of convergence of the best rational approximation of $|x|^{\alpha}$, $\alpha\in(0,+\infty)\setminus2\mathbb N$, on $[-1,1]$. The sharpest results are due to Stahl (see [8] and [9], Ch. 8, § 5), namely, he found the strong asymptotics for $\alpha\in(0,+\infty)\setminus\mathbb{N}$:

$$ \begin{equation*} R_n\bigl(x^{\alpha};[0,1]\bigr)\sim 2^{2\alpha+2}|{\sin\pi\alpha}| \exp(-2\pi\sqrt{\alpha n}), \qquad n\to\infty. \end{equation*} \notag $$

Stahl also showed that for $\alpha\in(0,+\infty)\setminus2\mathbb{N}$ the following strong asymptotic formula holds for the function $|x|^{\alpha}$, $x\in[-1,1]$, the even extension of $x^{\alpha}$, $x\in[0,1]$:

$$ \begin{equation} R_n\bigl(|x|^{\alpha};[-1,1]\bigr)\sim 2^{\alpha+2}\biggl|\sin\frac{\pi\alpha}{2}\biggr| \exp(-\pi\sqrt{\alpha n}), \qquad n\to\infty. \end{equation} \tag{1.2} $$
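As a numerical sanity check (an illustration, not part of the original argument): the substitution $x\mapsto x^2$ turns the even function $|x|^{\alpha}$ on $[-1,1]$ into $x^{\alpha/2}$ on $[0,1]$ and halves the degree, so the two strong asymptotic formulas above should agree under $n\mapsto 2n$, $\alpha\mapsto\alpha/2$ — and they do, exactly:

```python
import math

def stahl_half_line(alpha, n):
    """Stahl's asymptotics of R_n(x^alpha; [0,1])."""
    return 2.0 ** (2 * alpha + 2) * abs(math.sin(math.pi * alpha)) \
        * math.exp(-2 * math.pi * math.sqrt(alpha * n))

def stahl_symmetric(alpha, n):
    """Stahl's asymptotics of R_n(|x|^alpha; [-1,1]), formula (1.2)."""
    return 2.0 ** (alpha + 2) * abs(math.sin(math.pi * alpha / 2)) \
        * math.exp(-math.pi * math.sqrt(alpha * n))

# Sample parameters (arbitrary choices for illustration).
alpha, n = 0.6, 50
lhs = stahl_symmetric(alpha, 2 * n)     # even extension, degree 2n
rhs = stahl_half_line(alpha / 2, n)     # x^{alpha/2} on [0,1], degree n
print(lhs, rhs)
```

Both sides reduce to $2^{\alpha+2}|\sin(\pi\alpha/2)|\exp(-\pi\sqrt{2\alpha n})$, consistent with the substitution identity (2.4) proved in § 2.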

Bernstein (see [11]) paid much attention to the analogous problem in the case of polynomial approximations. In particular, he showed that if $\alpha>0$ and ${\alpha}/{2}\notin\mathbb N$, then

$$ \begin{equation*} E_n\bigl(|x|^{\alpha};[-1,1]\bigr)\sim \frac{\mu(\alpha)}{n^{\alpha}}, \qquad n\to\infty, \end{equation*} \notag $$
where $\mu(\alpha)>0$ is a quantity depending on $\alpha$. He also showed that $\mu(\alpha)$ is equal to the best real approximation of $|x|^{\alpha}$ on $\mathbb R$ by entire functions of exponential type at most one.

For best polynomial approximations of the function $|x|^{\alpha}\operatorname{sign} x$, $x\in[-1,1]$, the odd extension of $x^{\alpha}$, $x\in[0,1]$, it follows from results due to Bernstein [11] and Ibragimov [12] that if $\alpha>0$ and $(\alpha+1)/{2}\notin\mathbb N$, then

$$ \begin{equation*} E_n\bigl(|x|^{\alpha}\operatorname{sign} x;[-1,1]\bigr)\sim \frac{\lambda(\alpha)}{n^{\alpha}}, \qquad n\to\infty, \end{equation*} \notag $$
where $\lambda(\alpha)>0$ is equal to the best approximation of $|x|^{\alpha}\operatorname{sign} x$ on $\mathbb R$ by entire functions of exponential type at most one. Further results in this direction can be found in [13] and [14].

1.2.

Our main result here is Theorem 1, where we describe the weak asymptotic behaviour of the best uniform rational approximations of the function $|x|^{\alpha}\operatorname{sign} x$, $x\in[-1,1]$. We say that two infinitesimal sequences $\{a_n\}_{n=1}^{\infty}$ and $\{b_n\}_{n=1}^{\infty}$ are weakly equivalent (written $a_n\asymp b_n$, $n\in\mathbb{N}$) if there exist positive constants $c_1\geqslant c_2$ such that $c_1\geqslant{a_n}/{b_n}\geqslant c_2$ for all $n\in\mathbb{N}$.
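A minimal numerical illustration of the $\asymp$ notation, with hypothetical sequences chosen for this sketch (they are not taken from the paper):

```python
import math

# b_n is a_n times a bounded oscillating factor in [1, 3], so a_n and b_n
# are weakly equivalent with constants c_1 = 3 and c_2 = 1.
def a(n):
    return math.exp(-math.pi * math.sqrt(n))

def b(n):
    return (2.0 + math.sin(n)) * math.exp(-math.pi * math.sqrt(n))

ratios = [b(n) / a(n) for n in range(1, 1001)]
print(min(ratios), max(ratios))   # both lie in [c_2, c_1] = [1, 3]
```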

Theorem 1. Let $\alpha\in(0,+\infty)$ be distinct from an odd integer. Then the following weak asymptotics hold:

$$ \begin{equation*} R_n\bigl(|x|^{\alpha}\operatorname{sign}x;[-1,1]\bigr)\asymp \exp(-\pi\sqrt{\alpha n}), \qquad n\in\mathbb{N}, \end{equation*} \notag $$
where the positive quantities implicit in the symbol $\asymp$ only depend on $\alpha$.

The lower bound in Theorem 1 is due to Vyacheslavov [7]. He also obtained the upper bound in Theorem 1 for $\alpha\in\mathbb{Q}$ (see [5] and [7]). For arbitrary $\alpha>0$, $(\alpha+ 1)/{2}\notin\mathbb N$, the reader can find weaker estimates for uniform rational approximations of the function $|x|^{\alpha}\operatorname{sign} x$ for $x\in[-1,1]$ in [2] and [15].

It follows from Theorem 1 and Stahl’s result in (1.2) that the best rational approximations of the even and odd extensions of the power function $x^{\alpha}$, $x\in[0,1]$, show the same weak asymptotic behaviour for $\alpha\in(0,+\infty)\setminus\mathbb N$.

1.3.

Note that for arbitrary functions $f\in C[0,1]$, $f(0)=0$, their even and odd extensions to $[-1,1]$ can have distinct orders of best uniform rational approximations, which can, but need not coincide with $R_n(f;[0,1])$. For example, if

$$ \begin{equation*} f(x)=\biggl(\log\frac{a}{x}\biggr)^{-\beta}, \quad 0<x\leqslant1; \qquad f(0)=0, \end{equation*} \notag $$
where $\beta>0$, $a>1$, then the following order estimates hold:
$$ \begin{equation*} R_n(f;[0,1])\asymp R_n(f(|x|);[-1,1])\asymp\frac{1}{n^{1+\beta}}, \qquad n\geqslant1, \end{equation*} \notag $$
and
$$ \begin{equation*} R_n(f(|x|)\operatorname{sign} x;[-1,1])\asymp\frac{1}{n^{\beta}}, \qquad n\geqslant1. \end{equation*} \notag $$
For details, see [16].

However, as shown in [16], the best uniform rational approximation of the even extension of a function to $[-1,1]$ can be estimated in terms of its best uniform rational approximations on $[0,1]$. In particular, the following equivalence holds for ${n\in\mathbb N}$:

$$ \begin{equation*} R_n(f(|x|);[-1,1])=O(n^{-\alpha}) \quad \Longleftrightarrow\quad R_n(f;[0,1])=O(n^{-\alpha}). \end{equation*} \notag $$

On the other hand best uniform rational approximations of the odd extension of a function to $[-1,1]$ cannot in general be estimated from above by the best uniform rational approximations of $f$ on $[0,1]$. As shown in [15], an arbitrarily rapid decrease of the best uniform rational approximations of $f$ on $[0,1]$ can come alongside an arbitrarily slow decrease of the best uniform rational approximations of the odd extension of $f$ to $[-1,1]$.

In [17] we considered the best uniform polynomial approximations of the even and odd extensions of a function to $[-1,1]$.

§ 2. Rational approximations of the even and odd Cauchy transforms

2.1.

Let $\mu$ be a positive Borel measure with compact support $\operatorname{supp}\mu\subset\mathbb R$. Then the Cauchy transform of $\mu$, that is, the function

$$ \begin{equation*} \widehat{\mu}(z) = \int\frac{d\mu(t)}{t-z}, \qquad z\in\mathbb C\setminus\operatorname{supp}\mu, \end{equation*} \notag $$
is called a Markov function. The best rational approximations of Markov functions were considered by Gonchar [18], Ganelius [19], Andersson [20], Stahl and Totik [21], Pekarskii [22] and other authors.

Let $\mu$ be an absolutely continuous increasing function on $[0,1]$, $\mu(0)=0$, and let

$$ \begin{equation} 0<\int_{0}^{1}\frac{d\mu(t)}{t}<\infty. \end{equation} \tag{2.1} $$
Consider the odd and even extensions of $\mu$ to $[-1,1]$,
$$ \begin{equation*} \mu^-(t)=\mu(|t|)\operatorname{sign} t\quad\text{and}\quad \mu^+(t)=\mu(|t|), \end{equation*} \notag $$
respectively.

For $x\in[-1,1]$ consider the odd function

$$ \begin{equation} f^-(x)=\frac{1}{2}\int_{-1}^1\frac{d\mu^-(t)}{x-it}=x\int_0^1\frac{d\mu(t)}{t^2+x^2} \end{equation} \tag{2.2} $$
and the even function
$$ \begin{equation} g^+(x)=\frac{1}{2i}\int_{-1}^1\frac{d\mu^+(t)}{x-it}=\int_0^1\frac{t\,d\mu(t)}{t^2+x^2}. \end{equation} \tag{2.3} $$

If (2.1) holds, then the functions in (2.2) and (2.3) are nontrivial and continuous on $[-1,1]$.

If $r^*\in{\mathcal R}_{n,m}$ delivers the infimum in (1.1), then $r^*$ is called the best rational approximation from ${\mathcal R}_{n,m}$. It is known (see [9], Ch. 7, § 2) that such an approximation $r^*$ exists and is uniquely defined.

It follows from the Chebyshev criterion of best rational approximations that if $g(x)\in C[0,1]$ and $r^*_{n,m}(x)\in{\mathcal R}_{n,m}$ is its best rational approximation, then $r^*_{n,m}(x^2)\in{\mathcal R}_{2n,2m}$ is the best rational approximation to $g(x^2)\in C[-1,1]$. Therefore,

$$ \begin{equation} E_{2n,2m}\bigl(g(x^2);[-1,1]\bigr)=E_{n,m}\bigl(g(x);[0,1]\bigr). \end{equation} \tag{2.4} $$

For an even function $h\in C[-1,1]$ the best rational approximation is also even (see [9], Ch. 7, § 2), so in examining the best rational approximations of $h$ we can limit ourselves to $E_{2n,2m}(h;[-1,1])$. In a similar way, for odd $h \in C[-1,1]$ we limit ourselves to $E_{2n+1,2m}(h;[-1,1])$.

We let $c, c_1, c_2,\dots$ denote positive quantities. We indicate the parameters on which they depend in parentheses where necessary.

2.2. Approximations of the function $f^-$

To find $E_{2n+1,2m}(f^-;[-1,1])$ we use the method in [22] based on the use of multipoint Padé approximations, as proposed by Gonchar in [18].

For $n, m\in\mathbb{N}_0$ let $\Omega_{n+m+1} = \{x_1, x_2,\dots, x_{n+m+1}\}$ denote a set of pairwise distinct points $x_k\in(0,1]$, and let

$$ \begin{equation*} \omega_{n+m+1}(x)=\prod_{k=1}^{n+m+1}(x-x_k^2). \end{equation*} \notag $$

Lemma 1. Let $f^-(x)$ be a function of the form (2.2). Then the system of functions

$$ \begin{equation} x^{2m-2}f^-(x), x^{2m-4}f^-(x), \dots, x^2f^-(x), f^-(x), x, x^3, \dots, x^{2n+1} \end{equation} \tag{2.5} $$
is Chebyshev on $\Omega_{n+m+1}$, where $n,m\in\mathbb{N}_0$ and $n\geqslant m-1$. Here for $m=0$ system (2.5) is meant to consist of $x, x^3, \dots, x^{2n+1}$.

Proof. For $m=0$ the required result holds because a nontrivial polynomial $p\in{\mathcal P}_n$ can have at most $n$ zeros on $(0,1]$. Therefore, $xp(x^2)$ can also have at most $n$ zeros on $(0,1]$. Hence $x, x^3, \dots, x^{2n+1}$ is a Chebyshev system on $\Omega_{n+1}$. Let $m\geqslant1$. Suppose that the system is not Chebyshev. In this case there exist polynomials $q\in {\mathcal P}_{m-1}$, $m\in\mathbb{N}$, and $p\in{\mathcal P}_n$, $n\in\mathbb{N}_0$, at least one of which is not identically zero, such that
$$ \begin{equation} 2q(x_k^2)f^-(x_k) + x_k p(x_k^2)=0, \qquad x_k\in \Omega_{n+m+1}. \end{equation} \tag{2.6} $$
Note that $q\not\equiv 0$, because otherwise it would follow from (2.6) that $p\equiv 0$.

Consider the polynomials

$$ \begin{equation*} Q(x)=q^2(x^2)\quad\text{and} \quad P(x)=xp(x^2)q(x^2) \end{equation*} \notag $$
and the function
$$ \begin{equation*} \varphi(z) = (2Qf^- + P)(z). \end{equation*} \notag $$

Note that, as the Markov function $f^-(x)$ is odd, it follows from (2.6) that

$$ \begin{equation} \varphi(z)=0 \quad\text{for } z=\pm x_k \quad\text{and} \quad x_k\in \Omega_{n+m+1}. \end{equation} \tag{2.7} $$

We represent $\varphi(z)$ in the form

$$ \begin{equation} \varphi(z)=\int_{-1}^{1}\frac{Q(it)}{z-it}\,d\mu^-(t) + \int_{-1}^{1}\frac{Q(z) - Q(it)}{z-it}\,d\mu^-(t) +P(z). \end{equation} \tag{2.8} $$

Let $f$ be an analytic function in a neighbourhood of the set $\overline{G} = G\cup \partial G$, where $\partial G$ is the closed rectifiable contour bounding the domain $G$, and $\{\pm x_k\}\subset G$. Let $\Lambda (f)$ denote the divided difference of $f$ on the points $\pm x_k$, $x_k\in \Omega_{n+m+1}$. This divided difference can be represented in the form (see [23], Ch. 1, § 4)

$$ \begin{equation} \Lambda (f) = \frac{1}{2\pi i}\int_{\partial G}\frac{f(z)\,dz}{\omega_{n+m+1}(z^2)}, \end{equation} \tag{2.9} $$
where the integral is taken in the positive direction relative to $G$.

In what follows we assume that $[-i,i]\cap\overline{G}=\varnothing$ and set

$$ \begin{equation*} F(z) = \frac{1}{(z-it)\omega_{n+m+1}(z^2)}. \end{equation*} \notag $$

Applying the residue theorem to $\overline{\mathbb{C}}\setminus\overline{G}$, from (2.9) we obtain

$$ \begin{equation} \Lambda\biggl(\frac{1}{z-it}\biggr)=-\Bigl(\operatorname*{Res}_{\infty}F(z) +\operatorname*{Res}_{it}F(z)\Bigr) =\frac{(-1)^{n+m}}{\prod_{k=1}^{n+m+1}(t^2+x_k^2)}. \end{equation} \tag{2.10} $$
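The residue formula (2.10) can be checked numerically by computing the classical divided difference of $1/(z-it)$ over the symmetric points $\pm x_k$ directly. The sketch below uses arbitrary sample values of $x_k$ and $t$ (they are not taken from the paper):

```python
import math

def divided_difference(f, pts):
    """Classical recursive divided difference f[pts[0], ..., pts[-1]]."""
    table = [f(p) for p in pts]
    for level in range(1, len(pts)):
        table = [(table[j + 1] - table[j]) / (pts[j + level] - pts[j])
                 for j in range(len(table) - 1)]
    return table[0]

x = [0.3, 0.7]                 # the set Omega_{n+m+1}, here n + m + 1 = 2
t = 0.5                        # sample value of t
pts = x + [-xk for xk in x]    # the interpolation points +-x_k

lam = divided_difference(lambda z: 1.0 / (z - 1j * t), pts)

# Right-hand side of (2.10): (-1)^{n+m} / prod(t^2 + x_k^2), with n+m = 1.
expected = (-1) ** (len(x) - 1) / math.prod(t**2 + xk**2 for xk in x)
print(lam, expected)
```

The divided difference comes out real (its imaginary part vanishes up to rounding), as the contour-integral representation (2.9) predicts for symmetric points.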

If $m=1$, then the second term in (2.8) vanishes and the third is a polynomial of degree at most $2n+1$. On the other hand, if $m\geqslant2$, then the second term in (2.8) is a polynomial of degree at most $4m-5\leqslant2n+2m-1$ and the third is a polynomial of degree at most $2n+2m-1$. Hence (see [24], Ch. 3, § 4) for $m\geqslant1$ the divided differences of these terms vanish.

Thus, from (2.7), (2.8) and (2.10) we obtain

$$ \begin{equation*} \begin{aligned} \, 0&=\Lambda (\varphi)=(-1)^{n+m}\int_{-1}^{1}\frac{Q(it)\,d\mu^-(t)} {\prod_{k=1}^{n+m+1}(t^2+x_k^2)} \\ &= (-1)^{n+m}\int_{-1}^{1}\frac{ q^2(-t^2)\,d\mu^-(t)}{\prod_{k=1}^{n+m+1}(t^2+x_k^2)}\ne0. \end{aligned} \end{equation*} \notag $$

This contradiction shows that (2.5) is a Chebyshev system on $\Omega_{n+m+1}$.

Lemma 1 is proved.

Theorem 2. Let $n,m \in \mathbb{N}_0$, and let $n\,{\geqslant}\, m-1$. If $R_{n+m,2m} = {P_{n+m}}/{Q_{2m}}$, where $P_{n+m}\in {\mathcal P}_{n+m}$, $Q_{2m}\in {\mathcal P}_{2m}$, and if $xR_{n+m,2m}(x^2)$ interpolates $f^-(x)$ at the points $x_k\in\Omega_{n+m+1}$, then the following equality holds:

$$ \begin{equation*} f^-(x) - xR_{n+m,2m}(x^2)=xr(x^2) \int_0^1\frac{d\mu(t)}{r(-t^2)(t^2+x^2)}, \qquad x\in[0,1], \end{equation*} \notag $$
where $r(\cdot)={\omega_{n+m+1}(\cdot)}/{Q_{2m}(\cdot)}\in{\mathcal R}_{n+m+1,2m}$.

Proof. For each polynomial $Q_{4m}\not\equiv 0$, by Hermite’s interpolation formula (see [25], Ch. 9, § 11) there exists a unique polynomial $P_{2n+2m+1}\in {\mathcal P}_{2n+2m+1}$ such that
$$ \begin{equation} f^-(z)Q_{4m}(z)-P_{2n+2m+1}(z)=0, \qquad z=\pm x_k, \quad x_k\in\Omega_{n+m+1}. \end{equation} \tag{2.11} $$

In addition, the following equality holds for $z\in\mathbb{C}\setminus[-i,i]$:

$$ \begin{equation*} f^-(z)Q_{4m}(z)-P_{2n+2m+1}(z)=\frac{\omega_{n+m+1}(z^2)}{2\pi i}\int_{\partial G}\frac{f^-(\xi)Q_{4m}(\xi)\,d\xi}{\omega_{n+m+1}(\xi^2)(\xi-z)}, \end{equation*} \notag $$
where $G$ is a bounded domain such that $\{z, \pm x_1, \pm x_2,\dots,\pm x_{n+m+1}\}\subset G$, $[-i,i]\cap \overline{G} = \varnothing$ and its boundary $\partial G$ is rectifiable.

Using Fubini’s formula we obtain

$$ \begin{equation*} \begin{aligned} \, &f^-(z)Q_{4m}(z)-P_{2n+2m+1}(z) \\ &\qquad=\frac{\omega_{n+m+1}(z^2)}{4\pi i}\int_{-1}^{1}\int_{\partial G}\frac{Q_{4m}(\xi)\,d\xi}{\omega_{n+m+1}(\xi^2)(\xi-z)(\xi-it)}\,d\mu^{-}(t). \end{aligned} \end{equation*} \notag $$

Let $I$ denote the inner integral in this formula and $F$ denote the integrand. We calculate $I$ using residues:

$$ \begin{equation*} I=-2\pi i\Bigl(\operatorname*{Res}_{\infty}F(\xi) +\operatorname*{Res}_{it}F(\xi)\Bigr) = \frac{2\pi i Q_{4m}(it)}{\omega_{n+m+1}(-t^2)(z-it)}. \end{equation*} \notag $$
Here we take into account that the residue at infinity is zero because $F$ has a zero of order at least two at infinity.

Thus,

$$ \begin{equation} f^-(z)Q_{4m}(z)-P_{2n+2m+1}(z)=\frac{\omega_{n+m+1}(z^2)}{2}\int_{-1}^{1}\frac{ Q_{4m}(it)\,d\mu^{-}(t)}{\omega_{n+m+1}(-t^2)(z-it)}. \end{equation} \tag{2.12} $$

Now set $Q_{4m}(z)=Q_{2m}(z^2)$. Since the Markov function $f^-$ and the Cauchy-type integral are odd, it follows from (2.12) that $P_{2n+2m+1}(z)=zP_{n+m}(z^2)$. Then (2.12) assumes the form

$$ \begin{equation*} f^-(z) - \frac{zP_{n+m}(z^2)}{Q_{2m}(z^2)} = \frac{z\omega_{n+m+1}(z^2)}{Q_{2m}(z^2)}\int_{0}^{1}\frac{Q_{2m}(-t^2)\,d\mu(t)}{\omega_{n+m+1}(-t^2)(z^2+t^2)}. \end{equation*} \notag $$
Theorem 2 is proved.

Theorem 3. Let $n,m\in\mathbb{N}_0, n\geqslant m-1$. Then there exists a unique rational function

$$ \begin{equation*} R_{n,m}\in{\mathcal R}_{n,m}, \qquad R_{n,m}=\frac{P_n}{Q_m}, \qquad Q_m(z)=z^m+a_{m-1}z^{m-1}+\dots+a_0, \end{equation*} \notag $$
such that

(a) for all $x_k\in \Omega_{n+m+1}$

$$ \begin{equation*} f^-(x_k) - x_kR_{n,m}(x_k^2) = 0; \end{equation*} \notag $$

(b) $Q_m$ is the $m$th orthogonal polynomial on $[-1,0]$ with respect to the measure

$$ \begin{equation*} d\nu = \frac{d\mu(\sqrt{-\xi})}{\omega_{n+m+1}(\xi)}; \end{equation*} \notag $$

(c) all the zeros of $Q_m$ are simple and lie in $(-1,0)$ (the case $m\geqslant1$);

(d) the equality

$$ \begin{equation*} f^-(x) - xR_{n,m}(x^2) = \frac{x\omega_{n+m+1}(x^2)}{Q_m^2(x^2)}\int_{0}^{1}\frac{Q_m^2(-t^2)\,d\mu(t)} {\omega_{n+m+1}(-t^2)(t^2+x^2)}, \qquad x\in[-1,1], \end{equation*} \notag $$
holds.

Proof. (a) First consider $m\!\geqslant\!1$. By Lemma 1 system (2.5) is Chebyshev on $\Omega_{n+m+1}$. By the interpolation theorem for Chebyshev systems (see [25], Ch. 1, § 2) there exist unique polynomials $q_{m-1}\in{\mathcal P}_{m-1}$ and $p_{2n+1}(x)=xP_n(x^2)$ such that
$$ \begin{equation*} q_{m-1}(x_k^2)f^-(x_k) - x_kP_n(x_k^2) = -x_k^{2m}f^-(x_k), \qquad x_k\in\Omega_{n+m+1}. \end{equation*} \notag $$
It remains to set $Q_m(z)=z^m+q_{m-1}(z)$. For $m=0$ the argument is similar, provided that we set $q_{-1}(x)\equiv0$.

(b) For an arbitrary polynomial $q\in {\mathcal P}_{m-1}$ consider the function

$$ \begin{equation*} \psi(x) = q(x^2)\bigl(Q_m(x^2)f^-(x)-xP_n(x^2)\bigr). \end{equation*} \notag $$

From (a), as $f^-$ is odd, we obtain

$$ \begin{equation} \psi(x)=0 \qquad\text{for}\quad x=\pm x_k, \quad x_k\in\Omega_{n+m+1}. \end{equation} \tag{2.13} $$

We represent $\psi$ in the form

$$ \begin{equation} \begin{aligned} \, \notag \psi(x) &= \frac{1}{2}\int_{-1}^1\frac{Q_m(-t^2)q(-t^2)}{x-it}\,d\mu^{-}(t) \\ &\qquad +\,\frac{1}{2}\int_{-1}^{1}\frac{Q_m(x^2)q(x^2)-Q_m(-t^2)q(-t^2)}{x-it}\,d\mu^{-}(t) - xq(x^2)P_n(x^2). \end{aligned} \end{equation} \tag{2.14} $$

Let $\psi_k$, $k=1,2,3$, denote the corresponding terms on the right-hand side of (2.14).

It follows from (2.9) that

$$ \begin{equation*} \Lambda\biggl(\frac{1}{x+it}\biggr) = \Lambda\biggl(\frac{1}{x-it}\biggr)=-\frac{1}{\omega_{n+m+1}(-t^2)}. \end{equation*} \notag $$

Hence for $\psi_1$ we obtain

$$ \begin{equation} \begin{aligned} \, \notag \Lambda(\psi_1) &= \frac{1}{2}\int_0^1 Q_m(-t^2)q(-t^2)\biggl(\Lambda\biggl(\frac{1}{x-it}\biggr)+ \Lambda\biggl(\frac{1}{x+it}\biggr)\biggr)d\mu(t) \\ & =-\int_{0}^1\frac{Q_m(-t^2)q(-t^2)\,d\mu(t)}{\omega_{n+m+1}(-t^2)}= \int_{-1}^{0}\frac{Q_m(\xi)q(\xi)}{\omega_{n+m+1}(\xi)}\,d\mu(\sqrt{-\xi}). \end{aligned} \end{equation} \tag{2.15} $$
Here we take the principal value of the square root.

Noticing that $\psi_2\in {\mathcal P}_{4m-3}$, $n\geqslant m-1$, and $\psi_3\in {\mathcal P}_{2n+2m-1}$, by the properties of divided differences (see [24], Ch. 3, § 4) we have

$$ \begin{equation} \Lambda(\psi_2) = \Lambda(\psi_3) = 0. \end{equation} \tag{2.16} $$

From (2.13)–(2.16) we obtain

$$ \begin{equation*} 0=\Lambda(\psi) = \int_{-1}^{0}\frac{Q_m(\xi)q(\xi)}{\omega_{n+m+1}(\xi)}\,d\mu(\sqrt{-\xi}). \end{equation*} \notag $$

Since $q$ is an arbitrary polynomial of degree at most $m-1$, $Q_m$ is the $m$th orthogonal polynomial on $[-1,0]$ with respect to the measure

$$ \begin{equation*} d\nu = \frac{d\mu(\sqrt{-\xi})}{\omega_{n+m+1}(\xi)}. \end{equation*} \notag $$

(c) This follows from the properties of orthogonal polynomials.

(d) In the integral representation from Theorem 2 set $Q_{2m}=Q_m^2$ and ${P_{n+m} = P_nQ_m}$.

Theorem 3 is proved.

Lemma 2. For $m\in\mathbb{N}_0$ let $Q$ be the $m$th orthogonal polynomial on $[-1,0]$ with respect to a positive measure $d\nu$. Then the following inequality holds for each nontrivial polynomial $q\in{\mathcal P}_{2m}$ such that $q(x)\geqslant 0$ for $x\in [-1,0)$ and $q(x)>0$ for ${x\in[0,1]}$:

$$ \begin{equation*} \biggl|\frac{1}{Q^2(x^2)} \int_{-1}^{0}\frac{Q^2(t)\,d\nu(t)}{t-x^2}\biggr|\leqslant \biggl| \frac{1}{q(x^2)}\int_{-1}^{0}\frac{q(t)}{t-x^2}\,d\nu(t)\biggr|, \qquad x\in[0,1]. \end{equation*} \notag $$

Proof. The case $m=0$ is obvious, so assume that $m\geqslant1$. Consider the function
$$ \begin{equation} p(t) =\frac{Q^2(t)q(x^2)-Q^2(x^2)q(t)}{t-x^2}. \end{equation} \tag{2.17} $$

Since $p\in{\mathcal P}_{2m-1}$, by Gauss’s quadrature formula (see [26], Ch. 9, § 2)

$$ \begin{equation} \int_{-1}^{0}p(t)\,d\nu(t) = \sum_{k=1}^m \lambda_k p(t_k), \end{equation} \tag{2.18} $$
where the $t_k\in(-1,0)$ are the zeros of $Q$, and the $\lambda_k>0$ are the Christoffel coefficients.
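Formula (2.18) is the Gauss quadrature rule, exact for every $p\in{\mathcal P}_{2m-1}$. A minimal numerical sketch for the special measure $d\nu(t)=dt$ on $[-1,0]$ (an assumption made only for this illustration; the lemma allows any positive measure), using Gauss–Legendre nodes mapped from $[-1,1]$ to $[-1,0]$:

```python
import numpy as np

m = 3
nodes, weights = np.polynomial.legendre.leggauss(m)
t_k = 0.5 * (nodes - 1.0)    # the zeros t_k, mapped into (-1, 0)
lam_k = 0.5 * weights        # the Christoffel coefficients lambda_k

def p(t):
    # A test polynomial of the maximal exact degree 2m - 1 = 5.
    return t**5 - 2.0 * t**3 + t

quad = float(np.dot(lam_k, p(t_k)))
exact = -1.0 / 6.0           # integral of p over [-1, 0], computed by hand
print(quad, exact)
```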

From (2.17) and (2.18) we obtain

$$ \begin{equation*} \frac{1}{Q^2(x^2)}\int_{-1}^{0}\frac{Q^2(t)\,d\nu(t)}{t-x^2}= \frac{1}{q(x^2)}\int_{-1}^0\frac{q(t)\,d\nu(t)}{t-x^2}-\frac{1}{q(x^2)} \sum_{k=1}^m\lambda_k\frac{q(t_k)}{t_k-x^2}. \end{equation*} \notag $$

For fixed $x\in[0,1]$ the left-hand side, as well as the minuend and subtrahend on the right are nonpositive, so we arrive at the required result.

Lemma 2 is proved.

Lemma 3. If $n, m\in\mathbb{N}_0$, where $ n\geqslant m-1$, then

$$ \begin{equation} E_{2n+1,2m}(f^-;[-1,1]) = \min\biggl\|\sqrt{\cdot\,}\, r(\cdot)\int_{-1}^0\frac{d\mu(\sqrt{-\tau})}{r(\tau)(\cdot-\tau)}\biggr\|_{[0,1]}, \end{equation} \tag{2.19} $$
where the minimum is taken over all rational $r\in{\mathcal R}_{n+m+1,2m}$ such that $r(\tau)>0$ for $\tau\in[-1,0]$ and the numerator of $r$ is positive on $[0,1]$.

Proof. By the definition of the best rational approximation we have
$$ \begin{equation} E_{2n+1,2m}(f^-;[-1,1])\leqslant\|f^-(x)-xR_{n,m}(x^2)\|_{[0,1]}, \end{equation} \tag{2.20} $$
where $R_{n,m}={P_n}/{Q_m}$ is the function provided by Theorem 3. By part (d) of Theorem 3 we have
$$ \begin{equation} \|f^-(x)-xR_{n,m}(x^2)\|_{[0,1]} = \biggl\|\frac{x\omega_{n+m+1}(x^2)}{Q_m^2(x^2)}\int_{-1}^0 \frac{Q_m^2(\tau)\,d\mu(\sqrt{-\tau})}{\omega_{n+m+1}(\tau)(\tau-x^2)}\biggr\|_{[0,1]}. \end{equation} \tag{2.21} $$

It follows from Lemma 2 that when $Q_m^2$ on the right-hand side of (2.21) is replaced by another polynomial $q$ as in that lemma, the corresponding norm does not decrease. Hence relations (2.20) and (2.21) yield the upper bound in (2.19).

We prove the lower bound. Let $R_{n,m}^{*}={p^*}/{q^*}$ be a fraction such that

$$ \begin{equation*} \|f^-(x)-xR_{n,m}^{*}(x^2)\|_{[0,1]} = E_{2n+1,2m}(f^-;[-1,1]). \end{equation*} \notag $$
It exists by Theorem 2.9 in [9], Ch. 7, § 2. We can assume without loss of generality that $q^*(x)>0$ for $x\in[-1,1]$. Set ${Q^*(x)=(q^*(x))^2}$ and consider the quantity
$$ \begin{equation} \rho_{n+m}=\inf_{P\in{\mathcal P}_{2n+2m+1}}\biggl\|f^-(x)-\frac{P(x)}{Q^*(x^2)}\biggr\|_{[-1,1]}. \end{equation} \tag{2.22} $$

Since $Q^*(x^2)>0$ for $x\in[-1,1]$, the system

$$ \begin{equation} \biggl\{\frac{x^k}{Q^*(x^2)}\biggr\}_{k=0}^{2n+2m+1} \end{equation} \tag{2.23} $$
is Chebyshev on $[-1,1]$. Hence there exists a unique polynomial $P^*\in{\mathcal P}_{2n+2m+1}$ delivering the infimum in (2.22). The function $f^-(x)$ is odd, while $Q^*(x^2)$ is even. Since $P^*(x)$ is unique, it is odd, that is, $P^*(x)=xU^*(x^2)$, where $U^*\in{\mathcal P}_{n+m}$.

Since (2.23) is a Chebyshev system, by Chebyshev’s alternance theorem (see [25], Ch. 1, § 2) the rational function $x{U^*(x^2)}/{Q^*(x^2)}$ interpolates $f^-(x)$ at least at ${2n+2m+2}$ points. Points of interpolation are positioned symmetrically relative to the origin, and 0 is one of them. Hence we have at least $2n+2m+3$ points of interpolation

$$ \begin{equation*} -1\leqslant-x_{n+m+1}<-x_{n+m}<\dots<-x_1<0<x_1<x_2<\dots<x_{n+m+1}\leqslant1. \end{equation*} \notag $$

By Theorem 2 we have

$$ \begin{equation} f^-(x)-\frac{xU^*(x^2)}{Q^*(x^2)} = x r(x^2)\int_0^1\frac{d\mu(t)}{r(-t^2)(t^2+x^2)}, \qquad x\in[-1,1], \end{equation} \tag{2.24} $$
where $r(\tau)=\omega_{n+m+1}(\tau)/(q^*(\tau))^2$.

In view of (2.22)

$$ \begin{equation*} \rho_{n+m}\leqslant E_{2n+1,2m}(f^-;[-1,1]), \end{equation*} \notag $$
and therefore (2.24) implies the lower bound in (2.19).

Lemma 3 is proved.

Let $k\in\mathbb N$, and let $z_j$, $j=1,2,\dots,k$, be points in the half-plane $\operatorname{Re} z>0$. Then the rational function

$$ \begin{equation} b_k(z)=\prod_{j=1}^{k}\frac{z-z_j}{z+\overline{z_j}} \end{equation} \tag{2.25} $$
is called the Blaschke product of order $k$ for the half-plane $\operatorname{Re} z>0$ with zeros at $z_1,z_2,\dots,z_k$.

Lemma 4 (see [5] and [27]). For all $\alpha>0$ and $k\in\mathbb{N}$ there exists a Blaschke product (2.25) with all zeros on $(0,1]$ such that

$$ \begin{equation} x^{\alpha}b_k^2(x)\leqslant c(\alpha) \exp(-\pi\sqrt{2\alpha k}), \qquad x\in[0,1]. \end{equation} \tag{2.26} $$
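The decay in (2.26) can be observed numerically. The sketch below uses geometrically spaced (Newman-type) zeros, which is an assumption made for illustration only: Lemma 4 merely asserts that suitable zeros exist.

```python
import math

def blaschke(x, zeros):
    """Blaschke product (2.25) with real zeros, evaluated at real x."""
    prod = 1.0
    for z in zeros:
        prod *= (x - z) / (x + z)
    return prod

def sup_weighted(alpha, k, n_grid=4000):
    """Max of x^alpha * b_k(x)^2 over a log grid, with geometric zeros."""
    step = math.pi / math.sqrt(2.0 * alpha * k)   # Newman-type spacing
    zeros = [math.exp(-step * (j + 0.5)) for j in range(k)]
    # Log-spaced sample of (~1e-12, 1] to resolve behaviour near 0.
    xs = [math.exp(-28.0 * i / n_grid) for i in range(n_grid + 1)]
    return max(x**alpha * blaschke(x, zeros) ** 2 for x in xs)

s4, s16 = sup_weighted(1.0, 4), sup_weighted(1.0, 16)
print(s4, s16)   # the sup shrinks rapidly as k grows
```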

Let $b_k(z)$ be a Blaschke product of the form (2.25) all of whose zeros lie on $(0,1]$. Then $F(\omega) \!=\! (b_k^2(\omega)+b_k^2(-\omega))/2$ is an even rational function of degree $4k$. Therefore,

$$ \begin{equation} r_{2k}(z) = F(i\sqrt{z}) \end{equation} \tag{2.27} $$
is a rational function of degree $2k$. For convenience assume that $\sqrt{z}$ is considered in the domain $\mathbb{C}\setminus[0,-i\infty)$, and the branch of $\sqrt{\,\cdot\,}$ is taken so that $\sqrt{\tau}>0$ for $\tau\in(0,+\infty)$.

The functions $F(\omega)$ and $r_{2k}(z)$ are examples of Chebyshev–Markov rational functions; see [28]. Note the following properties of $r_{2k}(z)$: all the poles of $r_{2k}$ lie in $[-1,0)$ and have even order; the denominator of $r_{2k}$ is positive on $\mathbb{R}$; $r_{2k}(\tau)\geqslant1$ for $\tau\leqslant0$; $\|r_{2k}\|_{[0,+\infty)}=1$.

Lemma 5. For any $\alpha>0$ and $k\in\mathbb{N}$ there exists a rational Chebyshev–Markov function of the form (2.27) such that

$$ \begin{equation*} (-\tau)^{{\alpha}/{2}}r_{2k}^{-1}(\tau)\leqslant c_1(\alpha) \exp(-\pi\sqrt{2\alpha k}), \qquad \tau\in[-1,0]. \end{equation*} \notag $$

Proof. Let $r_{2k}(z)$ be a rational function of the form (2.27). Then for $\tau\in[-1,0]$, bearing in mind the convention on the choice of the branch of $\sqrt{\,\cdot\,}$ (see above) we obtain
$$ \begin{equation*} r_{2k}^{-1}(\tau) = \frac{2}{b_k^2(i\sqrt{\tau})+b_k^2(-i\sqrt{\tau})}\leqslant \frac{2}{b_k^2(-\sqrt{-\tau})}=2b_k^2(\sqrt{-\tau}). \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \begin{aligned} \, 0 &\leqslant(-\tau)^{{\alpha}/{2}}r_{2k}^{-1}(\tau) \leqslant 2(-\tau)^{{\alpha}/{2}}b_k^2(\sqrt{-\tau})= \biggl[ \begin{array}{c} x=\sqrt{-\tau}\\ x\in[0,1] \end{array}\biggr] \\ &= 2x^{\alpha}b_k^2(x)\leqslant 2c(\alpha) \exp(-\pi\sqrt{2\alpha k}). \end{aligned} \end{equation*} \notag $$
The last inequality holds if the Blaschke product from Lemma 4 is taken as $b_k(x)$. This completes the proof.
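The key inequality in this proof, $r_{2k}^{-1}(\tau)\leqslant 2b_k^2(\sqrt{-\tau})$, can be checked numerically. The zeros and the values of $\tau$ below are arbitrary sample choices (not taken from the paper); the check also uses the identity $b_k(-x)=1/b_k(x)$, valid for real zeros.

```python
import math

def b(x, zeros):
    """Blaschke product (2.25) with real zeros, at a real point x."""
    p = 1.0
    for z in zeros:
        p *= (x - z) / (x + z)
    return p

# For tau < 0 and the branch convention of the paper, i*sqrt(tau) equals
# -sqrt(-tau); hence r_{2k}(tau) = (b^2(-s) + b^2(s)) / 2 with s = sqrt(-tau).
zeros = [0.9, 0.5, 0.2, 0.05]          # arbitrary sample zeros on (0, 1]
checks = []
for tau in [-1.0, -0.7, -0.3, -0.1, -0.01]:
    s = math.sqrt(-tau)
    r = (b(-s, zeros) ** 2 + b(s, zeros) ** 2) / 2.0
    checks.append((1.0 / r, 2.0 * b(s, zeros) ** 2))
print(checks)   # in each pair the first entry is <= the second
```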

Theorem 4. For $\alpha>0$ let $\mu$ be an absolutely continuous function on $[0,1]$ such that

$$ \begin{equation*} \mu(0)=0\quad\textit{and}\quad \mu'(t) \asymp t^{\alpha}\quad\textit{for } t\in(0,1]. \end{equation*} \notag $$
Then the function $f^{-}$ of the form (2.2) satisfies
$$ \begin{equation} R_{n}(f^{-};[-1,1])\asymp \exp(-\pi\sqrt{\alpha n}), \qquad n\in\mathbb{N}, \end{equation} \tag{2.28} $$
where the positive quantities implicit in $\asymp$ only depend on $\mu$.

Proof of the upper bound in (2.28). It will be convenient to assume that $n$ in (2.28) is odd and to prove (2.28) for $2n+1$ in place of $n$. Let $r_{2n}$ be the Chebyshev–Markov rational function from Lemma 5. By the properties of $r_{2n}$ listed above and Lemma 3 we have
$$ \begin{equation*} R_{2n+1}(f^-;[-1,1])\leqslant c_2(\mu)\biggl\|\sqrt{x}\int_{-1}^0 \frac{(-\tau)^{(\alpha-1)/2}r_{2n}^{-1}(\tau)}{x-\tau}\,d\tau \biggr\|_{[0,1]}. \end{equation*} \notag $$

Set $\tau_n=-\exp(-2\pi\sqrt{{2n}/{\alpha}})$. Then using the properties of $r_{2n}$ and Lemma 5 we obtain

$$ \begin{equation*} \begin{aligned} \, 0 &\leqslant\sqrt{x}\int_{-1}^0\frac{(-\tau)^{(\alpha-1)/2}r_{2n}^{-1}(\tau)}{x-\tau}\,d\tau \\ &\leqslant c_1(\alpha)\exp(-\pi\sqrt{2\alpha n})\int_{-1}^{\tau_n}\frac{\sqrt{x}\,d\tau}{\sqrt{-\tau}(x-\tau)} + \int_{\tau_n}^0\frac{\sqrt{x}(-\tau)^{(\alpha-1)/2}}{x-\tau}\,d\tau. \end{aligned} \end{equation*} \notag $$
We denote the two terms on the right-hand side of the last estimate by $I_{1n}(x)$ and $I_{2n}(x)$, respectively. In the derivation of estimates below we can assume that $x\in(0,1]$ and $\tau\in[-1,0)$. We have
$$ \begin{equation*} \begin{aligned} \, 0<I_{1n}(x) &=-2\int_{-1}^{\tau_n}\frac{\sqrt{x}\,d(\sqrt{-\tau})}{x-\tau}= \Biggl[ \begin{array}{c} y=\sqrt{-\tau}\\ \tau=-y^2\\ \delta_n=\sqrt{-\tau_n} \end{array}\Biggr]= 2\int_{\delta_n}^{1}\frac{\sqrt{x}\,dy}{y^2+(\sqrt{x})^2} \\ & =2\arctan \frac{y}{\sqrt{x}}\bigg|_{y=\delta_n}^{y=1} = 2\biggl(\arctan\frac{1}{\sqrt{x}}-\arctan\frac{\delta_n}{\sqrt{x}}\biggr)<\pi. \end{aligned} \end{equation*} \notag $$
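The closed-form evaluation of $I_{1n}$ via the substitution $y=\sqrt{-\tau}$ can be verified by direct numerical integration. The values of $x$ and $\delta_n$ below are arbitrary sample choices for this sketch:

```python
import math

# Midpoint-rule check of
#   2 * int_{delta}^{1} sqrt(x) / (y^2 + x) dy
#     = 2 * (arctan(1/sqrt(x)) - arctan(delta/sqrt(x)))  < pi.
x, delta = 0.2, 0.05        # arbitrary sample values
N = 200_000
h = (1.0 - delta) / N
integral = 2.0 * h * sum(math.sqrt(x) / ((delta + (i + 0.5) * h) ** 2 + x)
                         for i in range(N))
closed = 2.0 * (math.atan(1.0 / math.sqrt(x)) - math.atan(delta / math.sqrt(x)))
print(integral, closed)
```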
For $I_{2n}(x)$ we have the estimate
$$ \begin{equation*} I_{2n}(x)\leqslant \int_{\tau_n}^0\frac{(-\tau)^{(\alpha-1)/2}}{\sqrt{x-\tau}}\,d\tau \leqslant \int_{\tau_n}^0\!(-\tau)^{{\alpha}/{2}-1}\,d\tau =\frac{2}{\alpha}(-\tau_n)^{{\alpha}/{2}}=\frac{2}{\alpha}\exp\bigl(-\pi\sqrt{2\alpha n}\bigr). \end{equation*} \notag $$

Combining the above inequalities we obtain the upper bound from (2.28).

The proof of the lower bound in (2.28) is based on a result due to Andersson (see [20], Theorem 4). We present it in a form convenient for our purposes. We let $L^p[0,1]$, $1<p<\infty$, denote the Lebesgue space of real functions on $[0,1]$, endowed with the standard norm; $R_n(f;L^p[0,1])$ is the best approximation of the function $f$ in $L^p[0,1]$ from the set ${\mathcal R}_{n,n}$.

Theorem 5 (see [20]). Let $1<p<\infty$ and $\alpha>-{1}/{p}$, let $\nu$ be an increasing absolutely continuous function on $[-1,0]$ with $\nu(0)=0$ and $\nu'(t)\asymp|t|^{\alpha}$ for $t\in[-1,0)$, and let

$$ \begin{equation*} \widehat{\nu}(x) = \int_{-1}^0\frac{d\nu(t)}{t-x}, \qquad x\in[0,1], \end{equation*} \notag $$
be the corresponding Markov function. Then $\widehat{\nu}\in L^p[0,1]$ and
$$ \begin{equation*} R_n(\widehat{\nu};L^p[0,1])\asymp n^{{1}/(2p)} \exp\biggl(-2\pi\sqrt{n\biggl(\alpha+\frac{1}{p}\biggr)}\biggr) \quad \textit{for } n\in\mathbb{N}. \end{equation*} \notag $$
Moreover, for $d\nu(t)=|t|^{\alpha}\,dt$, $t\in[-1,0]$, the quantities implicit in $\asymp$ are positive and depend continuously on $(p,\alpha)$ on the indicated set.

We only need the lower bound from Theorem 5.

Proof of the lower bound in (2.28). For $n\in\mathbb{N}\setminus\{1\}$ let $r_n^*\in{\mathcal R}_n$ be the best approximant to $f^-$, that is,
$$ \begin{equation*} R_n(f^-;C[-1,1])=\|f^{-}-r_n^*\|_{[-1,1]}. \end{equation*} \notag $$
Let $m=[{n}/{2}]$ be the integer part of ${n}/{2}$. Since $f^-$ is odd, $r_n^*$ is too, so that $r_n^*(x)=x u_m(x^2)$, where $u_m\in{\mathcal R}_m\cap C[0,1]$. Therefore,
$$ \begin{equation*} \begin{aligned} \, R_n(f^-;C[-1,1]) &=\|f^-(x) - xu_m(x^2)\|_{[-1,1]}=\|f^-(x)-xu_m(x^2)\|_{[0,1]} \\ &\geqslant \sqrt{x}\, \biggl|\frac{f^-(\sqrt{x})}{\sqrt{x}}-u_m(x)\biggr|, \qquad x\in(0,1]. \end{aligned} \end{equation*} \notag $$

We set $\rho_n=R_n(f^-;C[-1,1])$ for short. Then

$$ \begin{equation} \frac{\rho_n}{\sqrt{x}}\geqslant \biggl|\frac{f^-(\sqrt{x})}{\sqrt{x}}-u_m(x)\biggr|, \qquad x\in(0,1]. \end{equation} \tag{2.29} $$

However, for $x\in(0,1]$ we have

$$ \begin{equation*} \frac{f^-(\sqrt{x})}{\sqrt{x}} = \int_0^1\frac{d\mu(t)}{t^2+x} = \int_{-1}^0\frac{d\mu(\sqrt{-\tau})}{\tau-x}, \end{equation*} \notag $$
that is, $-{f^-(\sqrt{x})}/{\sqrt{x}}$ is the Markov function of the measure $\nu(\tau)=-\mu(\sqrt{-\tau})$. Since $\mu'(t)\asymp t^{\alpha}$, $t\in(0,1]$, it follows that
$$ \begin{equation} \nu'(\tau)\asymp|\tau|^{(\alpha-1)/2}, \qquad \tau\in[-1,0). \end{equation} \tag{2.30} $$

Set $p_m=2-{1}/(2\sqrt{m})$, $m\in\mathbb{N}$. Then ${3}/{2}\leqslant p_m<2$, and we find from (2.29) that

$$ \begin{equation*} \biggl(\int_0^1\biggl(\frac{\rho_n}{\sqrt{x}}\biggr)^{p_m}\,dx\biggr)^{1/p_m}\geqslant \biggl(\int_0^1\biggl|\frac{f^-(\sqrt{x})}{\sqrt{x}}-u_m(x)\biggr|^{p_m}dx\biggr)^{1/p_m}. \end{equation*} \notag $$

Therefore,

$$ \begin{equation} \rho_n\geqslant (4\sqrt{m}\,)^{-1/p_m}R_m(\widehat{\nu};L^{p_m}[0,1]), \end{equation} \tag{2.31} $$
since $\displaystyle\int_0^1 x^{-p_m/2}\,dx=\frac{2}{2-p_m}=4\sqrt{m}$.
From (2.30), (2.31) and Theorem 5 we obtain
$$ \begin{equation*} \rho_n\geqslant c_1(p_m,\alpha) \exp\biggl(-2\pi\sqrt{\biggl(\frac{\alpha-1}{2}+\frac{1}{p_m}\biggr)m}\biggr) \geqslant c_2(\alpha)e^{-\pi\sqrt{\alpha n}}. \end{equation*} \notag $$
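The last inequality hides some elementary bookkeeping in the exponent, which we record for the reader's convenience (the details are ours). Since $p_m\geqslant 3/2$,
$$ \begin{equation*} \frac{1}{p_m}=\frac{1}{2}+\frac{2-p_m}{2p_m}\leqslant\frac{1}{2}+\frac{1}{6\sqrt{m}}, \end{equation*} \notag $$
and since $2m\leqslant n$,
$$ \begin{equation*} 4m\biggl(\frac{\alpha-1}{2}+\frac{1}{p_m}\biggr)\leqslant 2\alpha m+\frac{2\sqrt{m}}{3}\leqslant \alpha n+\frac{2\sqrt{n}}{3}, \end{equation*} \notag $$
so that $2\pi\sqrt{((\alpha-1)/2+1/p_m)\,m}\leqslant \pi\sqrt{\alpha n}+\pi/(3\sqrt{\alpha})$. The resulting bounded factors, together with the powers $m^{\pm1/(2p_m)}$ from Theorem 5 and (2.31), are absorbed into $c_2(\alpha)$; here the continuity of the constants of Theorem 5 in $(p,\alpha)$ ensures that they are bounded away from zero for $p\in[3/2,2]$.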
This proves the lower bound in (2.28).

Remark 1. In view of Theorem 5 and the above arguments we can conclude that for $d\mu=t^{\alpha}\,dt$ the quantities implicit in the symbol $\asymp$ in (2.28) can be taken to be positive and continuous in $\alpha\in(0,+\infty)$.

2.3. Approximation of $g^+$

Let $g^+$ be the function in (2.3). Then

$$ \begin{equation} E_{2n,2m}(g^+(x);[-1,1])=E_{n,m}(g^+(\sqrt{x});[0,1]) \end{equation} \tag{2.32} $$
by (2.4).

However, for $x\in[0,1]$ we have

$$ \begin{equation} g^+(\sqrt{x})=\int_0^1\frac{t\,d\mu(t)}{t^2+x}= \biggl[ \begin{array}{c} t^2=-\tau,\ \tau\in[-1,0],\\ t=\sqrt{-\tau} \end{array}\biggr] =\int_{-1}^0\frac{\sqrt{-\tau}\,d\mu(\sqrt{-\tau})}{\tau-x}. \end{equation} \tag{2.33} $$

It follows from (2.33) that $-g^+(\sqrt{x})$ is the Markov function for the measure $\nu$ such that $d\nu=-\sqrt{-\tau}\,d\mu(\sqrt{-\tau})$. By (2.3), (2.32) and Lemma 3 in [22] we have the following result.

Lemma 6. Let $g^+$ be the function in (2.3), and let $n\geqslant m-1$. Then

$$ \begin{equation} E_{2n,2m}(g^+;[-1,1])=\min\biggl\|r(\cdot)\int_{-1}^0 \frac{\sqrt{-\tau}\,d\mu(\sqrt{-\tau})}{r(\tau)(\tau-\cdot)}\biggr\|_{[0,1]}, \end{equation} \tag{2.34} $$
where the minimum is taken over all $r\in{\mathcal R}_{n+m+1,2m}$ such that $r(\tau)>0$ for ${\tau\in[-1,0]}$ and the denominator of $r$ is nonnegative on $\mathbb{R}$.

Theorem 6. For $\alpha>0$ let $\mu$ be an absolutely continuous function on $[0,1]$ such that $\mu(0)= 0$ and

$$ \begin{equation*} \mu'(t) \asymp t^{\alpha}, \qquad t\in(0,1]. \end{equation*} \notag $$
Then the following weak asymptotic formula holds for the function $g^+$ in (2.3):
$$ \begin{equation*} R_{n}(g^+;[-1,1])\asymp \exp(-\pi\sqrt{\alpha n}), \qquad n\in\mathbb{N}, \end{equation*} \notag $$
where the positive quantities implicit in $\asymp$ only depend on $\mu$.

Theorem 6 reduces to Theorem 2 in [22] (namely, to formula (1.4) there), whose proof is in turn based on a lemma analogous to our Lemma 6. Note that in [22], in the analogue of the right-hand side of (2.34), the intervals $[-1,1]$ and $[1,a]$, where $a>1$, appear in place of $[0,1]$ and $[-1,0]$. Hence we must set $a=3$ and reduce Theorem 6 to the case of the intervals $[-1,1]$ and $[1,3]$ by means of a linear fractional transformation.

Remark 2. Note that the main results in $\S\,2$ also hold for functions $f^-$ and $g^+$ of the form (2.2) and (2.3), respectively, with integrals over $[-a,a]$, $a>0$, in place of $[-1,1]$.

§ 3. Approximation of the odd extension of a power function

3.1. The proof of the upper bound in Theorem 1

Let $\alpha>0$, where $(\alpha+ 1)/2\notin\mathbb{N}$. Consider the function

$$ \begin{equation*} g(z) = \begin{cases} z^{\alpha} &\text{for } \operatorname{Re} z >0, \\ -(-z)^{\alpha}&\text{for }\operatorname{Re} z <0, \end{cases} \end{equation*} \notag $$
where the branches of power functions are chosen so that $x^{\alpha}>0$ for $x\in(0,+\infty)$.

For $\rho>1$ let $\mathcal E_{\rho}$ denote the ellipse in $\mathbb{C}$ with foci at $\pm1$ and sum of semiaxes $\rho$, that is,

$$ \begin{equation*} \mathcal E_{\rho}=\biggl\{z\colon z=\frac{1}{2}\biggl(\rho e^{i\varphi}+\frac{1}{\rho}e^{-i\varphi}\biggr),\ 0\leqslant\varphi<2\pi\biggr\}. \end{equation*} \notag $$

Set $D= \operatorname{int}\mathcal E_{\sqrt{2}+1}$, so that $D$ is an open elliptic domain with boundary $\mathcal E_{\sqrt{2}+1}$, let $D^+$ and $D^-$ be the right and left halves of $D$, respectively; let $\partial D$, $\partial D^+$ and $\partial D^-$ be the boundaries of $D$, $D^+$ and $D^-$, respectively. All boundaries are assumed to be positively oriented.

By Cauchy’s integral formula we have

$$ \begin{equation*} (\pm z)^{\alpha} = \frac{1}{2\pi i}\int_{\partial D^{\pm}}\frac{(\pm\xi)^{\alpha}\,d\xi}{\xi-z}, \qquad z\in D^{\pm}. \end{equation*} \notag $$

On the other hand, if $z\notin D^{\pm}\cup\partial D^{\pm}$, then these integrals vanish by Cauchy’s theorem. Hence

$$ \begin{equation} g(x)=\frac{1}{2\pi i}\biggl(\int_{\partial D^+}\frac{(+\xi)^{\alpha}}{\xi-x}\,d\xi - \int_{\partial D^-}\frac{(-\xi)^{\alpha}}{\xi-x}\,d\xi \biggr), \qquad x\in[-1,1]\setminus\{0\}. \end{equation} \tag{3.1} $$

Now we let $\lambda(x)$ denote the Cauchy-type integral of the function $g(\xi)$ along the ellipse $\partial D = \mathcal E_{\sqrt{2}+1}$. We denote the parts of the boundaries of $D^+$ and $D^-$ lying on the imaginary axis, with the relevant orientations on them, by $[i,-i]^+$ and $[-i,i]^-$, respectively. Taking account of the above we can write (3.1) for $x\in[-1,1]\setminus\{0\}$ as

$$ \begin{equation} g(x)=\lambda(x)+\frac{1}{2\pi i}\biggl(\int_{[i,-i]^+}\frac{(+\xi)^{\alpha}}{\xi-x}\,d\xi - \int_{[-i,i]^-}\frac{(-\xi)^{\alpha}}{\xi-x}\,d\xi \biggr), \qquad x\in[-1,1]\setminus\{0\}. \end{equation} \tag{3.2} $$

Making the change $\xi=iy$, $y\in[-1,1]$, and taking the appropriate branches of $(+iy)^{\alpha}$ and $(-iy)^{\alpha}$ we transform (3.2) into

$$ \begin{equation} g(x) = \lambda(x)+\frac{2}{\pi}\cos\frac{\pi\alpha}{2}\cdot f^-(x), \qquad x\in[-1,1], \end{equation} \tag{3.3} $$
where $\displaystyle f^-(x)=x\int_0^1\dfrac{y^{\alpha}\,dy}{x^2+y^2}$ is a function of the form (2.2). (We have included the point $x=0$ in (3.3) because all functions in (3.3) are continuous at this point.)
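For completeness we indicate the computation behind (3.3) (the details are ours). With the branches fixed above, on the segment $\xi=iy$, $y\in[-1,1]$, we have $(iy)^{\alpha}+(-iy)^{\alpha}=2|y|^{\alpha}\cos(\pi\alpha/2)$ for both signs of $y$, and, taking the orientations of $[i,-i]^+$ and $[-i,i]^-$ into account, (3.2) becomes
$$ \begin{equation*} g(x)-\lambda(x)=-\frac{1}{2\pi}\int_{-1}^{1}\frac{(iy)^{\alpha}+(-iy)^{\alpha}}{iy-x}\,dy =-\frac{1}{\pi}\cos\frac{\pi\alpha}{2}\int_{-1}^{1}\frac{|y|^{\alpha}\,dy}{iy-x}. \end{equation*} \notag $$
Since $1/(iy-x)=-(x+iy)/(x^2+y^2)$ and $|y|^{\alpha}$ is even, the odd (imaginary) part integrates to zero, and we are left with
$$ \begin{equation*} g(x)-\lambda(x)=\frac{1}{\pi}\cos\frac{\pi\alpha}{2}\,x\int_{-1}^{1}\frac{|y|^{\alpha}\,dy}{x^2+y^2} =\frac{2}{\pi}\cos\frac{\pi\alpha}{2}\,f^-(x). \end{equation*} \notag $$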

We need the following result due to Bernstein (see [29]). Let $h$ be an analytic function in $\operatorname{int}\mathcal E_{\rho}$, $\rho>1$. Then for each $1<q<\rho$ there exists $c(h,q)>0$ such that

$$ \begin{equation} E_m(h;[-1,1])\leqslant c(h,q)q^{-m},\qquad m\in\mathbb{N}. \end{equation} \tag{3.4} $$
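Bernstein's bound (3.4) can be illustrated numerically (this example is ours; the test function and tolerances are our choices). For $h(x)=1/(x-2)$, which is analytic inside $\mathcal E_{\rho}$ with $\rho=2+\sqrt{3}$, the Chebyshev coefficients decay geometrically at the rate $1/\rho=2-\sqrt{3}$, which is what drives $E_m(h;[-1,1])\leqslant c\,q^{-m}$ for every $q<\rho$:

```python
import math

def chebyshev_coeff(f, m, nodes=20_000):
    """Approximate a_m = (2/pi) * int_0^pi f(cos t) * cos(m t) dt
    (the m-th Chebyshev coefficient of f) by the midpoint rule."""
    h = math.pi / nodes
    return (2.0 / math.pi) * h * sum(
        f(math.cos((k + 0.5) * h)) * math.cos(m * (k + 0.5) * h)
        for k in range(nodes)
    )

def h_func(x):
    # analytic inside the ellipse with foci +-1 passing through x = 2,
    # whose sum of semiaxes is rho = 2 + sqrt(3)
    return 1.0 / (x - 2.0)

coeffs = [chebyshev_coeff(h_func, m) for m in range(1, 12)]
ratios = [coeffs[i + 1] / coeffs[i] for i in range(len(coeffs) - 1)]

# geometric decay of the coefficients at rate 1/rho = 2 - sqrt(3)
for r in ratios:
    assert abs(r - (2.0 - math.sqrt(3.0))) < 1e-3
```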

Proof of the upper bound in Theorem 1. Let $n,m\in\mathbb{N}$ satisfy $n>m\geqslant1$. Then from (3.3) we obtain
$$ \begin{equation} R_n(g;[-1,1])\leqslant E_m(\lambda;[-1,1]) + \frac{2}{\pi}\biggl|\cos\frac{\pi\alpha}{2}\biggr|R_{n-m}(f^-;[-1,1]). \end{equation} \tag{3.5} $$
The function $\lambda$ is analytic in $D=\operatorname{int}\mathcal E_{\sqrt{2}+1}$, and $\sqrt{2}+1>{12}/{5}$. Hence the following inequality holds by (3.4):
$$ \begin{equation} E_m(\lambda;[-1,1])\leqslant c_1(\alpha)\biggl(\frac{5}{12}\biggr)^m. \end{equation} \tag{3.6} $$

Using now Theorem 4 we find that

$$ \begin{equation} R_{n-m}(f^-;[-1,1])\leqslant c_2(\alpha) \exp\bigl(-\pi\sqrt{\alpha(n-m)}\bigr). \end{equation} \tag{3.7} $$

Given $n\in\mathbb{N}$, set $m=m_n=[{\pi\sqrt{\alpha n}}/{\log(12/5)}]$, where brackets $[\,{\cdot}\,]$ denote the integer part of the number in question. For each $\alpha > 0$ there exists $n(\alpha) \in \mathbb{N}$ such that all $n\geqslant n(\alpha)$ satisfy $n>m_n\geqslant1$. Hence from (3.5), (3.6) and (3.7) we deduce the upper bound of Theorem 1 for $n\geqslant n(\alpha)$. This completes the proof of the upper bound in Theorem 1.
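The choice of $m_n$ makes the two terms on the right-hand side of (3.5) comparable; explicitly (this bookkeeping is ours), since $m_n\geqslant \pi\sqrt{\alpha n}/\log(12/5)-1$,
$$ \begin{equation*} \biggl(\frac{5}{12}\biggr)^{m_n}=\exp\biggl(-m_n\log\frac{12}{5}\biggr)\leqslant \frac{12}{5}\exp(-\pi\sqrt{\alpha n}), \end{equation*} \notag $$
while $\sqrt{n-m_n}\geqslant\sqrt{n}-m_n/\sqrt{n}$ and $m_n\leqslant\pi\sqrt{\alpha n}/\log(12/5)$ give
$$ \begin{equation*} \exp\bigl(-\pi\sqrt{\alpha(n-m_n)}\bigr)\leqslant \exp\biggl(\frac{\pi^2\alpha}{\log(12/5)}\biggr)\exp(-\pi\sqrt{\alpha n}), \end{equation*} \notag $$
so both terms are at most $c(\alpha)\exp(-\pi\sqrt{\alpha n})$.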

It was noted in § 1 that the lower bound in Theorem 1 is due to Vyacheslavov [7]. Here we give another proof of it, which is based on Theorem 4.

Proof of the lower bound in Theorem 1. From (3.3) we see that for any $n,m\in\mathbb{N}$ we have
$$ \begin{equation*} R_n(g;[-1,1])\geqslant -E_m(\lambda;[-1,1]) + \frac{2}{\pi}\biggl|\cos\frac{\pi\alpha}{2}\biggr|R_{n+m}(f^-;[-1,1]). \end{equation*} \notag $$
Applying here (3.6) and Theorem 4 we obtain
$$ \begin{equation} R_n(g;[-1,1])\geqslant-c_1(\alpha)\biggl(\frac{5}{12}\biggr)^m +\frac{2c_3(\alpha)}{\pi}\biggl|\cos\frac{\pi\alpha}{2}\biggr| \exp(-\pi\sqrt{\alpha(n+m)}). \end{equation} \tag{3.8} $$
Here $c_3(\alpha)$ is the constant in Theorem 4. Since $(\alpha+1)/{2} \notin \mathbb{N}$, it follows that $\cos({\pi\alpha}/{2}) \ne 0$, so that there exists $n(\alpha)\in\mathbb{N}$ such that for each ${n\geqslant n(\alpha)}$ and ${m_n=[2\pi\sqrt{\alpha n}]}$
$$ \begin{equation*} c_1(\alpha)\biggl(\frac{5}{12}\biggr)^{m_n} \leqslant\frac{c_3(\alpha)}{\pi}\biggl|\cos\frac{\pi\alpha}{2}\biggr| \exp(-\pi\sqrt{\alpha(n+m_n)}). \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} R_n(g;[-1,1])\geqslant\frac{c_3(\alpha)}{\pi}\biggl| \cos\frac{\pi\alpha}{2}\biggr|\exp(-\pi\sqrt{\alpha(n+m_n)}) \geqslant c_4(\alpha)\exp(-\pi\sqrt{\alpha n}). \end{equation*} \notag $$
This completes the proof of the lower bound in Theorem 1.

3.2. Approximations of some elementary functions

Various functions can be expressed in terms of the Cauchy transform. The most important and interesting case, in our opinion, is the one considered in Theorem 1. Here is another example of elementary functions that can be expressed in terms of the even and odd Cauchy transforms under consideration, together with estimates for their best uniform rational approximations, which follow from Theorems 4 and 6.

For the measure $\mu(t)=t^k$, $k\in\mathbb N\setminus\{1\}$, the integrals in (2.2) and (2.3) are easy to calculate. Using Theorems 4 and 6 we can find the weak asymptotic behaviour of the best rational approximations of the following functions on $[-1,1]$, where $l\in\mathbb N$:

$$ \begin{equation*} \varphi_l(x)=x^l\log|x| \quad \text{for } x\ne0, \qquad \varphi_l(0) = 0. \end{equation*} \notag $$

Namely, the following asymptotic relation holds:

$$ \begin{equation*} R_{n}(\varphi_l;[-1,1])\asymp \exp(-\pi\sqrt{ln}), \qquad n\in\mathbb{N}. \end{equation*} \notag $$
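For instance (this computation is ours), for $l=1$ take $d\mu(t)=2t\,dt$ in (2.2); then
$$ \begin{equation*} f^-(x)=x\int_0^1\frac{2t\,dt}{x^2+t^2}=x\log(1+x^2)-2x\log|x|, \qquad x\in[-1,1]\setminus\{0\}. \end{equation*} \notag $$
The function $x\log(1+x^2)$ is analytic in $\operatorname{int}\mathcal E_{\rho}$ for every $\rho<\sqrt{2}+1$ (its singularities lie at $x=\pm i$), so its best uniform rational approximations decay geometrically by (3.4). Since $\mu'(t)=2t\asymp t$ on $(0,1]$, Theorem 4 with $\alpha=1$ yields $R_n(\varphi_1;[-1,1])\asymp\exp(-\pi\sqrt{n})$, which is the case $l=1$ of the relation above.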

In conclusion, the author expresses her sincere gratitude to the referee, who read the paper attentively and made valuable comments, which contributed to the better presentation of the issues considered here.


Bibliography

1. D. J. Newman, “Rational approximation to $|x|$”, Michigan Math. J., 11:1 (1964), 11–14
2. A. A. Gončar, “Estimates of the growth of rational functions and some of their applications”, Math. USSR-Sb., 1:3 (1967), 445–456
3. A. P. Bulanov, “Asymptotics for least deviation of $|x|$ from rational functions”, Math. USSR-Sb., 5:2 (1968), 275–290
4. J. Tzimbalario, “Rational approximation to $x^{\alpha}$”, J. Approx. Theory, 16:2 (1976), 187–193
5. N. S. Vjačeslavov (Vyacheslavov), “On the least deviations of the function $\operatorname{sign}x$ and its primitives from the rational functions in the $L_p$ metrics, $0<p\leqslant\infty$”, Math. USSR-Sb., 32:1 (1977), 19–31
6. N. S. Vjačeslavov (Vyacheslavov), “On the approximation of $x^\alpha$ by rational functions”, Math. USSR-Izv., 16:1 (1981), 83–101
7. N. S. Vyacheslavov, “Rational approximations in weighted spaces on a straight line”, Moscow Univ. Math. Bull., 40:5 (1985), 3–10
8. H. R. Stahl, “Best uniform rational approximation of $x^{\alpha}$ on $[0,1]$”, Acta Math., 190:2 (2003), 241–306
9. G. G. Lorentz, M. von Golitschek and Y. Makovoz, Constructive approximation. Advanced problems, Grundlehren Math. Wiss., 304, Springer-Verlag, Berlin, 1996, xii+649 pp.
10. P. G. Potseiko and E. A. Rovba, “Vallée Poussin sums of rational Fourier–Chebyshev integral operators and approximations of the Markov function”, St. Petersburg Math. J., 35:5 (2024), 879–896
11. S. N. Bernstein, “On the best approximation of $|x|^p$ by polynomials of quite high degree”, Collected papers, v. II, Publishing house of the USSR Academy of Sciences, Moscow, 1954, 262–272 (Russian)
12. I. I. Ibragimov, “On the best approximation by polynomials of the functions ${[ax+b|x|]|x|^{s}}$ on the interval $[-1,+1]$”, Izv. Akad. Nauk SSSR Ser. Mat., 14:5 (1950), 405–412 (Russian)
13. D. S. Lubinsky, “On the Bernstein constants of polynomial approximation”, Constr. Approx., 25:3 (2007), 303–366
14. M. I. Ganzburg, “Asymptotic behaviour of the error of polynomial approximation of functions like $|x|^{\alpha+i\beta}$”, Comput. Methods Funct. Theory, 21:1 (2021), 73–94
15. T. S. Mardvilko, “Uniform rational approximation of even and odd continuations of functions”, Math. Notes, 115:2 (2024), 215–222
16. T. S. Mardvilko and A. A. Pekarskii, “Using the real Hardy–Sobolev space on the line to examine the rate of uniform rational approximations to a function”, Zh. Belarus. Gos. Univ. Mat. Informat., 3 (2022), 16–36 (Russian)
17. T. S. Mardvilko, “Relations between best uniform polynomial approximations of functions and their even and odd extensions”, Itogi Nauki Tekhn. Ser. Sovr. Mat. Pril. Temat. Obzory, 229, VINITI, Moscow, 2023, 47–52 (Russian)
18. A. A. Gončar (Gonchar), “On the speed of rational approximation of some analytic functions”, Math. USSR-Sb., 34:2 (1978), 131–145
19. T. Ganelius, “Orthogonal polynomials and rational approximation of holomorphic functions”, Studies in pure mathematics, To the memory of P. Turán, Birkhäuser Verlag, Basel, 1983, 237–243
20. J.-E. Andersson, “Rational approximation to functions like $x^{\alpha}$ in integral norms”, Anal. Math., 14:1 (1988), 11–25
21. H. Stahl and V. Totik, General orthogonal polynomials, Encyclopedia Math. Appl., 43, Cambridge Univ. Press, Cambridge, 1992, xii+250 pp.
22. A. A. Pekarskiĭ, “Best uniform rational approximations of Markov functions”, St. Petersburg Math. J., 7:2 (1996), 277–285
23. A. O. Gelfond, Calculus of finite differences, Int. Monogr. Adv. Math. Phys., Hindustan Publishing Corp., Delhi, 1971, vi+451 pp.
24. K. I. Babenko, Basics of numerical analysis, 2nd ed., Regulyarnaya i Khaoticheskaya Dinamika, Moscow–Izhevsk, 2002, 848 pp. (Russian)
25. V. K. Dzyadyk, Introduction to the theory of uniform approximation of functions by polynomials, Nauka, Moscow, 1977, 511 pp. (Russian)
26. P. K. Suetin, Classical orthogonal polynomials, Nauka, Moscow, 1976, 327 pp. (Russian)
27. E. V. Kovalevskaya and A. A. Pekarskii, “Constructing extremal Blaschke products”, Vesn. Grodno Univ. Ser. 2, 7:1 (2017), 6–13 (Russian)
28. V. N. Rusak, Rational functions as approximation apparatus, Publishing house of Belorussian State University, Minsk, 1979, 174 pp. (Russian)
29. I. K. Daugavet, Introduction to the theory of approximation of functions, Publishing house of Leningrad State University, Leningrad, 1977, 184 pp. (Russian)

Citation: T. S. Mardvilko, “Uniform rational approximation of the odd and even Cauchy transforms”, Sb. Math., 216:2 (2025), 239–256