Abstract:
The aim of the paper is to prove that mixed second-order derivatives of a function of several variables can be approximated in the $L_1$ norm by
similar derivatives of modified Bernstein–Stancu polynomials in the case of the minimal possible smoothness.
Bibliography: 23 titles.
Keywords:
modified Bernstein–Stancu polynomial, $L_1$-approximation of mixed derivatives.
To give a new (at the time) proof of Weierstrass’s theorem on polynomial approximation with the help of Bernoulli’s law of large numbers, Bernstein [1] proposed to use the polynomials subsequently named after him. The (at first sight) artificial interpretation of the variable $x$ as a probability, together with an application of the law of large numbers, produced the convergence theorem.
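For the reader’s convenience we recall the classical objects involved: for $f\in C[0,1]$, Bernstein’s polynomials and the resulting convergence theorem read
$$
\begin{equation*}
B_n(f,x)=\sum_{k=0}^{n} f\biggl(\frac{k}{n}\biggr)\binom{n}{k}x^{k}(1-x)^{n-k} \rightrightarrows f(x), \qquad n\to\infty, \quad x\in[0,1].
\end{equation*}
$$
In probabilistic terms $B_n(f,x)=\mathsf E\, f(S_n/n)$, where $S_n$ is the number of successes in $n$ independent Bernoulli trials with success probability $x$; the law of large numbers then yields the convergence.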
There are several modifications of Bernstein polynomials (see, in particular, [2]–[9]). For various aspects of the theory of Bernstein polynomials see the books by Lorentz [2] and Videnskii [5], and also Bustamante’s book [10] and the paper [11]. In [12] it was shown that natural analogues of Bernstein polynomials can be used for the simultaneous approximation of multivariate functions and their derivatives of a certain order or index on cubes and simplexes, under the assumption that the derivatives of this order or index are continuous (that is, not necessarily all derivatives of, say, the second order, but only the ones that exist). In our paper, based on a certain modification of Bernstein–Stancu polynomials (see [3]), we examine the problem of approximating the second-order mixed derivative in the integral norm without assuming that this derivative is continuous. Results of this type can be useful in applications when the derivative in question is not continuous or its continuity is not known a priori. Note that it is precisely the second-order derivatives (along with first-order ones) and their approximations that are important, in particular, in stochastic analysis. For completeness of the presentation we also state and prove an auxiliary result on the approximation of first-order partial derivatives in the same norm.
§ 2. On previous results
Before formulating the main result of this paper, we recall Bernstein’s fundamental theorem on the approximation of a continuous function on $[0, 1]$ and some results on the approximation of derivatives, apparently obtained independently by Chlodovsky and Kantorovich. Next we recall Kantorovich’s theorem on the approximation of a derivative almost everywhere and state Proposition 4 (obtained not so long ago). All these results, except the last one, pertain to the approximation of univariate functions on $[0,1]$, whereas Proposition 4 (see below) is concerned with the approximation of a bivariate function (and its derivatives); nevertheless, this can help the reader get up to speed quickly. The result on the approximation of derivatives (Proposition 2) was announced in Chlodovsky’s and Kantorovich’s talks at the First All-Union Congress of Mathematicians.\footnote{The talks “On some properties of S. N. Bernstein polynomials” by Chlodovsky and “On some expansions in S. N. Bernstein polynomials” by Kantorovich were mentioned in the Proceedings of the First All-Union Congress of Mathematicians held at Khar’kov (1930), ONTI NKTP SSSR, Moscow–Leningrad, 1936, p. 22 (Russian).} However, this result was never fully published by either of them.\footnote{See also Kantorovich’s reminiscences about his talk at that congress [14].} To the authors’ knowledge, the first published proof of Proposition 2 (with a mention of the names of Chlodovsky and Kantorovich) belongs to Lorentz\footnote{The second initial ‘G’ in Lorentz’s name in [2] is fictitious; for the story of the transition from the (original) form G. P. Lorentz (used, for example, in the abstract of [13]) to G. G. Lorentz (chosen by Lorentz himself), see [15] and the editor’s remark 4 there.} [13].
Below, results taken from other papers are formulated as propositions. For readily accessible sources we refer the reader to [13], Theorem 1, and [2], Theorem 2.1.1, and also\footnote{The book [5] in fact contains two textbooks by Videnskii combined in a single edition in 2024; the numeration of chapters and theorems there is not consistent.} to [5], Theorem 15.1, and [5], Theorems 10.1 and 10.2.
Proposition 1 (Bernstein [1]). If $f$ is a continuous function on $[0,1]$, then the sequence of polynomials $B_n(f,x)$ converges uniformly to $f(x)$ on $[0,1]$ as $n \to \infty$.
Proposition 2 (Lorentz [13], Theorem 1). If the $r$th derivative $f^{(r)}(x_0)$ of a bounded function $f$ exists at a point $x_0$, then
$$
\begin{equation*}
B_{n}^{(r)}(f,x_0) \to f^{(r)}(x_0), \qquad n \to \infty.
\end{equation*}
$$
In [9], to study the approximation properties of mere Borel measurable functions, rather than regular ones, it was proposed to modify the polynomials by replacing the integrable function $f(x)$ on $[0,1]$ by
holds at each point $x \in (0,1)$ where $f(x)$ is equal to the derivative of the integral $ \displaystyle F(x)=\int_{0}^{x}f(t)\,dt$ (that is, almost everywhere).
For an exposition of this result also see Theorem 2.1.1 in [2] with a reference to Kantorovich.
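In explicit form, Kantorovich’s modification replaces the values $f(k/n)$ by integral averages, giving the polynomials
$$
\begin{equation*}
K_n(f,x)=(n+1)\sum_{k=0}^{n}\binom{n}{k}x^{k}(1-x)^{n-k} \int_{k/(n+1)}^{(k+1)/(n+1)} f(t)\,dt,
\end{equation*}
$$
so that $K_n(f,x)=\dfrac{d}{dx}B_{n+1}(F,x)$; this explains why the convergence holds precisely at the points where $F'(x)=f(x)$, that is, almost everywhere.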
It is also worth pointing out that [16] deals with the convergence of Bernstein polynomials for Riemann integrable functions at points of discontinuity; however, this problem is not what we are interested in here.
The one-dimensional analogues of the modified multivariate polynomials $\widetilde B_n$ (see (3.1) below) would be closer in spirit to Kantorovich’s modification mentioned above; however, the type of convergence in the main theorem that follows is slightly different. In addition, it is worth recalling that in the multivariate case there is no analogue of the one-dimensional Lebesgue differentiation theorem, so the result of [9] cannot be carried over automatically to the multivariate setting. We also note that in our paper we consider the problem of approximating the second-order mixed derivative, which is a different task.
For a function of two variables one of the two classical variants of Bernstein polynomials is as follows:
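Namely, the standard tensor-product form is
$$
\begin{equation*}
B_n(f,x_1,x_2)=\sum_{k_1=0}^{n}\sum_{k_2=0}^{n} f\biggl(\frac{k_1}{n},\frac{k_2}{n}\biggr)\binom{n}{k_1}\binom{n}{k_2} x_1^{k_1}(1-x_1)^{n-k_1}\, x_2^{k_2}(1-x_2)^{n-k_2},
\end{equation*}
$$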
where $0\leqslant x_1, x_2 \leqslant 1$; similarly, such polynomials can be defined on the $d$-dimensional cube $0\leqslant x_1, \dots, x_d \leqslant 1$. In what follows we use this form. (In the second classical variant the function is approximated by a polynomial on a simplex rather than on a square/cube; of course, the two variants coincide in the one-dimensional case.) In [12], given a function $f\in C^m(\mathbb{R}^d)$, all the derivatives of order $m$ of multivariate Bernstein polynomials were shown to converge uniformly on the $d$-dimensional cube to the corresponding derivatives of $f$, namely, ${B}_{n}^{(m)}(f,x) \rightrightarrows f^{(m)}(x)$ as $n \to \infty$ (see Theorem 4 in [12]); a similar result was also proved there for multi-indices $\mathbf k=(k_1,\dots,k_d)$, namely, ${B}_{n}^{(\mathbf k)}(f,x) \rightrightarrows f^{(\mathbf k)}(x)$ as $n \to \infty$, under the assumption that only the derivative corresponding to this multi-index exists and is continuous (without assuming that the same holds for all derivatives of this order). That this fact is of independent interest is demonstrated by Tolstov’s theorem (see Proposition 12 below), which asserts not convergence but the equality of the mixed derivatives taken in either order (its statement, however, requires that all derivatives of order 2 exist). The same assumption on all derivatives of some order is typical of some other versions of this result. Note, however, that this assumption is imposed neither in Proposition 4 nor in our main theorem.
Proposition 4 (see [12], Theorem 4). (1) If $m>0$ and $f \in C^{m}({\mathbb{R}}^d)$, then
$$
\begin{equation*}
B_{n}^{(m)}(f,x) \rightrightarrows f^{(m)}(x), \qquad n \to \infty, \quad x \in {{K}}^{d},
\end{equation*}
$$
where $K^{d}$ is the $d$-dimensional cube.
(2) If ${\mathbf k}=(k_1,\dots,k_d)$ is a multi-index and $f \in C^{\mathbf k}(\mathbb{R}^d)$, that is, the derivative corresponding to the multi-index ${\mathbf k}$ is defined and continuous on $\mathbb R^d$ (it is not assumed that all derivatives of order $|{\mathbf k}|$ exist), then
$$
\begin{equation*}
B_{n}^{({\mathbf k})}(f,x) \rightrightarrows f^{({\mathbf k})}(x), \qquad n \to \infty, \quad x \in {{K}}^{d},
\end{equation*}
$$
where $K^{d}$ is the $d$-dimensional cube.
In comparison with this result, in the main theorem that follows we succeeded in eliminating the assumption that the mixed second-order derivative of $f$ is continuous on $[0,1]^2:=K \, (\equiv K^2)$: under the condition $\dfrac{\partial^2 f}{\partial x_1 \,\partial x_2}\in L_1(K)$ and some others, for a modified Bernstein–Stancu polynomial depending on a certain random vector, the second-order mixed derivative of this polynomial with respect to the variables $x_1$ and $x_2$ is shown to converge to the second-order mixed derivative of the function $f$ in the $L_1$-norm in $(x_1,x_2,\omega)$ with respect to the direct product of the two-dimensional Lebesgue measure on $K$ and a probability measure $\mathsf P$ on a measure space $(\Omega, \mathcal F, \mathsf P)$ (for the precise definitions see § 3). Here $(x_1,x_2) \in K$ and $\omega \in \Omega$. It should be noted that this convergence is slightly different from the one in the above result from [12]. Proposition 4 is required below, in the proof of the main theorem in § 5; we have stated it above for ease of reference and the convenience of the reader. In what follows we consider only bivariate functions, because we are concerned here with second-order mixed derivatives, for which the consideration of a larger number of variables is not relevant.
§ 3. The main result
To formulate the main result we consider two independent random variables\footnote{Recall that $U[0,1]$ is a standard notation for a random variable uniformly distributed on $[0,1]$; $\mathsf E$ is the expectation with respect to the measure $\mathsf P$.} $\alpha_i \in U[0,1]$, $i=1,2$, on a probability space $(\Omega, \mathcal F, \mathsf P)$. We also set $\alpha= (\alpha_1, \alpha_2)$. The Bernstein–Stancu type polynomials which we study in this paper have the form
We also extend $f(x_1,x_2)$ by zero outside $K=[0,1]^2$. The polynomials are defined on the square because we are interested in the two-dimensional version of these polynomials; of course, they can similarly be defined on the $d$-dimensional cube. As noted above, similar polynomials with nonrandom parameters $\alpha_i$ were proposed by Stancu [3].
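For orientation we recall the familiar Stancu construction in one variable with fixed (nonrandom) parameters $0\leqslant\alpha\leqslant\beta$: the nodes $k/n$ of the Bernstein polynomial are shifted, giving
$$
\begin{equation*}
S_n^{(\alpha,\beta)}(f,x)=\sum_{k=0}^{n} f\biggl(\frac{k+\alpha}{n+\beta}\biggr)\binom{n}{k}x^{k}(1-x)^{n-k}.
\end{equation*}
$$
The polynomials $\widetilde B_{\alpha,n}$ in (3.1) are a bivariate variant of this idea in which the shifts $\alpha_1$ and $\alpha_2$ are random (uniform on $[0,1]$).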
Theorem. Let the Borel function $f(x_1,x_2)$ be bounded on $[0,1]^2=K$. Assume that the (classical) derivatives $\dfrac{\partial{f(x_1,x_2)}}{\partial{x_1}}$ and $ \dfrac{\partial{f(x_1,x_2)}}{\partial{x_2}} $ exist and lie in $L_1(K)$ and the second-order mixed derivative
Corollary. If, under the hypotheses of the theorem, the mixed derivative $f_{x_2x_1}$ exists and lies in $L_1(K)$, then convergence as in (3.2) holds and, for almost all $(x_1,x_2)$,
$$
\begin{equation*}
f_{x_1x_2}(x_1,x_2)=f_{x_2x_1}(x_1,x_2).
\end{equation*}
$$
This equality is verified, for example, by an appeal to any of Propositions 5–8 (see § 6 below).
§ 4. Auxiliary results
As far as we know, the following lemma is well known in the theory of integration. However, we were unable to find a precise reference, so, following the advice of colleagues, we present it with a proof, without, of course, claiming authorship.
Lemma 1. 1. Let $g\in L_1([0,1])$, and let $g$ be extended by zero outside $[0,1]$. Then
$$
\begin{equation*}
\lim_{t\to 0}\int_{0}^{1} |g(x+t)-g(x)|\,dx=0.
\end{equation*}
$$
Here the last integral tends to zero as $t\to 0$ by the Lebesgue dominated convergence theorem, since $h(x+t)$ converges pointwise to $h(x)$ and the function $h$ is bounded.
2. Again, let $\delta>0$, and let $h\in C(K)$ be such that
As in the one-dimensional case, since $h$ is (uniformly) continuous and bounded on $K$, the last integral tends to zero as $t_1,t_2\to 0$ by the Lebesgue dominated convergence theorem.
In what follows we deal with functions given by rather complicated expressions (of course, the corresponding derivatives are even more complicated), and so it is convenient to use the shorthand notation
which significantly shortens the formulae, even though the argument is still quite long.
Here the subscript $x_i$ in $\Delta_{(x_i)}$ means only that the difference is taken with respect to the variable $x_i$; note also that there may be no actual dependence on this variable, at least, if it is not indicated as an argument of the function to which the corresponding convolution operator is applied.
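To illustrate the shorthand, suppose, purely as an assumption for this illustration (consistent with the step $1/n$ of the finite differences used in the proofs), that $\Delta_{(x_i)}$ denotes the forward difference with step $1/n$ in the $i$th variable:
$$
\begin{equation*}
\Delta_{(x_1)}g(x_1,x_2)=g\biggl(x_1+\frac{1}{n},\, x_2\biggr)-g(x_1,x_2), \qquad \Delta_{(x_2)}g(x_1,x_2)=g\biggl(x_1,\, x_2+\frac{1}{n}\biggr)-g(x_1,x_2).
\end{equation*}
$$
Then the iterated difference
$$
\begin{equation*}
\Delta_{(x_2)}\Delta_{(x_1)}g(x_1,x_2)=g\biggl(x_1+\frac{1}{n},\, x_2+\frac{1}{n}\biggr)-g\biggl(x_1+\frac{1}{n},\, x_2\biggr)-g\biggl(x_1,\, x_2+\frac{1}{n}\biggr)+g(x_1,x_2)
\end{equation*}
$$
is manifestly symmetric in the order of the two differences, which is the fact used in § 5.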
Lemma 2. 1. Let $f$ be a bounded Borel function on $K$ and for all $x_1$ and $x_2$ let the classical first-order derivative $\dfrac{\partial f(x_1, x_2)}{\partial x_1} \in L_1(K)$ exist. Then
also holds for the derivative $f_{x_2}$ with respect to the other variable, provided that this derivative $\dfrac{\partial f(x_1, x_2)}{\partial x_2} $ exists and lies in $L_1(K)$.
2. For all $x_1$ and $x_2$ let the classical mixed derivative $ \dfrac{\partial^2 f(x_1, x_2)}{\partial x_1\, \partial x_2} $ exist and lie in $L_1(K)$ (in particular, the function $f_{x_2}(x_1, x_2)$ is bounded for each $x_2$). Then
Similarly, if for all $x_1$ and $ x_2$ the classical mixed derivative $\dfrac{\partial^2 f(x_1, x_2)}{\partial x_2\, \partial x_1} $ exists and lies in $L_1(K)$ (and, in particular, the function $f_{x_1}(x_1, x_2)$ is bounded for each $x_1$), then, as $n \to \infty$,
Proof. We will repeatedly make use of auxiliary functions $g\in C(K)$ to approximate the function $f$, its derivatives, and also $F^\varepsilon$ for some integral differences. In general, these functions will be different for different $f$, $f_{x_1}$, $f_{x_2}$ and $f_{x_1x_2}$ and different differences.
1. We verify (4.1). To do this we approximate $f$ by smooth functions $f^\varepsilon \in C^\infty(\mathbb R^2)$. For definiteness we assume that $f^{\varepsilon}$ is obtained from $f$ by convolution in each of the two variables with the same nonnegative kernel $\varphi \in C_0^{\infty}(\mathbb R)$ (the subscript $0$ means that the support is compact), where $\displaystyle \int_{-\infty}^{\infty}\varphi(x)\,dx=1$ and $ \varphi^{\varepsilon}(x)=\dfrac{1}{\varepsilon}\varphi\biggl(\dfrac{x}{\varepsilon}\biggr)$. It is known that
provided that all integrals here are absolutely convergent (which is the case by the hypotheses of the lemma). This proves the claim.
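The known identity referred to is the commutation of differentiation with convolution: under this absolute convergence,
$$
\begin{equation*}
\frac{\partial}{\partial x_1}\bigl(f\ast\varphi^{\varepsilon}\bigr)(x_1,x_2) =\biggl(\frac{\partial f}{\partial x_1}\ast\varphi^{\varepsilon}\biggr)(x_1,x_2),
\end{equation*}
$$
and similarly for $x_2$; it is obtained by differentiating under the integral sign.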
We next employ the auxiliary function $f^\varepsilon$ to estimate the upper limit as $n\to\infty$ of the (nonnegative) integral on the left in (4.1). This integral depends on $n$, but not on $\varepsilon$. We claim that this limit is majorized by some (also nonnegative) expression of $\varepsilon$, which, in turn, tends to zero as $\varepsilon \to 0$. In other words, in estimating auxiliary expressions depending on $n$ and $\varepsilon$ we will be concerned with repeated limits of the form $\lim_{\varepsilon\to 0} \limsup_{n\to\infty}$. (If the integrands were not nonnegative, then, of course, we would also have to deal with limits of the form $\lim_{\varepsilon\to 0} \liminf_{n\to\infty}$; however, this is not the case here.) We have
Here, since $g$ is uniformly continuous and $\varphi$ has a compact support, the integrand $g^{\varepsilon}(x_1, x_2)-g(x_1,x_2)$ on the right-hand side tends to zero as $\varepsilon\to 0$ for all ${(x_1,x_2)\in K}$. Now the required convergence (4.9) follows, since $\delta>0$ is arbitrary and the integral $J^\varepsilon_1$ is independent of $\delta$.
Consider the second integral. For fixed $\varepsilon$, as $ n \to \infty$, we have
where we have used the definition of the derivative of the continuous function $f^{\varepsilon}$ and employed Lebesgue’s dominated convergence theorem; in addition, the finite differences $n \biggl(f^{\varepsilon}\biggl(x_1+\dfrac{1}{n}, x_2\biggr)-f^{\varepsilon}(x_1, x_2) \biggr)$ are bounded by the mean value theorem or by the fundamental theorem of calculus.
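In detail, the fundamental theorem of calculus gives
$$
\begin{equation*}
n \biggl(f^{\varepsilon}\biggl(x_1+\frac{1}{n}, x_2\biggr)-f^{\varepsilon}(x_1, x_2)\biggr) =n\int_{x_1}^{x_1+1/n} f^{\varepsilon}_{x_1}(u, x_2)\,du,
\end{equation*}
$$
so these difference quotients are bounded uniformly in $n$ by $\sup|f^{\varepsilon}_{x_1}|$, which is finite because $f^{\varepsilon}$ is smooth and compactly supported.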
Consider $J_{3}^{n,\varepsilon}$. To shorten the argument, we first transform the integrand in $J_{3}^{n,\varepsilon}$. We have
Convergence (4.2), which is quite similar, follows by interchanging $x_1$ and $x_2$.
2. Let us prove (4.3). Proceeding as in the similar claim for the first derivatives, we will show that, under the hypotheses of this assertion of the lemma, the second mixed derivative of $f$ also commutes with convolution with the function $\varphi^\varepsilon$,
Next we use the fact that the second-order mixed derivative can be interchanged with the double integral in the definition of the convolution, provided that all required integrals are absolutely convergent. Indeed, by (4.8) above we have
(all integrals here are absolutely convergent by the hypotheses of the lemma). Then for the inner integral the variable $x_1$ is a parameter, and so, since the right-hand side is absolutely convergent,
Let us prove that each of these integrals converges to zero in the sense of the limits $\lim_{\varepsilon\to 0} \lim_{n\to\infty}(\dots)$. First we claim that
Here the integral on the right in the last inequality tends to zero as $\varepsilon \to 0$ because $g$ is uniformly continuous. The integral $J_{4}^{\varepsilon}$ is independent of $\delta$, and so we have the required convergence (4.12) to zero.
Consider the term $J_{5}^{n,\varepsilon}$. By the definition of the derivative and Lebesgue’s dominated convergence theorem, as $n \to \infty$, we have
for the bounded smooth function $f^{\varepsilon}$; here the finite differences
$$
\begin{equation*}
n \biggl(f^{\varepsilon}_{x_2}\biggl(x_1+\dfrac{1}{n}, x_2\biggr)-f^{\varepsilon}_{x_2}(x_1, x_2) \biggr)
\end{equation*}
$$
are bounded by the mean value theorem or by the fundamental theorem of calculus, similarly to the corresponding result for $J_{2}^{n,\varepsilon}$. Hence
Consider the integral $J_{6}^{n,\varepsilon}$. Using the fundamental theorem of calculus and since the differentiation operator (with respect to $x_1$) and convolution commute, we transform the integrand in $J_{6}^{n,\varepsilon}$ as follows:
The expression on the right appears naturally by repeated differentiation of the polynomials $\widetilde B_{\alpha,n}$. Note that, by the binomial formula, for all $x_1$ and $x_2$,
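The binomial identity in question is the partition of unity for the Bernstein basis,
$$
\begin{equation*}
\sum_{k=0}^{n}\binom{n}{k}x^{k}(1-x)^{n-k}=\bigl(x+(1-x)\bigr)^{n}=1,
\end{equation*}
$$
applied in each of the variables $x_1$ and $x_2$ separately.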
Now formula (5.2) follows from the above representation for the mixed derivative $\dfrac{\partial^2 \widetilde {B}_{\alpha,n}(f,x_1,x_2)}{\partial x_1 \,\partial x_2}$ and from Lemma 2.
Let us estimate the (nonnegative) integral $I^n$ on the right in (5.2) from above. To do this we use the smoothed function $f^\varepsilon=f*\overline \varphi^\varepsilon$ from the proof of Lemma 2. We have
All three integrals $I_k^{n,\varepsilon}$, $1\leqslant k\leqslant 3$, are nonnegative, and the second integral $I_2^{n,\varepsilon}$ does not contain the random variables $\alpha_1$ and $\alpha_2$. Hence the expectation sign before the double integral in $I_2^{n,\varepsilon}$ can be dropped. We claim that
Now $I_{21b}^{n,\varepsilon} \to 0$ as $n\to\infty$ by (4.3) in Lemma 2. A similar analysis shows that $\lim_{n\to\infty} I_{21a}^{n,\varepsilon}=0$, where we use formula (4.7) from Lemma 2.
Consider the integral $I_{23}^{n}$. Let us estimate this expression from above by the sum of two integrals. We have $\Delta_{(x_2)}\Delta_{(x_1)}f(x_1, x_2)=\Delta_{(x_1)}\Delta_{(x_2)}f(x_1, x_2)$, and so
Let us transform $I_3^{n, \varepsilon}$ by using the well-known combinatorial formulae expressing the gamma function of an integer argument in terms of factorials. As a result, we obtain an expression (for estimation of $I_3^{n, \varepsilon}$) that is analogous to $I_2^{n, \varepsilon}$. We have
We consider separately each term of this double sum. Writing the expectation as an integral (by the properties of a uniform distribution on $[0,1]$) and changing the variables by the formula $a_i/n=a'_i$, $i=1,2$, we find that for $0\leqslant k_1,k_2<n$ we have the equalities
(for the evaluation of the expectation the random variables $\alpha_1$ and $\alpha_2$ are replaced by the variables of integration $a_1$ and $a_2$, respectively).
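Expectations of this kind are computed from the classical identities $\Gamma(m+1)=m!$ and the Euler beta integral:
$$
\begin{equation*}
\int_{0}^{1} a^{k}(1-a)^{n-k}\,da =\mathrm{B}(k+1,\,n-k+1) =\frac{k!\,(n-k)!}{(n+1)!} =\frac{1}{(n+1)\binom{n}{k}}.
\end{equation*}
$$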
for an arbitrarily small $\varepsilon>0$, which was fixed above for the integrals $I_2^{n, \varepsilon}$ and $I_3^{n, \varepsilon}$, follows from Proposition 4 with $d=2$ and the multi-index $\mathbf k=(1,1)$, since $f^\varepsilon$ is smooth.
We present some theorems on mixed derivatives, which are formally different but close in spirit to the corollary in § 3. We also state a related result, due to Stepanov, on the first-order derivative.
Proposition 5 (see [17], Theorem 8.2.3). Let the function $f\colon G \to \mathbb R$ have the partial derivatives $f_{xy}$ and $f_{yx}$ in a domain $G$. Then these derivatives are equal at each point $(x_0,y_0)\in G$ where they are continuous:
$$
\begin{equation*}
f_{xy}(x_0,y_0)=f_{yx}(x_0,y_0).
\end{equation*}
$$
A useful extension will be given in Proposition 10, which is Exercise 8.4.2b in [17]. In that result it is not assumed a priori that both mixed partial derivatives exist at a point: only one of them is assumed to exist, and the existence of the other follows from the assertion of the proposition. In the following classical theorem it is, in effect, assumed that the second differential exists at a point.
Proposition 6 (Young [18]). Let $f\colon D \subset \mathbb{R}^{2} \to \mathbb{R}$ and $(x^0_1, x^0_2) \in D$. Assume that the first-order partial derivatives $ \partial f/\partial x_1$ and $\partial f/\partial x_2$ of $f$ exist in some neighbourhood of $(x^0_1, x^0_2)$ and are differentiable at $(x^0_1, x^0_2)$. Then the second-order mixed derivatives of $f$ at $(x^0_1, x^0_2)$ are equal:
$$
\begin{equation*}
\frac{\partial^2 f}{\partial x_1\,\partial x_2}(x^0_1, x^0_2)=\frac{\partial^2 f}{\partial x_2\,\partial x_1}(x^0_1, x^0_2).
\end{equation*}
$$
Young’s original (very long) proof of 1908 is quite obscure, as sometimes happens with pioneering results. For a modern, short and elegant argument see § 8.12.3 of [19]. For completeness we state this result.
Proposition 7 (see [19], § 8.12.3). Let $G$ be an open subset of $\mathbb R^n$. If a map $f$ from $G$ to a Banach space $F$ is twice differentiable at $x_0$, then the partial derivatives $D_i f$ are differentiable at $x_0$ and
$$
\begin{equation*}
D_j(D_i f)(x_0)=D_i(D_j f)(x_0) \quad \text{for all } i,j.
\end{equation*}
$$
(2) Let $f \in C^1(U, \mathbb R^2)$, and let the mixed derivative $f_{xy} \in C(U, \mathbb R^2)$ exist. Then the derivative $f_{yx}$ also exists and
$$
\begin{equation*}
f_{yx}(x,y)=f_{xy} (x,y) \quad \textit{for all } (x,y) \in U.
\end{equation*}
$$
Proposition 9 (see [21] and [22], Theorem 3.1.9). Let $E\subset G \subset \mathbb{R}^{m}$ be a measurable set, where $G$ is open, let $f\colon G \to \mathbb{R}^{n}$ be a locally bounded measurable function, and let $\limsup_{x\to a}\dfrac{|f(x)-f(a)|}{|x-a|} < \infty$ for almost all points $a\in E$. Then $f$ is differentiable at $L_m$-almost all points of $E$, where $L_m$ is the $m$-dimensional Lebesgue measure.
Of course, this proposition also applies to the second derivatives. However, the problem of the equality of mixed derivatives is not considered in this statement.
Proposition 10 (see [17], Exercise 8.4.2b). Let the function $f$ have partial derivatives $f_x$ and $f_y$ in some neighbourhood $U$ of a point $(x_0,y_0)$. Assume that the mixed derivative $f_{xy}$ (or $f_{yx}$) exists in $U$ and is continuous at $(x_0,y_0)$. Then the derivative $f_{yx}$ (respectively, $f_{xy}$) also exists at this point and
$$
\begin{equation*}
f_{yx}(x_0,y_0)=f_{xy}(x_0,y_0).
\end{equation*}
$$
Proposition 11 (Tolstov [23], Theorem 7). Let $f(x_1, x_2)$ be a linearly continuous measurable function (that is, continuous in each variable) in a domain $G$, and let its partial derivatives $\dfrac{\partial f(x_1,x_2)}{\partial x_1}$ and $\dfrac{\partial f(x_1,x_2)}{\partial x_2}$ be finite on a set $E$ of positive two-dimensional measure. Then the mixed derivatives $\dfrac{\partial^2 f(x_1,x_2)}{\partial x_1 \,\partial x_2}$ and $\dfrac{\partial^2 f(x_1,x_2)}{\partial x_2 \,\partial x_1}$ exist and agree almost everywhere on $E$.
Proposition 12 (Tolstov [23], Theorem 8). Let $f(x_1, x_2)$ be a linearly continuous function, and let the derivatives $\dfrac{\partial^2 f(x_1, x_2)}{\partial x^2_1}$, $\dfrac{\partial^2 f(x_1, x_2)}{\partial x_1 \,\partial x_2}$, $\dfrac{\partial^2 f(x_1, x_2)}{\partial x_2 \,\partial x_1}$ and $\dfrac{\partial^2 f(x_1, x_2)}{\partial x^2_2}$ exist in a domain $G$. Then
$$
\begin{equation*}
\frac{\partial^2 f(x_1, x_2)}{\partial x_1 \,\partial x_2}=\frac{\partial^2 f(x_1, x_2)}{\partial x_2 \,\partial x_1} \quad \text{in } G.
\end{equation*}
$$
Now the problem of the equality of two mixed derivatives can be solved quite easily (modulo Schwarz’s or Young’s theorems; see Propositions 5 and 6 above) in the case of Sobolev derivatives. For the convenience of the reader, we recall that a function $g$ is the second-order mixed Sobolev derivative of $f$ with respect to the variables $x$ and $y$ in $L_p(G)$ if there exists a sequence of smooth functions $f^n \in C^\infty(G)$ such that
$$
\begin{equation*}
\|f^n-f\|_{L_p(G)} \to 0 \quad\text{and}\quad \|f^n_{xy}-g\|_{L_p(G)} \to 0, \qquad n \to \infty.
\end{equation*}
$$
The following elementary result is a consequence of the fact that $f_{xy}^\varepsilon = f_{yx}^\varepsilon$ everywhere if $f^\varepsilon$ is a smooth function.
Proposition 13 (on mixed Sobolev derivatives). Let the $L_p(G)$-function $f$, $p>0$, have a mixed Sobolev derivative $f_{xy}\in L_p(G)$. Then the Sobolev derivative $f_{yx}\in L_p(G)$ also exists and the functions $f_{yx}$ and $f_{xy}$ are equal almost everywhere on $G$, that is, $\|f_{yx}-f_{xy}\|_{L_p(G)}=0$.
Acknowledgement
The authors are grateful to an anonymous referee for several helpful comments and suggestions.
Bibliography
1.
S. Bernstein, “Démonstration du théorème de Weierstraß, fondée sur le calcul des probabilités”, Soobshch. Kharkov. Mat. Obshch. Ser. 2, 13:1 (1912), 1–2
2.
G. G. Lorentz, Bernstein polynomials, 2nd ed., Chelsea Publishing Co., New York, 1997, x+134 pp.
3.
D. D. Stancu, “Some Bernstein polynomials in two variables and their applications”, Soviet Math. Dokl., 1 (1960), 1025–1028
4.
E. V. Voronovskaya, “Determining the asymptotic behaviour of the approximation of functions by S. N. Bernstein polynomials”, Dokl. Akad. Nauk SSSR, 1932, no. 4, 79–85 (Russian)
5.
V. S. Videnskii, Linear positive operators of finite rank. Bernstein polynomials, Lan', Moscow, 2024, 144 pp. (Russian)
6.
Yu. S. Polovinkina, “Generalized Bernstein polynomials”, Current progress in science and education: mathematics and informatics (Arkhangelsk 2010), KIRA, Arkhangelsk, 2010, 160–161 (Russian)
7.
G. H. Kirov, “A generalization of the Bernstein polynomials”, Math. Balkanica (N.S.), 6:2 (1992), 147–153
8.
Young Chel Kwun, A.-M. Acu, A. Rafiq, V. A. Radu, F. Ali and Shin Min Kang, “Bernstein–Stancu type operators which preserve polynomials”, J. Comput. Anal. Appl., 23:4 (2017), 758–770
9.
L. V. Kantorovich, “Some expansions in polynomials in S. N. Bernstein's form. I, II”, Dokl. Akad. Nauk SSSR (A), 1930, no. 20, 21, 563–566, 595–600 (Russian)
10.
J. Bustamante, Bernstein operators and their properties, Birkhäuser/Springer, Cham, 2017, xii+420 pp.
11.
I. V. Tikhonov, V. B. Sherstyukov and M. A. Petrosova, “Bernstein polynomials: old and new”, Studies on mathematical analysis, Matem. Forum, 8, Pt. 1, Southern Mathematical Institute of the Vladikavkaz Scientific Center of the Russian Academy of Sciences, Vladikavkaz, 2014, 126–175 (Russian)
12.
A. Yu. Veretennikov and E. V. Veretennikova, “On partial derivatives of multivariate Bernstein polynomials”, Siberian Adv. Math., 26:4 (2016), 294–305
13.
G. Lorentz, “Zur Theorie der Polynome von S. Bernstein”, Mat. Sb., 2(44):3 (1937), 543–556
14.
L. V. Kantorovich, “My journey in science (proposed report to the Moscow Mathematical Society)”, Russian Math. Surveys, 42:2 (1987), 233–270
15.
C. de Boor and P. Nevai, “In memoriam: George G. Lorentz (1910–2006)”, J. Approx. Theory, 162:2 (2010), 465–491
16.
I. Chlodovsky, “Sur la représentation des fonctions discontinues par les polynomes de M. S. Bernstein”, Fund. Math., 13 (1929), 62–72
17.
V. A. Zorich, Mathematical analysis, v. I, Nauka, Moscow, 1981, 544 pp.; English transl. of 1st ed., v. I, Universitext, Springer-Verlag, Berlin, 2004, xviii+574 pp.
18.
W. H. Young, “On the conditions for the reversibility of the order of partial differentiation”, Proc. Roy. Soc. Edinburgh, 29 (1909), 136–164
19.
J. Dieudonné, Foundations of modern analysis, Pure Appl. Math., 10, Academic Press, New York–London, 1960, xiv+361 pp.
20.
A. Aksoy and M. Martelli, “Mixed partial derivatives and Fubini's theorem”, College Math. J., 33:2 (2002), 126–130
21.
W. Stepanoff (V. Stepanov), “Sur les conditions de l'existence de la différentielle totale”, Mat. Sb., 32:3 (1925), 511–527
22.
H. Federer, Geometric measure theory, Grundlehren Math. Wiss., 153, Springer-Verlag, New York, 1969, xiv+676 pp.
23.
G. P. Tolstov, “On partial derivatives”, Izv. Akad. Nauk SSSR Ser. Mat., 13:5 (1949), 425–446 (Russian)
Citation:
A. Yu. Veretennikov, N. M. Mazutskiy, “On partial derivatives of Bernstein–Stancu polynomials for multivariate functions”, Sb. Math., 216:7 (2025), 877–901