Sbornik: Mathematics
Sbornik: Mathematics, 2025, Volume 216, Issue 1, Pages 1–24
DOI: https://doi.org/10.4213/sm10060e
 


Solvability of nonlinear degenerate equations and estimates for inverse functions

A. V. Arutyunov, S. E. Zhukovskiy

V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
Abstract: For a continuous map $F$ from a finite-dimensional real space to another such space the question of the solvability of the nonlinear equation of the form $F(x)=y$ is investigated for $y$ close to a fixed value $F(\overline x)$. To do this, the concept of $\lambda$-truncation of the map $F$ in a neighbourhood of the point $\overline x$ is introduced and examined. A theorem on the uniqueness of a $\lambda$-truncation is proved. The regularity condition is introduced for $\lambda$-truncations; it is shown to be sufficient for the solvability of the equation in question. A priori estimates for the solution are obtained.
Bibliography: 16 titles.
Keywords: nonlinear equation with parameter, abnormal point, $\lambda$-truncation, directional regularity.
This research was supported by the Russian Science Foundation under grant no. 24-21-00012, https://rscf.ru/en/project/24-21-00012/.
Received: 07.01.2024 and 14.10.2024
Published: 21.03.2025
Document Type: Article
MSC: 26B10
Language: English
Original paper language: Russian

§ 1. Introduction

Given a continuous map $F\colon \mathbb R^n \to\mathbb R^m$ and two points $\overline x\in \mathbb R^n $ and $\overline y:=F(\overline x)$, this paper studies the solvability of the nonlinear equation $F(x)=y$ in a neighbourhood of the fixed solution $\overline x$. In this equation $x$ is the unknown and $y$ is a parameter. We seek a solution $x(y)$ and an a priori polynomial estimate for it.

If $F$ is sufficiently smooth and the reference point $\overline x$ is normal, that is, $F'(\overline x)\mathbb{R}^n=\mathbb{R}^m$, then the classical inverse function theorem provides the existence of a solution (for instance, see [1] and [2]). Various generalizations of this result ensure the existence of a solution in a neighbourhood of a normal point also in the case of $F$ acting from one infinite-dimensional space to another such space (for instance, see [3]–[5]).

However, if the assumptions of smoothness and normality fail, then things become much more complicated. If $F$ is not differentiable at $\overline x$, then to find sufficient conditions for the existence of a solution of the equation in question one must appeal to various generalizations of the concept of derivative (for instance, see [6]–[8]). If $F$ is sufficiently smooth but $\overline x$ is an abnormal point, that is, $F'(\overline x)\mathbb{R}^n\ne \mathbb{R}^m$, then some existence theorems also hold in this case. The corresponding results are stated in terms of the first and second derivatives at the abnormal point. More precisely, if $F'(\overline x)$ and $F''(\overline x)$ satisfy certain nondegeneracy conditions, then for all $y$ in some neighbourhood of $\overline y$ a solution $x(y)$ exists; furthermore, it is continuous under quite general natural assumptions (for instance, see [9]–[12]).

In this paper we obtain conditions for the existence of solutions $x(y)$ in terms of $\lambda$-truncations. They use conditions of order higher than two and can also be analyzed in the case when the above nondegeneracy conditions in terms of the first two derivatives fail. Note that in this case the methods used in the papers cited above do not work.

This paper is a natural development of [13] and [14]. We begin by introducing some concepts we will use.

§ 2. Preliminary information and examples

Let a continuous map $F=(F_1,\dots,F_m)\colon \mathbb R^n \to\mathbb R^m$ and two points $\overline x\in \mathbb R^n $ and $\overline y:=F(\overline x)$ be given. We recall some results from [13].

We denote coordinates of a vector $x \in \mathbb R^n$ by subscripts, that is, we set $x=(x_1,\dots,x_n)$, and we denote the scalar product in $\mathbb{R}^n$ and $\mathbb{R}^m$ by $\langle\,\cdot\,{,}\,\cdot\,\rangle$.

For $r\geqslant 0$ and $x\in \mathbb R^n$ let $B(x,r)$ denote the closed ball of radius $r$ with centre $x$. We also use this notation in $\mathbb R^m$. By a neighbourhood of a point $x$ we mean any open set containing it. In what follows $\mathrm{const}$ means a positive constant whose particular form does not matter for us.

We let $D$ denote the set of nontrivial $n$-dimensional vectors $d$ all of whose coordinates $d_k$ are nonnegative, and let $\widehat D \subset D$ be the subset of nontrivial integer vectors $z=(z_1,\dots,z_n)\in D$.

For $x \in \mathbb R^n$, $z \in \widehat D$ and $d \in D$ set

$$ \begin{equation*} x^z=\prod_{k=1}^n x_k^{z_k}\quad\text{and} \quad |x|^d=\prod_{k=1}^n |x_k|^{d_k}. \end{equation*} \notag $$
Here we assume that if $l=0$, then $x_k ^l=|x_k|^l=1$. The vector $z=(z_1,\dots,z_n)$ is the multi-index of the monomial $x^z$, and $z_1+\dots+z_n$ is the degree of this monomial.
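This notation is easy to illustrate in code. Below is a minimal Python sketch (the names `monomial` and `abs_monomial` are ours, introduced only for this illustration) computing $x^z$ and $|x|^d$ with the convention that a zero exponent always yields the factor $1$:

```python
import math

def monomial(x, z):
    # x^z = prod_k x_k^{z_k}, with the convention x_k^0 = 1 (so 0^0 = 1)
    return math.prod(xk ** zk if zk != 0 else 1.0 for xk, zk in zip(x, z))

def abs_monomial(x, d):
    # |x|^d = prod_k |x_k|^{d_k}, same convention for zero exponents
    return math.prod(abs(xk) ** dk if dk != 0 else 1.0 for xk, dk in zip(x, d))

print(monomial((2.0, -3.0), (2, 1)))      # prints -12.0
print(abs_monomial((2.0, -3.0), (2, 0)))  # prints 4.0
```

Here the multi-index of `monomial((2.0, -3.0), (2, 1))` is $z=(2,1)$ and the degree of the monomial is $z_1+z_2=3$.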

Given positive integers $j_i$, $i = 1,\dots,m$, and a vector $\lambda = (\lambda_1,\dots,\lambda_n) \geqslant 0$ (where the inequality sign means that all the coordinates of the vector are nonnegative), assume that for each $i=1,\dots,m$ we have a nonempty finite set of $n$-dimensional vectors $s_{i,j} \in \widehat D$, $j=1,\dots,j_i$, which are pairwise distinct for fixed $i$, that is, if $l_1\neq l_2$, then $s_{i,l_1}\neq s_{i,l_2}$. We also assume that there exist $\alpha_i >0$ such that

$$ \begin{equation} \langle \lambda, s_{i,j} \rangle=\alpha_i \quad \forall \, j=1,\dots,j_i, \quad i=1,\dots,m. \end{equation} \tag{2.1} $$
Set $S_i=\{s_{i,1},\dots,s_{i,j_i}\}$, and let $S=\{S_1,\dots,S_m\}$ denote the family of sets $S_i$, $i=1,\dots,m$.

Note that (2.1) and the inequalities $\alpha_i >0$ mean that $\lambda\neq 0$.

Fix some real numbers $p_{i,j}$ not all of which are equal to zero. We denote their set by $\mathcal P$. For $S$ and $\mathcal P$ we consider the polynomial map $P=(P_1,\dots,P_m)=P^{S,\mathcal P}\colon \mathbb R^n \to\mathbb R^m$ such that its $i$th coordinate $P_i(x)$ has the following form:

$$ \begin{equation*} P_i(x)=\sum_{j=1}^{j_i} p_{i,j} x^{s_{i,j}}, \qquad x\in \mathbb{R}^n. \end{equation*} \notag $$

Definition 2.1. A map $P = P^{S,\mathcal P}$ is called a $\lambda$-truncation of a map $F$ in a neighbourhood of a point $\overline x$ if for each $i=1,\dots,m$ there exists a finite set $D_i \subset D$ such that

$$ \begin{equation} \langle \lambda, d \rangle > \alpha_i \quad \forall d \,\in D_i, \end{equation} \tag{2.2} $$
and $F$ has the following representation for $x\in \mathbb{R}^n$:
$$ \begin{equation} F(x)=F(\overline x)+P(x-\overline x)+\Delta(x-\overline x), \end{equation} \tag{2.3} $$
where for each coordinate $\Delta_i$ of the map $\Delta=(\Delta_1,\dots, \Delta_m)$ there exists a neighbourhood of $\overline x$ in which
$$ \begin{equation} |\Delta_i(x-\overline x)| \leqslant \mathrm{const} \sum_{d \in D_i} |x-\overline x|^d. \end{equation} \tag{2.4} $$

The assumptions that the set $S_i$ is nonempty, the vectors $s_{i,j}$ for fixed $i$ are pairwise distinct and not all coefficients $p_{i,j}$ are zero show that each $\lambda$-truncation $P$ is a nontrivial polynomial map. It is also obvious that $P(0)=0$.

Fix a vector $h \in \mathbb R^n$.

Definition 2.2. We say that a $\lambda$-truncation $P=P^{S,\mathcal P}$ is regular in a direction $h \in \mathbb R^n$ if

$$ \begin{equation} P(h)=0\quad\text{and} \quad \operatorname{im} P'(h)=\mathbb R^m, \end{equation} \tag{2.5} $$
and, in addition, for $k=1,\dots,n$, if $\lambda_k=0$, then $h_k=0$.

If it turns out that $\lambda > 0$, then the above definition becomes slightly simpler: to verify regularity in the direction $h$ we need only check relations (2.5).

The following result, proved in [13], plays the role of an existence theorem.

Theorem 2.1. Let $\lambda>0$, and let $F$ be a continuous map with $\lambda$-truncation $P^{S,\mathcal P}$ in a neighbourhood of $\overline x$ that is regular in some direction $h \in \mathbb R^n$.

Then there exists a neighbourhood $O$ of $\overline y$ such that for each $y \in O$ the equation $F(x)=y$ has a solution $x=x(y)$ such that

$$ \begin{equation*} |x_k(y)-\overline x_k| \leqslant \mathrm{const} |y- \overline y|^{\lambda_k/\alpha}, \qquad k=1,\dots,n. \end{equation*} \notag $$
Here
$$ \begin{equation*} \alpha :=\max\{\alpha_1,\dots,\alpha_m\}>0, \end{equation*} \notag $$
the positive quantities $\alpha_i $ are as in (2.1), and $|\cdot |$ denotes the Euclidean norm of a finite-dimensional vector.

Examples of applications of the above theorem were presented in [13]. We give two new examples and then discuss them.

Example 2.1. Let $n=2$, $m=1$, $\overline x=0 \in \mathbb{R}^2$, and let

$$ \begin{equation*} F\colon \mathbb{R}^2 \to \mathbb{R}, \qquad F(x)=x_1^2+2x_1x_2^2, \quad x=(x_1,x_2). \end{equation*} \notag $$
For this $F$ the equation $F(x) = y$ takes the form $x_1^2 + 2x_1x_2^2 = y$. Applying Theorem 2.1 to the map $F$ at the origin for $\lambda=(2,1)$, $P=F$, $h=(-2,1)$ and $\alpha_1=\alpha=4$, we see that there exist $c>0$, a neighbourhood of the origin ${O\subset \mathbb{R}}$ and a function $x(y)=(x_1(y),x_2(y))$ on $O$ such that $x(y)$ solves the equation $x_1^2+2x_1x_2^2=y$ for all $y\in O$ and, moreover,
$$ \begin{equation*} |x_1(y)|\leqslant c |y|^{1/2}, \quad |x_2(y)|\leqslant c |y|^{1/4} \quad \forall \, y\in O. \end{equation*} \notag $$

This result can also be obtained directly, without Theorem 2.1. Using direct calculations we can verify the following. For $y\geqslant 0$ the function $x(y)=(\sqrt{y},0)$ is a solution satisfying the above estimates. For $y < 0$ such a solution has the form $x(y)=(x_1(y),x_2(y))$, where $x_2(y)=|y|^{1/4}$ and $x_1(y)$ solves the quadratic equation $x_1^2+2x_1x_2^2(y)-y=0$.
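This direct computation can be checked numerically. In the sketch below (in Python; the name `solve_example21` is ours) we use the observation that for $y<0$ the discriminant of the quadratic equation vanishes, since $x_2^4(y)=|y|$, so that $x_1(y)=-x_2^2(y)=-|y|^{1/2}$:

```python
import math

def solve_example21(y):
    # explicit solution of x1^2 + 2*x1*x2^2 = y near the origin
    if y >= 0:
        return (math.sqrt(y), 0.0)
    x2 = abs(y) ** 0.25   # then x2**4 = |y| and the discriminant x2**4 + y vanishes
    return (-x2 * x2, x2)  # double root of x1^2 + 2*x1*x2^2 - y = 0

for y in (0.01, -0.01, 1e-6, -1e-6):
    x1, x2 = solve_example21(y)
    assert abs(x1**2 + 2 * x1 * x2**2 - y) < 1e-12   # solves the equation
    # the a priori estimates hold here even with c = 1
    assert abs(x1) <= abs(y) ** 0.5 + 1e-12
    assert abs(x2) <= abs(y) ** 0.25 + 1e-12
print("ok")
```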

The next example is more complicated. It corresponds to a map $F$ from $\mathbb{R}^3$ to $\mathbb{R}^2$.

Example 2.2. Let $n=3$, $m=2$, $\overline x=0\in \mathbb{R}^3$, and let

$$ \begin{equation*} F\colon \mathbb{R}^3\to \mathbb{R}^2, \qquad F(x)=(x_1^2-x_2^2,x_1^2 x_3), \quad x=(x_1,x_2,x_3). \end{equation*} \notag $$
Then the equation $F(x)=y$ takes the form
$$ \begin{equation} \begin{cases} x_1^2-x_2^2=y_1, \\ x_1^2 x_3=y_2. \end{cases} \end{equation} \tag{2.6} $$
Here $x=(x_1,x_2,x_3)\in \mathbb{R}^3$ is unknown and $y=(y_1,y_2) \in \mathbb{R}^2$ is a parameter. We solve this equation for $y$ in a neighbourhood of the origin.

We apply Theorem 2.1 to $F$ at the origin. For $\lambda=(1,1,a)$, where $a>0$ is arbitrary, the map $P:=F$ is a $\lambda$-truncation of $F$ in a neighbourhood of the origin. Furthermore, $\alpha_1=2$, $\alpha_2=2+a$ and $\alpha=\alpha_2=2+a$. As $h$ we can take $h=(1,1,0)$. Then the assumptions of Theorem 2.1 are fulfilled, and thus there exist $c>0$, a neighbourhood of the origin $O\subset \mathbb{R}^2$ and a function $x(y)=(x_1(y),x_2(y),x_3(y))$ on $O$ such that $ x(y)$ solves system (2.6) for each $y\in O$ and, moreover,

$$ \begin{equation} |x_1(y)|\leqslant c |y|^{1/(2+a)}, \quad |x_2(y)|\leqslant c |y|^{1/(2+a)}\quad\text{and} \quad |x_3(y)|\leqslant c |y|^{a/(2+a)} \quad \forall \, y\in O. \end{equation} \tag{2.7} $$

We can obtain the same result without using Theorem 2.1, by indicating explicitly the neighbourhood $O$, the number $c$ and the solution $x(y)$. Set $c:=2$ and $O:=\{y\in \mathbb{R}^2\colon |y|<(1/2)^{(2+a)/a}\}$. Then $|y|<1$, and we have

$$ \begin{equation} |y|<\frac{1}{2} \cdot |y|^{2/(2+a)} \quad \forall\, y\in O. \end{equation} \tag{2.8} $$
In fact, since $a>0$, it follows from the definition of $O$ that $|y|<(1/2)^{(2+a)/a}<1$ for $y\in O$. Hence $|y|^{a/(2+a)}<1/2$, which is equivalent to the inequality ${|y|^{1-2/(2+a)}<1/2}$, which in turn implies (2.8).

Set $x(0):=0$. Fix $y\in O$, $y\neq 0$, and let $x_2(y):=|y|^{1/(2+a)}$. Then

$$ \begin{equation} y_1+x_2^2(y)>0 \end{equation} \tag{2.9} $$
because $y_1+x_2^2(y)=y_1+|y|^{2/(2+a)}\geqslant -|y|+|y|^{2/(2+a)}>0$, where the strict inequality holds because $|y|<1$ and $a>0$. Therefore, by (2.9) the functions
$$ \begin{equation*} x_1(y):=\sqrt{y_1+x_2^2(y)}\quad\text{and} \quad x_3(y):=\frac{y_2}{y_1+x_2^2(y)} \end{equation*} \notag $$
are well defined.

Now we show that $x(y)=(x_1(y),x_2(y),x_3(y))$ is the required solution of (2.6) for each $y\in O$. This is obvious for $y=0$. Let $y\in O$ and $y\neq 0$. Substituting $x(y)$ into (2.6) we immediately see that $x(y)$ solves the system. Moreover,

$$ \begin{equation*} \begin{gathered} \, |x_1(y)|^2 =y_1+|y|^{2/(2+a)}\leqslant |y|+|y|^{2/(2+a)} \leqslant 2 |y|^{2/(2+a)}, \\ |x_2(y)| =|y|^{1/(2+a)}\leqslant 2 |y|^{1/(2+a)} \end{gathered} \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, |x_3(y)| &=\biggl|\frac{y_2}{y_1+x_2^2(y)}\biggr| \stackrel{(2.9)}{\leqslant} \frac{|y|}{y_1+x_2^2(y)} \stackrel{(2.8)}{\leqslant} \frac{|y|}{-|y|+|y|^{2/(2+a)}} \\ &\!\!\stackrel{(2.8)}{\leqslant} \frac{|y|}{-\frac{1}{2}|y|^{2/(2+a)}+|y|^{2/(2+a)}} =2|y|^{a/(2+a)}. \end{aligned} \end{equation*} \notag $$

Thus we have proved the result of Theorem 2.1 directly for the map in Example 2.2, that is, we have shown that $x(y)$ obtained above solves (2.6) and satisfies (2.7).
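The explicit construction above is also easy to verify numerically. Here is a minimal Python sketch (the name `solve_example22` is ours) for the sample value $a=1$, checking both equations of system (2.6) and the estimates (2.7) with $c=2$:

```python
import math

def solve_example22(y1, y2, a):
    # explicit solution of x1^2 - x2^2 = y1, x1^2 * x3 = y2 from Example 2.2,
    # valid for |y| < (1/2)**((2 + a)/a)
    norm = math.hypot(y1, y2)
    if norm == 0.0:
        return (0.0, 0.0, 0.0)
    x2 = norm ** (1.0 / (2.0 + a))
    x1 = math.sqrt(y1 + x2**2)      # well defined by inequality (2.9)
    x3 = y2 / (y1 + x2**2)
    return (x1, x2, x3)

a = 1.0
for (y1, y2) in ((0.01, -0.02), (-0.05, 0.01), (1e-4, 1e-4)):
    n = math.hypot(y1, y2)
    assert n < 0.5 ** ((2 + a) / a)          # y lies in the neighbourhood O
    x1, x2, x3 = solve_example22(y1, y2, a)
    assert abs((x1**2 - x2**2) - y1) < 1e-12  # first equation of (2.6)
    assert abs((x1**2 * x3) - y2) < 1e-12     # second equation of (2.6)
    # estimates (2.7) with c = 2
    assert abs(x1) <= 2 * n ** (1 / (2 + a))
    assert abs(x2) <= 2 * n ** (1 / (2 + a))
    assert abs(x3) <= 2 * n ** (a / (2 + a))
print("ok")
```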

These examples show that using Theorem 2.1 in the analysis of nonlinear equations can be easier than calculating directly, even for spaces of small dimension.

Several natural questions arise in connection with Theorem 2.1. We answer them in what follows, but first we turn to the uniqueness theorem for $\lambda$-truncations.

§ 3. Main results

Under the assumption $\lambda\geqslant 0$ we can state the following uniqueness theorem.

Theorem 3.1. Given a continuous map $F\colon \mathbb{R}^n\to \mathbb{R}^m$ and a vector $\lambda\geqslant 0$, assume that there exists a polynomial map $P$ that is a $\lambda$-truncation of $F$ in a neighbourhood of a point $\overline x$.

Then this $\lambda$-truncation of $F$ is unique, that is, for each $\lambda$-truncation $\widetilde{P}$ of $F$ in a neighbourhood of $\overline x$ the equality $P=\widetilde{P}$ holds.

Remark 3.1. Let the assumptions of Theorem 3.1 be fulfilled. Then, since the $\lambda$-truncation $P$ of $F$ is unique, the map $\Delta$ is uniquely defined because $\Delta(x-\overline x) \equiv F(x)-(F(\overline x)+P(x- \overline x))$, $x\in \mathbb{R}^n$.

Let $F=(F_1,\dots,F_m)\colon \mathbb{R}^n\to \mathbb{R}^m$ be an arbitrary polynomial map, so that each $F_i$ is a polynomial. Also let $\lambda \geqslant 0$ be an $n$-dimensional vector such that for every index $i$ there exists an $n$-dimensional vector $s_i$ satisfying $\langle \lambda, s_i \rangle >0$, where $p_i x^{s_i}$, $p_i\ne 0$, is a monomial occurring in $F_i$ after like terms have been collected. In other words, each $F_i$ is the sum of the nontrivial monomial $p_i x^{s_i}$ and a finite number of monomials with multi-indices distinct from $s_i$.

It is easy to see from Definition 2.1 that this polynomial map $F$ has a $\lambda$-truncation $P$. On the other hand this $\lambda$-truncation is not necessarily regular in any direction $h \in \mathbb R^n$. Thus, in view of Theorem 2.1 and Theorems 3.2 and 3.3 established below, we need only look for $\lambda$-truncations that are regular in some direction $h$ of their own.

The next example shows that even a linear map $F$ can have different $\lambda$-truncations for different $\lambda >0$, each of which satisfies the regularity condition in a direction $h$ of its own. For $m=1$ consider the linear map $F\colon \mathbb R^2 \to \mathbb R$, $F(x)=x_1+x_2$. Then $F(0)=0$. For $\lambda=(1,2)$ this $F$ has the $\lambda$-truncation $P(x)=x_1$ in a neighbourhood of the origin, for which $\alpha_1=1$, $\Delta(x)=x_2$ and $d=(0,1)$. It is easy to see that this $\lambda$-truncation is regular in the zero direction $h=(0,0)$ (details are available in [14], § 3). In a similar way, for $\lambda=(2,1)$ the map $F$ has the $\lambda$-truncation $P(x)=x_2$, which is also regular in the zero direction, with $\alpha_1=1$, $\Delta(x)=x_1$ and $d=(1,0)$.

Now we show how we can generalize Theorem 2.1 by replacing the assumption $\lambda>0$ by $\lambda \geqslant 0$ and, generally speaking, by improving the a priori estimate for the solution.

For $\lambda=(\lambda_1,\dots,\lambda_n) \geqslant 0$ set

$$ \begin{equation*} J=J(\lambda):=\{k \in \{1,\dots, n\}\colon \lambda_k=0\}. \end{equation*} \notag $$

In this definition $J=\varnothing$ corresponds to $\lambda >0$.

Theorem 3.2. Given a vector $\lambda\geqslant 0$, let $F$ be a continuous map, and let the polynomial map $P=P^{S,\mathcal{P}}$ be its $\lambda$-truncation in a neighbourhood of a point $\overline x$, which is regular in some direction $h \in \mathbb R^n$.

Then there exist a positive number $\gamma$, a neighbourhood $O$ of the point $\overline y=F(\overline x)$ and $c>0$ that satisfy the following conditions. For any $y \in O$ and $p\in [0,\gamma]$ there exists a solution $x=x(y,p)$ of the equation

$$ \begin{equation} F(x)=y, \end{equation} \tag{3.1} $$
that satisfies the a priori estimates
$$ \begin{equation} |x_k(y,p)-\overline x_k| \leqslant c \sum_{i=1}^m|y_i- \overline y_i|^{\lambda_k/(\alpha_i+p)} \quad \textit{for } k\notin J, \end{equation} \tag{3.2} $$
$$ \begin{equation} |x_k(y,p)-\overline x_k| \leqslant c \sum_{i=1}^m |y_i- \overline y_i|^{p/(\alpha_i+p)} \quad \textit{for } k \in J , \quad p>0, \end{equation} \tag{3.3} $$
and
$$ \begin{equation} |x_k(y,0)- \overline x_k| \leqslant c \quad \textit{for } k \in J, \quad p=0. \end{equation} \tag{3.4} $$
Here the positive numbers $\alpha_i $ are as in (2.1).

Remark 3.2. If the map $F$ is continuously differentiable in a neighbourhood of the point $\overline x$ and this point is normal, that is, $F'(\overline x)\mathbb{R}^n=\mathbb{R}^m$, then the assumptions of Theorem 3.2 are fulfilled. To see this we can set $P(x)\equiv F'(\overline x)x$, take the vector with all $n$ components equal to one as $\lambda$, and take the zero vector as $h$. Thus, Theorem 3.2 yields the classical inverse function theorem (see [2] and [14] for details). Theorem 3.2 makes sense also when the point $\overline x$ is abnormal, that is, $F'(\overline x)\mathbb{R}^n \neq \mathbb{R}^m$. For instance, this can be seen from Example 5.1 in § 5.

Now we discuss the statement of Theorem 3.2. It follows from (3.2) and (3.3) that for each fixed $p \in (0, \gamma]$ the map $x(\,\cdot\,,p)$ is continuous with respect to $y$ at $\overline y$.

Now let the assumptions of Theorem 3.2 be fulfilled, and let $\lambda >0$ in addition. Then $J=\varnothing$, so inequalities (3.3) and (3.4) should be dropped. We show that in this case (3.2) is equivalent to the inequality

$$ \begin{equation*} |x_k(y,0)-\overline x_k| \leqslant c \sum_{i=1}^m|y_i- \overline y_i|^{\lambda_k/{\alpha_i}}, \quad k \in \{1,\dots, n\}, \quad\forall \, y \in O, \end{equation*} \notag $$
which is obtained from (3.2) for $p=0$.

In fact, it is obvious that this inequality is necessary. We show that it is sufficient. If the neighbourhood $O$ is sufficiently small, then $|y_i-\overline y_i| \leqslant 1$ for all $i$. Hence, taking $x_k(y,p):=x_k(y,0)$, it is sufficient to show that the function $\lambda_k/(\alpha_i+p)$ attains its maximum over $p \in [0,\gamma]$ at $p=0$. However, this follows from the fact that for $\lambda_k >0$, $\alpha_i >0$ and $p \geqslant 0$ the fraction increases as its denominator decreases. On the other hand (as follows from the argument after Example 5.1 below) we cannot improve the result of Theorem 3.2 by setting $p=0$ on the right-hand side of (3.2) without making the assumption $\lambda>0$.

Theorem 2.1 itself follows from Theorem 3.2 for $p=0$ and an empty set $J$. In fact, let $\lambda >0$ and $p=0$. If $|y_i-\overline y_i| \leqslant 1$ for each $i=1,\dots,m$, then $|y_i- \overline y_i|^{\lambda_k/\alpha_i} \leqslant |y_i- \overline y_i|^{\lambda_k/\alpha}$ for all $i$ as above and $k=1,\dots,n$. Here $\alpha=\max \{\alpha_1,\dots, \alpha_m\}$. However, no coordinate of an $m$-dimensional vector can exceed the Euclidean norm of this vector, so $|y_i - \overline y_i|^{\lambda_k/\alpha} \leqslant |y\,{-}\, \overline y|^{\lambda_k/\alpha}$. Hence from inequality (3.2) in Theorem 3.2 we obtain $|x_k(y,0) - \overline x_k| \leqslant m c|y - \overline y|^{\lambda_k/\alpha}$. Since $x_k(y,0) \equiv x_k(y)$, this is just the estimate in Theorem 2.1. Thus we have shown that Theorem 2.1 is a consequence of Theorem 3.2.

Even for $\lambda >0$ and $p=0$ Theorem 3.2 does not follow from Theorem 2.1 if ${m\geqslant 2}$ and there exists $i$ such that $\alpha > \alpha_i$.

For example, let

$$ \begin{equation*} \begin{gathered} \, n=3, \qquad m=2, \qquad \overline x=0, \qquad \overline y=0, \qquad F(x)=(x_1-x_2,x_1^3-x_2^2x_3), \\ \lambda=(1,1,1)\quad\text{and} \quad h=(1,1,1). \end{gathered} \end{equation*} \notag $$
Then for $\alpha_1=1$ and $\alpha_2=3$ the assumptions of Theorem 3.2 are fulfilled. Hence the following is a consequence of estimate (3.2) from the theorem for $p=0$ and $y_2=0$. For all $y_1$ sufficiently close to the origin the system of equations $F_1(x)=y_1$, $F_2(x)=0$ has solutions $x(y_1,0)$ satisfying the linear estimate $|x(y_1,0)| \leqslant \mathrm{const} |y_1|$. On the other hand Theorem 2.1 yields only an estimate with radical: $|x(y_1,0)| \leqslant \mathrm{const} \sqrt[3]{|y_1|}$, because here $\alpha=3$.
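The linear estimate for this system can also be seen from an explicit solution family. The curve $x(y_1)=(2y_1,\,y_1,\,8y_1)$ (our own choice, one of many) satisfies $F_1(x)=y_1$ and $F_2(x)=0$ with $|x(y_1)|=\sqrt{69}\,|y_1|$; a quick Python check:

```python
import math

def solution_curve(y1):
    # x = (2*y1, y1, 8*y1): then x1 - x2 = y1 and
    # x1^3 - x2^2*x3 = 8*y1^3 - y1^2*(8*y1) = 0
    return (2.0 * y1, y1, 8.0 * y1)

for y1 in (0.1, -0.01, 1e-5):
    x1, x2, x3 = solution_curve(y1)
    assert abs((x1 - x2) - y1) < 1e-15
    assert abs(x1**3 - x2**2 * x3) < 1e-15
    # linear a priori estimate |x(y1, 0)| <= const * |y1| with const = sqrt(69)
    assert math.sqrt(x1**2 + x2**2 + x3**2) <= math.sqrt(69.0) * abs(y1) + 1e-15
print("ok")
```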

Theorem 2.1 cannot be used if even one coordinate of the vector $\lambda \geqslant 0$ is zero. Still less is it applicable if, for the smooth map $F$ in question, there exist no vector $\lambda >0$ and polynomial map $P$ that is a $\lambda$-truncation of $F$ in a neighbourhood of the origin and is regular in some direction $h$. On the other hand, there may exist a vector $\lambda \geqslant 0$ to which we can apply Theorem 3.2. This is the case in Example 5.1 at the end of the paper.

The example below shows the following. In Definition 2.2 of regularity in direction $h=(h_1,\dots,h_n)$, even in the case when (2.5) holds the assumption that if $\lambda_k=0$, then $h_k=0$, is essential and cannot be dropped. Thus, we cannot improve Theorem 3.2.

Example 3.1. Let $n=3$, $m=2$, $P(x)=F(x)=(x_1x_2-x_1x_2x_3,x_1-x_2)$, $\overline x=0$, $\overline y=0$,

$$ \begin{equation*} s_{1,1}=(1,1,0), \qquad s_{1,2}=(1,1,1), \qquad s_{2,1}=(1,0,0), \qquad s_{2,2}=(0,1,0) \end{equation*} \notag $$
and $j_1=j_2=2$.

Set $\lambda:=(1,1,0)$ and $h:=(1,1,1)$. Then equalities (2.1) hold for $\alpha_1=2$ and $\alpha_2=1$, and conditions (2.5) are satisfied; however, the assumption that $h_k=0$ whenever $\lambda_k=0$ fails, since $\lambda_3=0$ but $h_3=1$.

Now consider the system of equations

$$ \begin{equation*} F(x)=y \end{equation*} \notag $$
with respect to $x=(x_1,x_2,x_3)\in \mathbb{R}^3$ with parameter $y=(y_1,0)\in \mathbb{R}^2$, where $y_1<0$. For each solution $x$ of this system we have the following. It follows from the second equation that $x_1=x_2$. Hence the first equation takes the form $x_1^2(1-x_3)=y_1$. From this, as $y_1<0$, we obtain $x_3>1$. Thus, for any fixed $p>0$ the equation $F(x(y,p))=y$ in the above example has no solution $x(y,p)$ continuous at $\overline y$, that is, the map $x(y,p)$ does not tend to the origin as $y \to 0$. Hence (3.3) fails for $k=3$.

The point here is that in the above example $\lambda_3=0$, but nonetheless $h_3 \neq 0$.
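The computation behind this example is easy to reproduce numerically: for $y=(y_1,0)$ with $y_1<0$ every solution has $x_1=x_2=:t$ and $x_3=1-y_1/t^2>1$, uniformly in $t$. A small Python sketch (the name `x3_of` is ours):

```python
def x3_of(t, y1):
    # for y = (y1, 0) the second equation forces x1 = x2 = t, and the first
    # becomes t**2 * (1 - x3) = y1, i.e. x3 = 1 - y1 / t**2
    return 1.0 - y1 / t**2

y1 = -1e-6
for t in (0.001, 0.01, 0.1, 1.0, 10.0):
    x3 = x3_of(t, y1)
    # F(t, t, x3) = (t*t - t*t*x3, t - t) = (y1, 0)
    assert abs(t * t - t * t * x3 - y1) < 1e-9
    assert x3 > 1.0   # so x(y, p) cannot tend to the origin as y -> 0
print("ok")
```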

Before we state the next theorem we present a known inequality we need below. Let $a_i$, $i=1,\dots,m$, be nonnegative numbers, and let $r>0$. Then

$$ \begin{equation} \biggl(\sum_{i=1}^m a_i \biggr)^r \leqslant c(r) \sum_{i=1}^m a_i^r, \end{equation} \tag{3.5} $$
where $c(r)$ is defined by
$$ \begin{equation} c(r)=1 \quad \text{for } r \in (0,1]\quad\text{and} \quad c(r)=m^{r-1} \quad \text{for } r>1. \end{equation} \tag{3.6} $$
This inequality is well known. For $r>1$ it follows from the usual Hölder inequality, for $r=1$ it is obvious, while for $r \in (0,1)$ it follows from Jensen’s inequality (for instance, see [15], § 2.10, Theorem 22).
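Inequality (3.5) with the constant (3.6) is easy to test numerically; below is a Python sketch (the name `c_of` is ours) checking it on random nonnegative data:

```python
import random

def c_of(r, m):
    # constant in (3.6): c(r) = 1 for r in (0, 1], c(r) = m**(r - 1) for r > 1
    return 1.0 if r <= 1.0 else float(m) ** (r - 1.0)

random.seed(0)
for _ in range(1000):
    m = random.randint(1, 6)
    a = [random.uniform(0.0, 10.0) for _ in range(m)]
    r = random.uniform(0.05, 5.0)
    lhs = sum(a) ** r
    rhs = c_of(r, m) * sum(ai ** r for ai in a)
    assert lhs <= rhs * (1.0 + 1e-12)   # inequality (3.5)
print("ok")
```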

We now turn to the smooth version of Theorem 3.2, which can also produce an asymptotic representation for the solution $x(y,p)$ as $y \to \overline y$. Let $\mathcal{F}_{n,m}(\overline x)$ denote the set of maps $F\colon \mathbb{R}^n\to \mathbb{R}^m$ satisfying the following condition: for each positive integer $N$ there exists a neighbourhood $O_N(\overline x)\subset \mathbb{R}^n$ of the point $\overline x$ such that $F$ is $N$-fold continuously differentiable in the open set $O_N(\overline x)$.

Of course, $\mathcal{F}_{n,m}(\overline x)$ contains all maps that are infinitely smooth in some neighbourhood $O(\overline x)\subset \mathbb{R}^n$. However, $\mathcal{F}_{n,m}(\overline x)$ is wider than the class of such maps because the neighbourhoods $O_N(\overline x)$ can shrink to the point $\{\overline x\}$ as $N\to\infty$.

Theorem 3.3. Given an integer vector $\lambda\geqslant 0$, let $F\in \mathcal{F}_{n,m}(\overline x)$ be a map and $P=P^{S,\mathcal{P}}$ be a polynomial map that is the $\lambda$-truncation of $F$ in a neighbourhood of a fixed point $\overline x$ and is regular in some direction $h \in \mathbb R^n$.

Then there exist numbers $\varepsilon \in (0,1]$ and $\gamma \in (0,1]$ and a map $\omega=(\omega_1,\dots, \omega_n)$ defined and continuously differentiable in a neighbourhood of the compact set $[-\varepsilon,\varepsilon] \times B(0,\varepsilon) \subset \mathbb R \times \mathbb R^m$ and taking values in $\mathbb{R}^n$ such that $\omega(0,0)=0$, $\omega \in \mathcal{F}_{1+m,n}(0)$, and the following assertion holds.

Let $\nu_i \geqslant \alpha_i$, $i=1,\dots,m$, where, as above, the positive numbers $\alpha_i$ are as in (2.1). Then there exists a neighbourhood $O$ of the point $\overline y=F(\overline x)$ such that for all $y \in O$ and $p \in [0,\gamma]$ there exists a solution $x(y,p) = (x_1(y,p),\dots,x_n(y,p))$ of (3.1), that is, of the equation

$$ \begin{equation*} F(x)=y, \end{equation*} \notag $$
such that for $k=1,\dots,n$
$$ \begin{equation} \begin{aligned} \, \notag x_k(y,p) &=\overline x_k +\biggl(d \sum_{i=1}^{m}|y_i-\overline y_i|^{1/(\nu_i+p)}\biggr)^{\lambda_k} \\ &\qquad\times \biggl(h_k+\omega_k\biggl(d \sum_{i=1}^{m}|y_i-\overline y_i|^{1/(\nu_i+ p)},g(y-\overline y,p)\biggr)\biggr), \end{aligned} \end{equation} \tag{3.7} $$
where $d >0$ is a fixed number large enough that $md \geqslant 1$ and $d^{\alpha_i} \geqslant m/\varepsilon$, $i=1,\dots,m$, and where we use the convention $0^0=1$ introduced above.

Here for $y\neq \overline y$

$$ \begin{equation*} \begin{aligned} \, g(y-\overline y,p) &=\biggl((y_1-\overline y_1) \biggl(d \sum_{i=1}^{m}|y_i-\overline y_i|^{1/(\nu_i+p)} \biggr)^{-\alpha_1}, \\ &\qquad\qquad\dots, (y_m-\overline y_m) \biggl(d \sum_{i=1}^{m}|y_i-\overline y_i|^{1/(\nu_i+p)} \biggr)^{-\alpha_m} \biggr) \end{aligned} \end{equation*} \notag $$
and $g(0,p) \equiv 0$.

We look closer at representation (3.7). Let $k \in \{1,\dots,n\}$, and let the assumptions of Theorem 3.3 be fulfilled. Also assume that at least one of the two conditions holds: either $\nu_i > \alpha_i$ for all $i=1,\dots,m$, or $p>0$. We show that then the values of the functions $\omega_k$ in (3.7) tend to zero as $y\to \overline y$. In turn, this yields in (3.7) an asymptotic representation for the solution $x_k(y)$ as $y \to \overline y$, because by Definition 2.2, if $\lambda_k=0$, then also $h_k=0$. Here $0^0=1$ in accordance with the above convention.

We prove that $\omega_k \to 0$ as $y\to \overline y$ under the above assumptions. In fact, as the functions $\omega_k$ are smooth, it is sufficient to show that $g(y- \overline y,p)\to 0$ as $y\to \overline y$. From our additional assumption we obtain $(\nu_i-\alpha_i)+p >0$ for all $ i$. Hence the inequalities $((\nu_i- \alpha_i)+p)/(\nu_i+p) >0$ hold for all indices $i=1,\dots,m$.

Consider some $y \in O$, $y\ne 0$, and assume that $\overline y=0$ for simplicity. Then if $y_1\ne 0$, so that the first coordinate of $g(y,p)$ is nonzero, then its modulus satisfies the chain of relations

$$ \begin{equation} \begin{aligned} \, \notag |g_1(y,p)| &=|y_1| \biggl(d\sum_{i=1}^{m}|y_i|^{1/(\nu_i+p)} \biggr)^{-\alpha_1} = d^{-\alpha_1} \frac{|y_1|}{(\sum_{i=1}^{m} |y_i|^{1/(\nu_i+p)})^{\alpha_1}} \\ &\leqslant d^{-\alpha_1} \frac{|y_1|}{(|y_1|^{1/(\nu_1+p)})^{\alpha_1}} =d^{-\alpha_1}|y_1| |y_1|^{-\alpha_1/(\nu_1+p)} \nonumber \\ &=d^{-\alpha_1} |y_1|^{((\nu_1 -\alpha_1)+p)/(\nu_1+p)}. \end{aligned} \end{equation} \tag{3.8} $$
Here the inequality holds because when the denominator gets smaller, the ratio can only get larger. On the other hand, if $y_1=0$, then the left- and right-hand sides of the chain coincide. However, it follows from our assumptions that the right-hand side of (3.8) tends to zero as $y\to 0$. Hence the first coordinate of $g(y,p)$ also tends to zero as $y\to 0$.

Similar arguments for all other coordinates complete the proof for $y\neq 0$, while if $y=0$, then $g(0,p)=0$ by definition. As a result, we see that $g(y-\overline y,p)\to 0$ as ${y\to \overline y}$.
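The bound (3.8) and the resulting decay of $g$ can be illustrated numerically. Below is a Python sketch with hypothetical sample values $d=2$, $\alpha_1=2$, $\nu=(2,3)$, $p=1/2$ and $\overline y=0$ (the name `g1` is ours):

```python
def g1(y, d, alpha1, nus, p):
    # first coordinate of g(y, p) for ybar = 0, per the formula above
    t = d * sum(abs(yi) ** (1.0 / (nu + p)) for yi, nu in zip(y, nus))
    return y[0] / t ** alpha1

d, alpha1, nus, p = 2.0, 2.0, (2.0, 3.0), 0.5
values = []
for scale in (1e-2, 1e-4, 1e-6):
    y = (scale, -scale)
    # right-hand side of (3.8): d**(-alpha1) * |y1|**((nu1 - alpha1 + p)/(nu1 + p))
    bound = d ** (-alpha1) * scale ** ((nus[0] - alpha1 + p) / (nus[0] + p))
    val = abs(g1(y, d, alpha1, nus, p))
    assert val <= bound * (1.0 + 1e-12)
    values.append(val)
assert values[0] > values[1] > values[2]   # g -> 0 as y -> 0
print("ok")
```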

Remark 3.3. Under the assumptions of Theorem 3.3 we have established the following result for the set $O$ and $\gamma \in(0,1]$ constructed in the proof. For all $y \in O$ and $p \in [0, \gamma]$ we have

$$ \begin{equation*} t(y,p):=d \sum_{i=1}^{m}|y_i-\overline y_i|^{1/(\nu_i+p)} \leqslant \varepsilon \end{equation*} \notag $$
and
$$ \begin{equation*} |g(y-\overline y,p)| \leqslant \varepsilon\quad\text{for } |y_i-\overline y_i|<1, \quad i=1,\dots,m. \end{equation*} \notag $$
In particular, it follows that in the above representation (3.7) all composite functions $\omega_k(t(y,p), g(y-\overline y,p))$ are well defined and continuous for $y \ne \overline y$. Moreover, as the continuous functions $\omega_k$ are bounded on the compact set $[-\varepsilon,\varepsilon]\times B (0,\varepsilon)$, these composite functions are bounded for $(y,p) \in O \times [0,\gamma]$.

Remark 3.4. Under the assumptions of Theorem 3.3 let $\varepsilon\in(0,1]$, $\gamma\in (0,1]$, and let the map $\omega$ be as in the assertion of the theorem. Then even from the special case of Theorem 3.3 when $\nu_i=\alpha_i$ for all $i$ we obtain the result of Theorem 3.2 and estimates (3.2)–(3.4).

In fact, let $O$ be a neighbourhood of $\overline y$ corresponding to the $\alpha_i$ in question by Theorem 3.3.

Fix some $y\in O$ and $p\in [0,\gamma]$. Then $x(y,p)$ solves equation (3.1) by Theorem 3.3.

We show that inequality (3.2) holds for $k\notin J$. In fact, it follows from (3.5) and (3.6) that

$$ \begin{equation} \biggl ( \sum_{i=1}^{m}|y_i-\overline y_i|^{1/(\nu_i+p)}\biggr)^{\lambda_k} \leqslant \mathrm{const} \sum_{i=1}^{m}|y_i-\overline y_i|^{\lambda_k/(\nu_i+p)}. \end{equation} \tag{3.9} $$
Moreover, by Remark 3.3 we have the inequality $|h_k+\omega_k(t(y,p),g(y- \overline y,p))|\leqslant \mathrm{const}$. Hence (3.9) and representation (3.7) yield (3.2) because in our case $\nu_i=\alpha_i$ for all $i$.

Now let $k\in J$. If $y=\overline y$, then relations (3.3) and (3.4) are always satisfied. Suppose that $y\neq \overline y$. Then $\lambda_k=0 \Rightarrow h_k=0$, so that

$$ \begin{equation} \begin{aligned} \, \notag |x_k(y,p)-\overline x_k| &= \biggl|\omega_k\biggl(d \sum_{i=1}^{m} |y_i-\overline y_i|^{1/(\alpha_i+p)}, g(y-\overline y,p)\biggr)\biggr| \\ \notag &\leqslant \mathrm{const} \biggl( \sum_{i=1}^{m} |y_i-\overline y_i|^{1/(\alpha_i+p)} +|g(y-\overline y,p)| \biggr) \\ &\leqslant \mathrm{const} \biggl( \sum_{i=1}^{m} |y_i-\overline y_i|^{p/(\alpha_i+p)} +|g(y-\overline y,p)| \biggr). \end{aligned} \end{equation} \tag{3.10} $$
In deducing the equality we used representation (3.7) and the first two inequalities in Remark 3.3; the first inequality follows from the equality $\omega_k(0,0)=0$, the first two inequalities in Remark 3.3 and the Lipschitz continuity of $\omega_k$ on the compact set $[0, \varepsilon]\times B(0,\varepsilon)$, which is in turn a consequence of the continuous differentiability of $\omega_k$ in a neighbourhood of the compact set $[-\varepsilon, \varepsilon]\times B(0,\varepsilon)$; the second inequality holds because $p\leqslant 1$ and $|y_i-\overline y_i|<1$ for each $i$, as follows from Remark 3.3.

Now we estimate the modulus of $g(y-\overline y,p)$. Using arguments similar to the proof of (3.8) we see that the modulus of its $i$th coordinate satisfies the inequality

$$ \begin{equation*} |g_i(y-\overline y,p)| \leqslant d^{- \alpha_i} |y_i- \overline y_i|^{p/(\alpha_i+p)} \leqslant \frac{\varepsilon}{m}|y_i- \overline y_i|^{p/(\alpha_i+p)}. \end{equation*} \notag $$
Here we have used the properties of $d$ and the fact that $\nu_i=\alpha_i$ for all $i$. These inequalities show that
$$ \begin{equation*} |g(y-\overline y,p)| \leqslant \frac{\varepsilon}{m}\sum_{i=1}^{m} |y_i- \overline y_i|^{p/(\alpha_i+p)}. \end{equation*} \notag $$
From (3.10) and the last inequality we deduce (3.3).

Now let $p=0$. Then (3.4) follows from (3.10) in a similar way, because the above inequality means that $|g(y - \overline y,0)|\leqslant \varepsilon$, since $|y_i-\overline y_i|^0=1$ for each $i$. Here we use that $0^0=1$. Thus the remark is valid.

Remark 3.5. Again, let the assumptions of Theorem 3.3 hold. Fix some arbitrary $\nu_i \geqslant \alpha_i$ and $p \in [0,\gamma]$. Assume that at least one of the three additional conditions is satisfied: either $\lambda>0$, or $\nu_i > \alpha_i$ for all $i=1,\dots,m$, or $p>0$. Let the neighbourhood $O$ of $\overline y$ and the map $x(\,\cdot\,, p)$ correspond to the $\nu_i$ as above by Theorem 3.3. Then the map $x(\,\cdot\,, p)$ is continuous in $y \in O$.

In fact, let $\overline x=0$ and $\overline y=0$ for simplicity. First assume that $\lambda>0$. If $y \in O$ and $y \to \widehat y=(\widehat y_1,\dots, \widehat y_m) \ne 0$, then $x(y,p) \to x(\widehat y, p)$ as $y \to \widehat y$ by (3.7) because $\bigl(d \sum_{i=1}^{m}|\widehat y _i|^{1/(\nu_i+p)}\bigr) >0$ and the product of continuous functions is continuous. On the other hand, if $\widehat y=0$, then the first expression in parentheses in (3.7) tends to zero and the second expression in parentheses is bounded in view of Remark 3.3.

Next assume that either $\nu_i > \alpha_i$ for all $i$, or $p >0$. First let $k \notin J$. Repeating the above arguments we see that the function $x_k(\,\cdot\,,p)$ is continuous in $y \in O$. Now let $k \in J$. Then $\lambda_k=0 \Rightarrow h_k=0$, and two cases can occur.

First assume that $\widehat y\ne 0$. Then by Remark 3.3, repeating the above argument again we obtain that the function $g(y,p)$, and therefore also $\omega_k$, is continuous at $\widehat y$. Hence the function $x_k(\,\cdot\,,p)$ is also continuous at this point by (3.7).

Now let $\widehat y=0$. Repeating the arguments similar to the ones in (3.8) we obtain

$$ \begin{equation*} |g_i(y,p)| \leqslant \mathrm{const}\, |y_i|^{((\nu_i -\alpha_i)+p)/(\nu_i+p)} \quad \forall \, y \in O, \quad i=1,\dots,m. \end{equation*} \notag $$
Hence $g(y,p) \to 0=g(0,p)$ as $y \to 0=\widehat y$, which shows that $g(\,\cdot\,,p)$ is continuous at the origin. All other functions in (3.7) are continuous. Hence the function $x_k(\,\cdot\,,p)$ is also continuous at $\widehat y=0$ in view of this representation.

§ 4. Proofs of main results

Proof of Theorem 3.1. We assume for convenience that $\overline x=0$ and $\overline y=F(\overline x)=0$.

Let $P$ and $\widetilde{P}$ be two $\lambda$-truncations of $F$ in a neighbourhood of $\overline x$ which correspond to the same vector $\lambda \geqslant 0$.

Let $S_i=\{s_{i,1},\dots,s_{i,j_i}\}$ denote the set of multi-indices of monomials in $P_i$ and $\widetilde{S}_i = \{\widetilde{s}_{i,1},\dots,\widetilde{s}_{i,\widetilde{j}_i}\}$ denote the set of multi-indices of monomials in $\widetilde{P}_i$, $i=1,\dots,m$. Then there exist positive numbers $\alpha_i$ and $\widetilde{\alpha}_i$ such that $\langle\lambda,s_{i,j}\rangle=\alpha_i$ and $\langle\lambda,\widetilde{s}_{i,j}\rangle=\widetilde{\alpha}_i$ for all $s_{i,j}\in S_i$ and $\widetilde{s}_{i,j}\in \widetilde{S}_{i}$.

Let $W$ be the unit neighbourhood of the origin in $\mathbb{R}^n$. For $t \in (0,1)$ and $\xi=(\xi_1,\dots,\xi_n)\in W$ set

$$ \begin{equation} x(t,\xi):=(t^{\lambda_1}\xi_1,\dots,t^{\lambda_n}\xi_n). \end{equation} \tag{4.1} $$
Then for $i=1,\dots,m$ we have the identities
$$ \begin{equation} P_i(x(t,\xi)) \equiv t^{\alpha_i} P_i(\xi)\quad\text{and} \quad \widetilde{P}_i(x(t,\xi)) \equiv t^{\widetilde{\alpha}_i} \widetilde{P}_i(\xi), \qquad \xi \in W, \quad t \in (0,1). \end{equation} \tag{4.2} $$
Indeed, we have
$$ \begin{equation*} \begin{aligned} \, P_i(x(t,\xi)) &\equiv\sum_{j=1}^{j_i} \biggl(p_{i,j} \prod_{k=1}^n (t^{\lambda_k}\xi_k)^{s_{i,j,k}}\biggr) \equiv \sum_{j=1}^{j_i} \biggl(p_{i,j} \prod_{k=1}^n t^{\lambda_k s_{i,j,k}} \prod_{k=1}^n \xi_k^{s_{i,j,k}}\biggr) \\ &\equiv\sum_{j=1}^{j_i} \biggl(p_{i,j} t^{\langle \lambda, s_{i,j} \rangle} \prod_{k=1}^n \xi_k^{s_{i,j,k}}\biggr) \equiv \sum_{j=1}^{j_i} \biggl(p_{i,j} t^{\alpha_i} \prod_{k=1}^n \xi_k^{s_{i,j,k}}\biggr) \equiv t^{\alpha_i} P_i(\xi). \end{aligned} \end{equation*} \notag $$
Here the first identity follows from (4.1) and the definition of $P_i$, and the penultimate one follows from (2.1). The second identity in (4.2) is established similarly.
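The homogeneity computation above is elementary and can also be checked numerically. The following sketch does so for the concrete $\lambda$-truncation of Example 5.1 in § 5 (the map $P$, the vector $\lambda=(1,1,0)$ and the exponent $\alpha_1=4$ are taken from that example; the sample points $t$ and $\xi$ are illustrative assumptions).

```python
# Numeric sanity check (a sketch) of the homogeneity identity (4.2):
# P_i(x(t, xi)) = t^{alpha_i} P_i(xi), with P, lambda, alpha borrowed
# from Example 5.1: P(x) = x1^4 - x1^3 x2 - x1 x2^3 + x2^4 + x1^4 x3,
# lambda = (1, 1, 0), alpha_1 = 4.
def P(x1, x2, x3):
    return x1**4 - x1**3*x2 - x1*x2**3 + x2**4 + x1**4*x3

lam = (1, 1, 0)
alpha = 4
for t in (0.1, 0.5, 0.9):
    for xi in ((0.2, -0.3, 0.7), (0.5, 0.5, -0.1)):
        # the scaling map x(t, xi) of (4.1)
        scaled = tuple(t**lk * xk for lk, xk in zip(lam, xi))
        assert abs(P(*scaled) - t**alpha * P(*xi)) < 1e-12
```

Each monomial of $P$ has weighted degree $\langle\lambda,s\rangle=4$, which is exactly why a single power $t^{\alpha_1}$ factors out.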

Set

$$ \begin{equation*} \Delta(x):=F(x)-P(x)\quad\text{and} \quad \widetilde{\Delta}(x)=F(x)-\widetilde{P}(x), \qquad x\in \mathbb{R}^n. \end{equation*} \notag $$
We show that we can reduce the neighbourhood $W$ so that for some $\theta_i> \alpha_i $ and $ \widetilde{\theta}_i>\widetilde{\alpha}_i $, $i=1,\dots,m$, we have the inequalities
$$ \begin{equation} |\Delta_i(x(t,\xi))| \leqslant \mathrm{const}\, t^{\theta_i}\quad\text{and} \quad |\widetilde{\Delta}_i(x(t,\xi))| \leqslant \mathrm{const}\, t^{\widetilde{\theta}_i} \quad \forall \, t\in (0,1) \quad \forall \, \xi\in W. \end{equation} \tag{4.3} $$
Here $\Delta=(\Delta_1,\dots,\Delta_m)$ and $\widetilde\Delta=(\widetilde{\Delta}_1,\dots,\widetilde{\Delta}_m)$. We show the existence of the $\theta_i$ because for the $\widetilde{\theta}_i$ the argument is similar.

In fact, let $i \in \{1,\dots,m\}$. Then for all $t \in (0,1)$ and $\xi$ sufficiently close to zero we have

$$ \begin{equation*} |\Delta_i(x(t,\xi))| \stackrel{(2.4), (4.1)}{\leqslant} \mathrm{const} \sum_{d \in D_i} |x(t,\xi)|^d \stackrel{(4.1)}{\leqslant} \mathrm{const} \sum_{d \in D_i} t^{\langle \lambda,d \rangle} \leqslant \mathrm{const} \, t^{\theta_i}. \end{equation*} \notag $$
Here the quantity $\theta_i=\min\{\langle \lambda,d \rangle,\, d \in D_i\}$ is greater than $\alpha_i$ by (2.2). This completes the proof of the existence of numbers $\theta_i$ and a neighbourhood of the origin such that (4.3) holds.

From the definition of $\Delta$ and $\widetilde{\Delta}$ it follows that $P(x)-\widetilde{P}(x)\equiv \widetilde{\Delta}(x)-\Delta(x)$, $x\in \mathbb{R}^n$. Hence from (4.2) and (4.3) we obtain

$$ \begin{equation} |t^{\alpha_i}P_i(\xi)-t^{\widetilde{\alpha}_i}\widetilde{P}_i(\xi)| \leqslant \mathrm{const}\, (t^{\theta_i}+t^{\widetilde{\theta}_i}) \quad \forall \,t\in (0,1) \quad \forall\, \xi\in W, \quad i=1,\dots,m. \end{equation} \tag{4.4} $$

We show that $\alpha_i=\widetilde{\alpha}_i$ for all $i=1,\dots,m$. Assume the converse: $\alpha_i\neq \widetilde{\alpha}_i$ for some $i$. For definiteness let $\widetilde{\alpha}_1 > \alpha_1$.

For $t>0$ we divide the first inequality in (4.4) by $t^{\alpha_1}$. Then we obtain

$$ \begin{equation*} |P_1(\xi)-t^{\widetilde{\alpha}_1-\alpha_1}\widetilde{P}_1(\xi)| \leqslant \mathrm{const}\, (t^{\theta_1-\alpha_1}+t^{\widetilde{\theta}_1-\alpha_1}) \quad \forall \,t\in (0,1) \quad \forall\, \xi\in W. \end{equation*} \notag $$
In this inequality the quantities $\widetilde{\alpha}_1-\alpha_1$, $\theta_1-\alpha_1$ and $\widetilde{\theta}_1-\alpha_1$ are positive because $\widetilde{\alpha}_1 > \alpha_1$ by assumption and we have $\theta_1>\alpha_1$ and $\widetilde{\theta}_1> \widetilde{\alpha}_1 > \alpha_1$ by what we said above.

Taking the limit as $t\to 0+$ in the above inequality for each fixed $\xi\in W$, we see that $P_1(\xi)=0$ for all $\xi\in W$. Since $W$ is open and nonempty and the map $P$ is polynomial, it follows that $P_1=0$. But a $\lambda$-truncation is a nontrivial map by definition. Thus the assumption that $\widetilde{\alpha}_i \neq \alpha_i$ for some $i$ leads to a contradiction, and we have proved that $\alpha_i=\widetilde{\alpha}_i$ for each $i$.

Next we show that $P=\widetilde{P}$. For $t>0$ we divide all inequalities in (4.4) by $t^{\alpha_i}$. Since $\alpha_i=\widetilde{\alpha}_i$ for each $ i$, we obtain

$$ \begin{equation*} |P_i(\xi)-\widetilde{P}_i(\xi)| \leqslant \mathrm{const} \, (t^{\theta_i-\alpha_i}+t^{\widetilde{\theta}_i-\widetilde{\alpha}_i}) \quad \forall \,t\in (0,1) \quad \forall\, \xi\in W, \quad i=1,\dots,m. \end{equation*} \notag $$
Here all exponents $\theta_i-\alpha_i$ and $\widetilde{\theta}_i-\widetilde{\alpha}_i$ are positive because $\theta_i>\alpha_i$ and $\widetilde{\theta}_i> \widetilde{\alpha}_i$ by the above.

Taking the limit as $t\to 0+$ in the above inequalities for each fixed $\xi\in W$, we obtain $P_i(\xi)-\widetilde{P}_i(\xi)=0$ for all $\xi\in W$ and $i=1,\dots,m$. Hence, as $W$ is open and nonempty and the map $P-\widetilde{P}$ is polynomial, we have $P= \widetilde{P}$. In particular, $j_i=\widetilde{j}_i$ and $S_i=\widetilde{S}_i$ for $i=1,\dots,m$.

Theorem 3.1 is proved.

Proof of Theorem 3.2. Again, assume that $\overline x=0$ and $\overline y=F(\overline x)=0$. Set $P=P^{S,\mathcal P}$. By Definition 2.2 the point $h$ is normal for $P$, that is, $P(h)=0$ and $\operatorname{im} P'(h)=\mathbb R^m$.

Consider the positive numbers $\varepsilon_0 $ and $a$ corresponding to $P$ and $h$ by Proposition 1 in [13]. This means that for each $\varepsilon\in (0,\varepsilon_0]$, each $\eta \in B(0,\varepsilon a)$ and any continuous map $\Phi\colon B(0,\varepsilon) \to \mathbb{R}^m$ such that $|\Phi(\xi)|\leqslant \varepsilon a$ for all $\xi$, the equation $P(h + \xi) + \Phi(\xi)=\eta$ has a solution $\xi \in B(0,\varepsilon)$.

First fix some $t_0 \in (0,1)$ and consider vectors $y=y(t,\eta)$ of the form $y(t,\eta)=(t^{\alpha_1} \eta_1,\dots,t^{\alpha_m} \eta_m)$, where

$$ \begin{equation*} \eta=(\eta_1,\dots, \eta_m) \in \mathbb R^m\colon |\eta| \leqslant \varepsilon_0 a, \qquad t \in (0,t_0]. \end{equation*} \notag $$

We solve (3.1) for right-hand sides $y$ of the form $y=y(t,\eta)$. To do this let $W$ be the unit neighbourhood of the origin in $\mathbb{R}^n$. Let the map $x(\,\cdot\,{,}\,\cdot\,)\colon (0, t_0] \times W\to \mathbb{R}^n$ be defined by

$$ \begin{equation} \begin{gathered} \, x(t,\xi)=(x_1(t,\xi),\dots,x_n(t,\xi)), \qquad x_k(t,\xi) :=t^{\lambda_k}(h_k+\xi_k), \\ k=1,\dots,n, \qquad t \in (0,t_0], \qquad \xi=(\xi_1,\dots,\xi_n) \in W. \end{gathered} \end{equation} \tag{4.5} $$
Then from the assumption that $h_k=0$ for all $k \in J$ we obtain that for $t\in (0,t_0]$ we have
$$ \begin{equation} x_k(t,\xi) \equiv \xi_k, \qquad \xi\in W, \quad\forall \, k\in J. \end{equation} \tag{4.6} $$

We seek a solution of (3.1) for $y=y(t,\eta)$ in the form $x=x(t,\xi)$, where for any fixed $t$ and $\eta$ the value of $\xi$ is unknown. In coordinates the equation looks like

$$ \begin{equation} F_i(x(t,\xi))=t^{\alpha_i} \eta_i, \qquad i=1,\dots,m. \end{equation} \tag{4.7} $$

Now we transform it. Fix $i \in \{1,\dots,m\}$. By (2.1), for each $j=1,\dots,j_i$ we have $\langle \lambda,s_{i,j} \rangle=\alpha_i$.

For all $t \in (0,t_0]$

$$ \begin{equation} P_i(x(t,\xi))\equiv t^{\alpha_i} P_i(h+\xi), \qquad \xi \in W, \end{equation} \tag{4.8} $$
where $x(t,\xi)$ is defined by (4.5). To verify this identity we can repeat, with an obvious modification, all calculations in the corresponding part of the proof of Theorem 3.1. Namely, the reference to (4.1) must be replaced by a reference to (4.5), that is, we plug in $(h+\xi)$ in place of $\xi$ where appropriate.

Let $\Delta = (\Delta_1,\dots,\Delta_m)\colon \mathbb{R}^n\to \mathbb{R}^m$ be the map in (2.3). Using (4.5) we set $\widetilde\Delta_i(t,\xi)=\Delta_i(x(t,\xi))$. Since the map $F$ is continuous, the map $\Delta$ is too by (2.3). Hence by (4.5), reducing $t_0>0$ and the neighbourhood of the origin $W$ we obtain the following. First, for each $t\in (0,t_0]$ the map $\widetilde{\Delta}(t,\,\cdot\,):=(\widetilde{\Delta}_1(t,\,\cdot\,),\dots, \widetilde{\Delta}_m(t,\,\cdot\,))$ is continuous in $\xi\in W$; second, for all $t\in (0,t_0]$ and $\xi\in W$ inequality (2.4) holds at $x=x(t,\xi)$.

Set

$$ \begin{equation} \gamma_i:=\min \{\langle \lambda,d \rangle \colon d \in D_i\}-\alpha_i, \quad i=1,\dots,m, \qquad \gamma:=\min_{i=1,\dots,m} \gamma_i. \end{equation} \tag{4.9} $$
Since $P$ is a $\lambda$-truncation of $F$ in a neighbourhood of the origin, it follows that $\gamma_i >0$ for all $i$. Therefore, $\gamma >0$ because the set of numbers $\gamma_i$ is finite.

Again, let $i \in \{1,\dots,m\}$. Then for all $\xi \in W$ and $t \in (0,t_0]$ we have

$$ \begin{equation*} |\widetilde\Delta_i(t,\xi)| \stackrel{(2.4)}{\leqslant} \mathrm{const} \sum_{d \in D_i} |x(t,\xi)|^d \stackrel{(4.5)}{\leqslant} \mathrm{const} \sum_{d \in D_i} t^{\langle \lambda,d \rangle} \stackrel{(4.9)}{\leqslant} \mathrm{const} \, t^{\alpha_i+\gamma_i}. \end{equation*} \notag $$
By (4.9) it follows from these inequalities that
$$ \begin{equation*} t^{-\alpha_i}|\widetilde\Delta_i(t,\xi)| \leqslant \mathrm{const} \, t^{\gamma_i} \leqslant \mathrm{const} \, t^{\gamma}. \end{equation*} \notag $$
Thus, after reducing $W$ to a bounded neighbourhood of the origin in $\mathbb R^n$ if necessary, for all $i$ we have
$$ \begin{equation} t^{-\alpha_i} \sup\{|\widetilde\Delta_i(t,\xi)| \colon \xi \in W\} \leqslant \mathrm{const} \, t^{\gamma} \quad \forall \, t \in(0,t_0]. \end{equation} \tag{4.10} $$

By representation (2.3) and identity (4.8) equation (4.7) under consideration assumes the following form in coordinates:

$$ \begin{equation*} t^{\alpha_i} P_i(h+\xi)+\widetilde\Delta_i(t,\xi)=t^{\alpha_i}\eta_i, \qquad i=1, \dots,m. \end{equation*} \notag $$
Dividing each equation by $t^{\alpha_i}$ for $t\in (0,t_0]$ we obtain
$$ \begin{equation*} P_i(h+\xi)+\Phi_i(t,\xi)=\eta_i, \qquad i=1, \dots,m. \end{equation*} \notag $$
Here $\Phi_i(t,\xi):=t^{-\alpha_i} \widetilde \Delta_i(t,\xi)$ for $t\in (0,t_0]$. Hence for each $t\in (0,t_0]$ the functions $\Phi_i(t,\,\cdot\,)$ are continuous in $\xi\in W$. Set $\Phi=(\Phi_1,\dots,\Phi_m)$.

Thus, for arbitrary $t\in (0,t_0]$ and $\xi\in W$ we obtain the following vector-valued equation with respect to $\xi$:

$$ \begin{equation} P(h+\xi)+\Phi(t,\xi)=\eta. \end{equation} \tag{4.11} $$

It follows from (4.10) that

$$ \begin{equation} \varphi_i(t):=\sup\{|\Phi_i(t,\xi)|\colon \xi \in W\} \leqslant \mathrm{const} \, t^{\gamma} \quad \forall\, t \in(0,t_0]. \end{equation} \tag{4.12} $$
Set $\varphi=(\varphi_1,\dots,\varphi_m)$. By (4.12) there exists $c_0>0$ such that
$$ \begin{equation} |\varphi(t)| \leqslant c_0 t^{\gamma} \quad \forall\, t \in(0,t_0]. \end{equation} \tag{4.13} $$

Reducing $\varepsilon_0 > 0$ we achieve that $B(0,\varepsilon_0) \subset W$. Since (4.12) holds, reducing $t_0 > 0$ again, we achieve that $ a^{-1} |\varphi(t)| \leqslant \varepsilon_0$ for $ t \in (0,t_0]$.

Considering $t\in (0,t_0]$ and $\eta$ such that $|\eta| \leqslant \varepsilon_0 a$ we set

$$ \begin{equation*} \varepsilon(t,\eta) :=a^{-1} \max\{|\varphi(t)|,\,|\eta|\}. \end{equation*} \notag $$
Then for all $t \in (0,t_0]$, $\xi \in W$ and $\eta \in B(0,\varepsilon_0 a)$ we have
$$ \begin{equation*} \varepsilon:=\varepsilon(t,\eta) \leqslant \varepsilon_0, \qquad |\Phi(t,\xi)| \leqslant |\varphi(t)| \leqslant \varepsilon a\quad\text{and} \quad |\eta| \leqslant \varepsilon a. \end{equation*} \notag $$

By Proposition 1 in [13] it follows from the above inequalities that for any fixed $t \in (0,t_0]$ and $\eta \in B(0,\varepsilon_0 a)$ equation (4.11) has a solution $\xi(t,\eta)$ such that

$$ \begin{equation} |\xi(t,\eta)| \leqslant \varepsilon=\varepsilon(t,\eta)= a^{-1} \max\{|\varphi(t)|,|\eta|\}. \end{equation} \tag{4.14} $$

Now we construct the required neighbourhood of the origin $O\subset \mathbb{R}^m$ and positive number $c$. Reducing $\varepsilon_0>0$ again if necessary we assume that $\varepsilon_0 a\leqslant 1$, so that

$$ \begin{equation} (\varepsilon_0a)^{-1}\geqslant 1. \end{equation} \tag{4.15} $$

Set

$$ \begin{equation*} O:=\biggl\{y\in \mathbb{R}^m\colon \sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+\gamma)}< t_0\biggr\} \end{equation*} \notag $$
and
$$ \begin{equation*} c:=\max\biggl\{\frac{c_0}{a} \max_{i} \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i}, \frac{\varepsilon_0}{m},(|h|+\varepsilon_0)\max_{k, i}\biggl(c(\lambda_k) \biggl(\frac{m}{a\varepsilon_0}\biggr)^{\lambda_k/\alpha_i} \biggr) \biggr\}. \end{equation*} \notag $$
Here $c(\lambda_k)$ is the positive constant from (3.6).

The set $O$ is clearly an open neighbourhood of the origin in $\mathbb{R}^m$. We show that $\gamma$, $c$ and $O$ are as required.

Fix some $y\in O$, $y\neq 0$ and $p\in [0,\gamma]$. It follows from (4.15), the definition of $O$ and the inequality $t_0 < 1$ that $|y_i|<1$ for $i=1,\dots, m$.

First we assume additionally that $p>0$.

Set

$$ \begin{equation} t=t(y,p):=\sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+p)}, \quad \eta_i=\eta_i(y,p):=t^{-\alpha_i}y_i, \qquad i=1,\dots,m, \end{equation} \tag{4.16} $$
where
$$ \begin{equation*} \eta=\eta(y,p):=(\eta_1,\dots,\eta_m). \end{equation*} \notag $$
For $t$ and $\eta$ as above we show that
$$ \begin{equation} t\in (0,t_0]\quad\text{and} \quad |\eta| \leqslant a\varepsilon_0. \end{equation} \tag{4.17} $$

The inclusion in (4.17) follows from the obvious inequality $t>0$ and relations

$$ \begin{equation*} t=\sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i}|y_i|^{1/(\alpha_i+p)} \leqslant \sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+\gamma)}\leqslant t_0. \end{equation*} \notag $$
Here the equality is the definition of $t$, the first inequality holds because $|y_i|<1$ for all $i$ and $p\in [0,\gamma]$, and the second inequality holds by the definition of $O$ and the fact that $y\in O$.

Now we prove the inequality in (4.17). For each $i=1,\dots,m$ we have

$$ \begin{equation} |\eta_i|=t^{-\alpha_i} |y_i| \leqslant\biggl(\biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+p)}\biggr)^{-\alpha_i} |y_i| = \frac{a\varepsilon_0}{m} |y_i|^{p/(\alpha_i+p)}. \end{equation} \tag{4.18} $$
Here the first equality holds by the definition of $\eta_i $, and the inequality holds because by the definition of $t$, for each $i$ we have
$$ \begin{equation*} t\geqslant\biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+p)} \end{equation*} \notag $$
and also $t \leqslant 1$. On the other hand, since $|y_i|<1$ for each $i$, it follows from (4.18) that $|\eta_i|\leqslant a\varepsilon_0/m$ for each $i$. Hence the inequality in (4.17) is true.
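As a numeric sanity check, the definitions (4.16) and the resulting bound (4.18) can be tested on sample data. In the sketch below the constants $a$, $\varepsilon_0$, the exponents $\alpha_i$, the value of $p$ and the sample points $y$ are all illustrative assumptions; only the formulas for $t$, $\eta_i$ and the asserted inequality come from the text.

```python
# Check of (4.18): |eta_i| = t^{-alpha_i} |y_i| <= (a*eps0/m) |y_i|^{p/(alpha_i+p)},
# with t and eta defined as in (4.16). All numeric values are illustrative.
import random

a, eps0 = 0.5, 0.4          # assumed constants from Proposition 1 in [13]
alphas = [4.0, 2.0]         # assumed exponents alpha_i
m = len(alphas)
p = 0.3                     # assumed p in (0, gamma]

random.seed(0)
for _ in range(200):
    # random nonzero sample points with |y_i| < 1
    y = [random.uniform(0.01, 0.9) * random.choice([-1, 1]) for _ in range(m)]
    # definition (4.16) of t and eta
    t = sum((m / (a * eps0)) ** (1 / al) * abs(yi) ** (1 / (al + p))
            for al, yi in zip(alphas, y))
    eta = [t ** (-al) * yi for al, yi in zip(alphas, y)]
    for al, yi, ei in zip(alphas, y, eta):
        assert abs(ei) <= (a * eps0 / m) * abs(yi) ** (p / (al + p)) + 1e-12
```

The inequality holds because $t$ dominates each of its summands, so $t^{-\alpha_i}$ is dominated by the $i$th summand raised to the power $-\alpha_i$, exactly as in the argument after (4.18).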

In view of (4.17), for $t$ and $\eta$ introduced in (4.16) the quantity $\xi(t,\eta)$ defined above solves equation (4.11). However, for these $t$ and $\eta$, as shown above with the help of (2.3) and (4.8), the vector-valued equation (4.11) is equivalent to the coordinatewise equation (4.7). Hence the same point $\xi(t,\eta)$ solves equation (4.7). By (4.16) we have $y_i=t^{\alpha_i}\eta_i$ for each $ i$. Hence $F_i(x(t,\xi(t,\eta)))=y_i$ for each $i$. Therefore, $x(y,p)=x(t,\xi(t,\eta))$ is a solution of (3.1). Here the dependence of $t$ and $\eta$ on $(y,p)$ is expressed by (4.16).

Now we prove (3.3). We have

$$ \begin{equation*} \begin{aligned} \, |\xi(t,\eta)| &\leqslant \frac{1}{a}\max\{|\varphi(t)|,|\eta|\} \leqslant\frac{1}{a} \max\bigl\{c_0 t^{\gamma},|\eta|\bigr\} \leqslant \frac{1}{a} \max\bigl\{c_0 t^{p}, |\eta|\bigr\} \\ &\leqslant\frac{1}{a} \max\biggl\{c_0 t^{p},\frac{a\varepsilon_0}{m} \sum_{i=1}^m |y_i|^{p/(\alpha_i+p)} \biggr\} \\ &=\frac{1}{a} \max\biggl\{c_0 \biggl( \sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0} \biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+p)}\biggr)^p, \frac{a\varepsilon_0}{m} \sum_{i=1}^m|y_i|^{p/(\alpha_i+p)} \biggr\} \\ &\leqslant \frac{1}{a} \max\biggl\{c_0 \sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{p/\alpha_i}|y_i|^{p/(\alpha_i+p)}, \frac{a\varepsilon_0}{m} \sum_{i=1}^m |y_i|^{p/(\alpha_i+p)} \biggr\} \\ &\leqslant \frac{1}{a} \max\biggl\{c_0 \sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i}|y_i|^{p/(\alpha_i+p)}, \frac{a\varepsilon_0}{m} \sum_{i=1}^m |y_i|^{p/(\alpha_i+p)} \biggr\} \\ &\leqslant c \sum_{i=1}^m |y_i|^{p/(\alpha_i+p)}. \end{aligned} \end{equation*} \notag $$
Here the first inequality follows from (4.14), the second from (4.13), the third from the fact that $t\in(0,1)$ and $p \in (0, \gamma]$, the fourth from (4.18), the equality follows from the definition (4.16) of $t$, the fifth inequality follows from (3.5) and (3.6) because $p\leqslant 1$, the penultimate one from (4.15) and the relation $p\leqslant 1$ again, and the last inequality follows from the definition of the positive constant $c$.

By (4.6) it follows from the estimate just established that for each $k\in J$ we have

$$ \begin{equation*} |x_k(y,p)|=|x_k(t,\xi(t,\eta))|=|\xi_k(t,\eta)| \leqslant c \sum_{i=1}^m |y_i|^{p/(\alpha_i+p)}, \end{equation*} \notag $$
so that estimate (3.3) holds for $y\neq 0$.

Next we prove (3.2). For $k \notin J$ we have

$$ \begin{equation*} \begin{aligned} \, |x_k(y,p)| &=|x_k(t,\xi(t,\eta))|=|t|^{\lambda_k}|h_k+\xi_k(t,\eta)| \\ &\leqslant |t|^{\lambda_k} (|h|+\varepsilon) \leqslant (|h|+\varepsilon_0) \biggl( \sum_{i=1}^m \biggl(\frac{m}{a\varepsilon_0}\biggr)^{1/\alpha_i} |y_i|^{1/(\alpha_i+p)} \biggr)^{\lambda_k} \\ & \leqslant (|h|+\varepsilon_0) c(\lambda_k) \biggl( \max_{i} \biggl(\frac{m}{a\varepsilon_0}\biggr)^{\lambda_k/\alpha_i} \biggr) \sum_{i=1}^m |y_i|^{\lambda_k/(\alpha_i+p)} \\ &\leqslant c\sum_{i=1}^m |y_i|^{\lambda_k/(\alpha_i+p)}. \end{aligned} \end{equation*} \notag $$
Here the second equality follows from (4.5), the first inequality from (4.14), the second from the formula for $t$ and the relation $\varepsilon \leqslant \varepsilon_0$, the penultimate one from (3.5) and (3.6), and the last inequality follows from the definition of $c >0$. Thus, we have proved (3.2) for $p >0$ and $y\ne 0$.

So for $p\in (0,\gamma]$ and $y\ne 0$ the quantity $x=x(y,p)=x(t,\xi(t,\eta))$ we have constructed solves (3.1) and also satisfies (3.2) and (3.3).

Next we construct $x(y,0)$ for $y\ne 0$. Since elements of the set $\{x(y,p)\colon p \in (0,\gamma]\}$ satisfy (3.2), this is a bounded subset of $\mathbb{R}^n$. Hence there exists a sequence of positive numbers $p_N$, $N=1,2,3,\dots$, tending to zero such that $x(y,p_N)$ tends to a point $\chi(y)\in \mathbb{R}^n$. Taking the limit as $N\to+\infty$ in (3.2) and (3.3) for $p=p_N$ we obtain

$$ \begin{equation*} |\chi_k(y)| \leqslant c \sum_{i=1}^m|y_i|^{\lambda_k/\alpha_i} \quad \text{for } k\notin J, \qquad |\chi_k(y)| \leqslant c \quad \text{for } k\in J. \end{equation*} \notag $$
Moreover, since $F(x(y,p_N))=y$ and $F$ is continuous, we have $F(\chi(y))=y$. Hence for $y\ne 0$ the quantity $x=x(y,0):=\chi(y)$ we have constructed solves equation (3.1) and satisfies (3.2) for $p=0$ and (3.4).

Thus, for each $y\in O$ distinct from the origin and each $p\in [0,\gamma]$ we have proved that the required solution $x(y,p)$ exists. For $y=0$ the point $x=0$ is a solution satisfying all conditions in Theorem 3.2. So we complete the proof of the theorem by setting $x(0,p)=0$ for all $p\in [0,\gamma]$.

Theorem 3.2 is proved.

Proof of Theorem 3.3. Again, we assume for convenience that $\overline x=0$ and $\overline y=F(\overline x)=0$. We seek a solution of the equation $F(x)=y$ in the form $x=x(t,\xi)$, where $x(t,\xi)$ is the map defined by
$$ \begin{equation} \begin{gathered} \, x(t,\xi)=(x_1(t,\xi),\dots,x_n(t,\xi)), \\ x_k(t,\xi)=t^{\lambda_k}(h_k+\xi_k) \quad \text{for } k\notin J, \\ x_k(t,\xi)=\xi_k \quad \text{for } k \in J, \\ k=1,\dots,n, \qquad t\in (-1, 1), \qquad \xi=(\xi_1,\dots,\xi_n) \in \mathbb{R}^n. \end{gathered} \end{equation} \tag{4.19} $$
This map $x(t,\xi)$ coincides for $t>0$ with $x(t,\xi)$ defined in (4.5). For $t\leqslant 0$ the map $x(t,\xi)$ is well defined by (4.19) because for $k\notin J$ the $k$th component $\lambda_k$ of $\lambda$ is a positive integer.

For fixed $t$ and $y$ the equation with respect to $\xi$ has the form $F(x(t,\xi))=y$. Now we transform its left-hand side.

Consider some $i=1,\dots,m$. By (2.1) we have $\langle \lambda,s_{i,j}\rangle=\alpha_i$ for each $j=1,\dots,j_i$. Then, as in the proof of Theorem 3.2, we see that for $t \in (-1,1)$ we have the identity

$$ \begin{equation} P_i(x(t,\xi))\equiv t^{\alpha_i} P_i(h+\xi), \qquad\xi\in\mathbb{R}^n. \end{equation} \tag{4.20} $$

Let $\Delta=(\Delta_1,\dots,\Delta_m)\colon \mathbb{R}^n\,{\to}\, \mathbb{R}^m$ be the map in (2.3). Set $\widetilde{\Delta}_i(t,\xi)\,{:=}\,\Delta_i(x(t,\xi))$. Since the map $x(\,\cdot\,{,}\,\cdot\,)$ defined by (4.19) is polynomial and since $\Delta_i\in \mathcal{F}_{n,1}(0)$ by (2.3), we have $\widetilde{\Delta}_i \in \mathcal{F}_{n+1,1}(0)$. Moreover, by (2.2) there exists $\gamma\in(0,1]$ such that $\langle\lambda,d\rangle \geqslant \alpha_i+\gamma$ for all $d\in D_i$. Now, using arguments similar to the ones in the proof of Theorem 3.2 we see that

$$ \begin{equation} |\widetilde{\Delta}_i(t,\xi)|\leqslant \mathrm{const}\, |t|^{\alpha_i+\gamma} \end{equation} \tag{4.21} $$
for all $t$ and $\xi$ close to zero.

We expand $\widetilde{\Delta}_i$ by Taylor's formula in powers of $t$, with integral remainder of order $\alpha_i+1$. To do this fix some $\xi$ sufficiently close to zero. Then

$$ \begin{equation*} \widetilde{\Delta}_i(t,\xi) \equiv \widetilde{\Delta}_i(0,\xi)+\sum_{j=1}^{\alpha_i} \frac{t^j}{j!}\, \frac{ \partial^j \widetilde{\Delta}_i}{\partial t^j} (0,\xi) +R(t,\xi) \end{equation*} \notag $$
and
$$ \begin{equation*} R(t,\xi)\equiv \frac{1}{\alpha_i!}\int_{0}^1 (1-\theta)^{\alpha_i}\, \frac{\partial^{\alpha_i+1} \widetilde{\Delta}_i}{\partial t^{\alpha_i+1}} (\theta t,\xi) \cdot t^{\alpha_i+1}\,d\theta \end{equation*} \notag $$
for $t$ close to zero. Here $R(t,\xi)$ is a remainder term in integral form (for instance, see [16], Theorem 5.6.1).

By (4.21), for $\xi$ close to zero we have the identities

$$ \begin{equation*} \widetilde{\Delta}_i(0,\xi) \equiv 0\quad\text{and} \quad \frac{\partial^j \widetilde{\Delta}_i}{\partial t^j} (0,\xi) \equiv 0, \quad j=1,\dots,\alpha_i. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \widetilde{\Delta}_i(t,\xi) \equiv R(t,\xi) \equiv\frac{1}{\alpha_i!}\int_{0}^1 (1-\theta)^{\alpha_i}\, \frac{\partial^{\alpha_i+1} \widetilde{\Delta}_i}{\partial t^{\alpha_i+1}} (\theta t,\xi) \cdot t^{\alpha_i+1}\,d\theta. \end{equation*} \notag $$

Let $\Phi_i(t,\xi)$ be the function defined by $\widetilde{\Delta}_i(t,\xi)\equiv t^{\alpha_i} \Phi_i(t,\xi)$ for $t\neq 0$, and let $\Phi_i(0,\xi)\equiv 0$. Then the following identity holds:

$$ \begin{equation*} \Phi_i(t,\xi)\equiv \frac{1}{\alpha_i!}\int_{0}^1 (1-\theta)^{\alpha_i}\, \frac{\partial^{\alpha_i+1} \widetilde{\Delta}_i}{\partial t^{\alpha_i+1}} (\theta t,\xi) \cdot t\,d\theta, \end{equation*} \notag $$
so that $\Phi_i \in \mathcal{F}_{n+1,1}(0)$. Moreover,
$$ \begin{equation*} |\Phi_i(t,\xi)|\leqslant \mathrm{const}\, |t|^{\gamma} \end{equation*} \notag $$
for all $t$ and $\xi$ close to zero because we have inequality (4.21) and the identities $\widetilde{\Delta}_i(t,\xi)\equiv t^{\alpha_i} \Phi_i(t,\xi)$ for $t\neq 0$ and $\Phi_i(0,\xi)\equiv 0$. Hence for all such $\xi$ and $t$ the map $\Phi=(\Phi_1,\dots,\Phi_m)\in \mathcal{F}_{n+1,m}(0)$ has the estimate
$$ \begin{equation*} |\Phi(t,\xi)|\leqslant \mathrm{const}\, |t|^{\gamma}. \end{equation*} \notag $$
This shows that $\Phi(0,0)=0$ and $\partial \Phi(0,0)/\partial \xi =0$.
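The passage from $\widetilde{\Delta}_i$ to $\Phi_i$ via the integral-form remainder can be illustrated on a one-variable model (the $\xi$-dependence is suppressed). In the sketch below the sample function $\Delta(t)=t^3\cos t$, the choice $\alpha=2$ and the quadrature rule are assumptions made purely for illustration: for this $\Delta$ the derivatives up to order $\alpha$ vanish at $t=0$, so $\Phi(t)=\Delta(t)/t^{2}=t\cos t$, which is compared with the integral formula.

```python
# Numeric sketch of Phi(t) = (1/alpha!) * int_0^1 (1-theta)^alpha
#   * Delta^{(alpha+1)}(theta*t) * t dtheta
# for the sample function Delta(t) = t^3 cos(t) with alpha = 2,
# where Phi(t) = Delta(t)/t^alpha = t cos(t).
import math

alpha = 2

def d3_delta(s):
    # third derivative of t^3 cos(t), computed by hand
    return 6*math.cos(s) - 18*s*math.sin(s) - 9*s**2*math.cos(s) + s**3*math.sin(s)

def simpson(f, a, b, n=1000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k*h) for k in range(1, n // 2))
    return s * h / 3

for t in (0.25, 0.5, 1.0):
    phi = simpson(lambda th: (1 - th)**alpha * d3_delta(th * t) * t, 0.0, 1.0)
    phi /= math.factorial(alpha)
    assert abs(phi - t * math.cos(t)) < 1e-8
```

This is exactly the cancellation used in the proof: since the Taylor coefficients of $\widetilde{\Delta}_i$ up to order $\alpha_i$ vanish, the whole function equals its integral remainder, and dividing by $t^{\alpha_i}$ leaves one factor of $t$ inside the integral.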

Consider the equation

$$ \begin{equation*} P(h+\xi)+\Phi (t,\xi)=z \end{equation*} \notag $$
with respect to $\xi$, with parameters $t$ and $z$. By the above and the regularity condition (2.5), for $(t,z)=0$ the assumptions of the classical implicit function theorem hold for this equation at the point $\xi=0$. Hence we obtain the following result.

Let $\Gamma\subset \mathbb{R}^n$ be the orthogonal complement to the kernel of the linear operator $P'(h)$. Then there exist $\varepsilon \in (0,1)$ and a continuous map $\omega$ defined on $[-\varepsilon,\varepsilon] \times B(0,\varepsilon) \subset \mathbb R \times \mathbb R^m$ and taking values in $\mathbb{R}^n$ such that $\omega(0,0)= 0$ and the identity

$$ \begin{equation} P(h+\omega(t,z))+\Phi (t,\omega(t,z)) \equiv z \end{equation} \tag{4.22} $$
holds, and this implicit function $\omega(t,z)$ is unique. The last assertion means that there exists $\delta>0$ such that if $\omega\in B(0,\delta)\cap \Gamma$, $(t,z)\in [-\varepsilon,\varepsilon] \times B(0,\varepsilon)$ and $P(h+\omega)+\Phi(t,\omega)= z$, then $\omega=\omega(t,z)$. By the above property of the implicit function, since $P$ is analytic and $\Phi\in \mathcal{F}_{n+1,m}(0)$, the classical implicit function theorem yields $\omega \in \mathcal{F}_{1+m,n}(0)$.
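For the data of Example 5.1 in § 5 (where $P=F$, $h=(1,1,0)$ and hence $\Phi\equiv 0$) the implicit function of (4.22) can be written out explicitly: $P'(h)=(0,0,1)$, so $\Gamma=\operatorname{span}\{e_3\}$, and with $\xi=(0,0,z)$ we get $P(1,1,z)=1-1-1+1+z=z$, that is, $\omega(t,z)=(0,0,z)$. A minimal sketch checking this identity (the sample values of $z$ are illustrative):

```python
# For Example 5.1 the implicit function along Gamma = span{e3} is explicit:
# P(h + (0,0,z)) = P(1, 1, z) = z, so omega(t, z) = (0, 0, z).
def P(x1, x2, x3):
    return x1**4 - x1**3*x2 - x1*x2**3 + x2**4 + x1**4*x3

for z in (-0.3, 0.0, 0.2):
    assert abs(P(1.0, 1.0, z) - z) < 1e-12
```

In general, of course, $\omega$ is only given by the implicit function theorem and is not available in closed form.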

Consider the positive vector $\nu=(\nu_1,\dots , \nu_m)$ and number $d$ from the statement of Theorem 3.3. It follows from the definition of $d$ that $m d \geqslant 1$. Set

$$ \begin{equation*} O=\{y \in \mathbb R^m\colon m d |y|^{1/\nu_i} < \varepsilon,\, i=1,\dots,m\}. \end{equation*} \notag $$
Then $O$ is an open neighbourhood of the origin. Furthermore, if $y \in O$, then $|y_i| < 1$ for all $i$ because $\varepsilon \in (0,1]$ by construction.

We show that $O$ is the required set. Fix some $p\in [0,\gamma]$ and $y\in O$. First assume that $y\neq 0$.

For this $y$ and $p$ set

$$ \begin{equation} \begin{gathered} \, t=t(y,p):=\sum_{i=1}^m d |y_i|^{1/(\nu_i+p)}, \\ g(y,p) :=\widetilde g(y,t):= (t^{-\alpha_1} y_1,\dots, t^{-\alpha_m} y_m) \quad\text{for } t=t(y,p). \end{gathered} \end{equation} \tag{4.23} $$
Here we have $t > 0$ because $y\neq 0$ and $d > 0$ by construction.

First we show that $|g(y,p)| \leqslant \varepsilon$. In fact, by analogy with (4.18) we have

$$ \begin{equation*} |\widetilde g_i(y,t)|=t(y,p)^{-\alpha_i} |y_i| \leqslant \bigl(d |y_i|^{1/(\nu_i+p)}\bigr)^{-\alpha_i} |y_i|=d^{-\alpha_i} |y_i|^{((\nu_i-\alpha_i)+p)/(\nu_i+p)}, \end{equation*} \notag $$
where $\widetilde g_i$ is the $i$th coordinate of $\widetilde g$. From these inequalities we conclude that $|\widetilde g_i(y,t)| \leqslant d^{-\alpha_i}$ because $y\in O$ implies $|y_i| \leqslant 1$ and $(\nu_i-\alpha_i)+p \geqslant 0$ for all $ i$. However, $d^{-\alpha_i} \leqslant \varepsilon/m$ for all $i$ by assumption, so the $i$th coordinate of $\widetilde g$ has modulus at most $\varepsilon/m$. Hence the modulus of $\widetilde g(y,t)$ itself is at most $\varepsilon$, and therefore $|g(y,p)| \leqslant \varepsilon$ because $g(y,p)= \widetilde g(y,t)$ for $y \ne 0$ by (4.23) and $g(0,p) \equiv 0$ by definition.

We also show that $t=t(y,p) \leqslant \varepsilon$. In fact, using (4.23) we obtain

$$ \begin{equation*} t(y,p)=d \sum_{i=1}^m |y_i|^{1/(\nu_i+p)} \leqslant d m\max\{|y_1|^{1/\nu_1}, \dots , |y_m|^{1/\nu_m}\}\leqslant \varepsilon \end{equation*} \notag $$
because $y \in O$. Hence the composite map $\omega(t(y,p),g(y,p))$ is well defined.

Set

$$ \begin{equation} x(y,p):=x\bigl(t (y,p), \omega(t(y,p), g(y,p))\bigr). \end{equation} \tag{4.24} $$
The right-hand side is defined by (4.19) for $t=t(y,p)$, and $\xi=\omega(t(y,p), g(y,p))$. We claim that $x(y,p)$ solves the equation $F(x)=y$ and has representation (3.7).

In fact, by (4.22), for $t=t(y,p)$ and $z=g(y,p)$ we have

$$ \begin{equation} P_i(h+\omega(t,z))+\Phi_i (t,\omega(t,z))=t^{-\alpha_i}y_i, \qquad i=1,\dots,m. \end{equation} \tag{4.25} $$
For each $i$ we multiply this by $t^{\alpha_i}$. Then in view of (4.20), for $\xi=\omega(t,z)$ and $\Delta_i(x(t,\omega))\equiv t^{\alpha_i} \Phi_i(t,\omega)$ we obtain
$$ \begin{equation*} P_i(x(y,p))+\Delta_i (x(y,p))=y_i, \qquad i=1,\dots,m. \end{equation*} \notag $$
Hence we see from (2.3) that $F(x(y,p))=y$.

For $y\neq 0$ we deduce representation (3.7) from (4.24) as follows: first substitute into (4.24) the expressions for $x_k(t,\omega)$, $k=1,\dots,n$, from (4.19) with $\xi=\omega$, and then substitute formulae (4.23) for $t$ and $g$ into the resulting expression.

Now let $y=0$. Set $x(0,p):=0$. Then (3.7) obviously holds, and $x(0,p)$ solves the equation $F(x)=0$.

Theorem 3.3 is proved.

§ 5. Examples and discussions

We present an example of an equation that satisfies the assumptions of Theorems 3.2 and 3.3 but does not fall within the scope of Theorem 2.1.

Example 5.1. Let $n=3$, $m=1$, $\overline x=0 \in \mathbb{R}^3$, $\overline y=0\in \mathbb{R}$, and let

$$ \begin{equation*} F\colon \mathbb{R}^3 \to \mathbb{R}, \qquad F(x)=x_1^4-x_1^3x_2-x_1x_2^3+x_2^4+x_1^4x_3, \quad x=(x_1,x_2,x_3). \end{equation*} \notag $$
For this $F$ the equation $F(x)=y$ assumes the following form:
$$ \begin{equation*} x_1^4-x_1^3x_2-x_1x_2^3+x_2^4+x_1^4x_3=y. \end{equation*} \notag $$

Set $P:=F$ and $\lambda:=(1,1,0)$. Then $\alpha_1=4$ and $P$ is a $\lambda$-truncation of the map $F$ in a neighbourhood of the origin.

Set $h:=(1,1,0)$. We can verify directly that $P(h)=0$ and $P'(h)=(0,0,1)$, so that the linear operator $P'(h)\colon \mathbb{R}^3\to \mathbb{R}$ is surjective. Thus all assumptions of Theorem 3.2 are fulfilled in this example. By that theorem there exist a neighbourhood $O$ of the origin and positive numbers $c$ and $\gamma$ such that for each fixed $p\in [0,\gamma]$ and all $y\in O$ the equation $F(x)=y$ has a solution $x(y,p)$ satisfying (3.2)–(3.4) for $\lambda_1=\lambda_2=1$, $\lambda_3=0$, $\alpha=4$ and $J=\{3\}$. The assumptions of Theorem 3.3 are also fulfilled.
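The verification of $P(h)=0$ and $P'(h)=(0,0,1)$ is elementary; for readers who wish to check it mechanically, here is a minimal sketch in plain Python (the function names are ours, not from the paper; the partial derivatives are computed by hand from the formula for $F$):

```python
# Illustrative check (not part of the proof): evaluate F = P from
# Example 5.1 and its gradient at h = (1, 1, 0).

def F(x1, x2, x3):
    return x1**4 - x1**3*x2 - x1*x2**3 + x2**4 + x1**4*x3

def grad_F(x1, x2, x3):
    # Hand-computed partial derivatives of F.
    d1 = 4*x1**3 - 3*x1**2*x2 - x2**3 + 4*x1**3*x3
    d2 = -x1**3 - 3*x1*x2**2 + 4*x2**3
    d3 = x1**4
    return (d1, d2, d3)

h = (1, 1, 0)
print(F(*h))       # 0, so P(h) = 0
print(grad_F(*h))  # (0, 0, 1): a surjective linear map R^3 -> R
```

Since the third component of the gradient is nonzero, surjectivity of $P'(h)\colon\mathbb{R}^3\to\mathbb{R}$ is immediate.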

On the other hand, Theorem 2.1 cannot in principle be applied to $F$ at the origin. Consider an arbitrary positive vector $\lambda=(\lambda_1,\lambda_2, \lambda_3)$ and find the $\lambda$-truncation of $F$ in a neighbourhood of the origin. Three cases are possible.

First assume that $\lambda_2<\lambda_1$. We calculate the values $\langle\lambda,s\rangle$, where $s$ ranges over the multi-indices of the monomials of the polynomial $F$. We obtain $4\lambda_1$, $3\lambda_1+\lambda_2$, $\lambda_1+3\lambda_2$, $4\lambda_2$ and $4\lambda_1+\lambda_3$. The smallest of these quantities is $4\lambda_2$, which corresponds to the monomial $x_2^4$. Hence setting

$$ \begin{equation*} S_1:=\{(0,4,0)\}\quad\text{and} \quad D_1:=\{(4,0,0),\,(3,1,0), \, (1,3,0),\,(4,0,1)\} \end{equation*} \notag $$
we see that (2.2) holds for $\alpha_1=4\lambda_2$. In this case
$$ \begin{equation*} P(x):=x_2^4\quad\text{and} \quad \Delta(x):=x_1^4-x_1^3x_2-x_1x_2^3+x_1^4 x_3, \qquad x\in \mathbb{R}^3, \end{equation*} \notag $$
and relations (2.3) and (2.4) can be verified directly.

Thus, in the case under consideration, as the $\lambda$-truncation of $F$ in a neighbourhood of the origin we obtain $P(x) \equiv x_2^4$. Then the condition $P(h) = 0$, $h=(h_1,h_2,h_3)\in \mathbb{R}^3$, shows that $h_2=0$. For such $h$ we have $P'(h)=0$. Hence the assumption that a regular direction $h$ exists fails in our case.

Now consider the case when $\lambda_2>\lambda_1$. The smallest of the five quantities $4\lambda_1$, $3\lambda_1+\lambda_2$, $\lambda_1+3\lambda_2$, $4\lambda_2$ and $4\lambda_1+ \lambda_3$ is $4 \lambda_1$, which corresponds to $x_1^4$. Therefore, setting

$$ \begin{equation*} S_1:=\{(4,0,0)\}\quad\text{and} \quad D_1:=\{(3,1,0), \, (1,3,0),\,(0,4,0),\,(4,0,1)\} \end{equation*} \notag $$
we see that (2.2) holds for $\alpha_1=4\lambda_1$. Furthermore,
$$ \begin{equation*} P(x):=x_1^4\quad\text{and} \quad \Delta(x):= -x_1^3x_2-x_1x_2^3+x_2^4+x_1^4x_3, \qquad x\in \mathbb{R}^3, \end{equation*} \notag $$
while relations (2.3) and (2.4) are verified directly.

Thus, in this case the $\lambda$-truncation of $F$ in a neighbourhood of the origin has the form $P(x) \equiv x_1^4$. Then it follows from the condition $P(h) = 0$, $h=(h_1,h_2,h_3)\in \mathbb{R}^3$, that $h_1=0$. However $P'(h)=0$ for such $h$. Hence the assumption that there exists a regular direction $h$ fails in this case.

Now let $\lambda_2=\lambda_1$. Then the first four of the five quantities $4\lambda_1$, $3\lambda_1+\lambda_2$, $\lambda_1+3\lambda_2$, $4\lambda_2$ and $4\lambda_1+\lambda_3$ are all equal to $4\lambda_1$, which is smaller than the fifth. Hence setting

$$ \begin{equation*} S_1:=\{(4,0,0),\, (3,1,0), \, (1,3,0),\,(0,4,0)\}\quad\text{and} \quad D_1:=\{(4,0,1)\}, \end{equation*} \notag $$
we see that (2.2) holds for $\alpha_1=4\lambda_1$. In this case
$$ \begin{equation*} P(x):=x_1^4-x_1^3x_2-x_1x_2^3+x_2^4\quad\text{and} \quad \Delta(x):= x_1^4 x_3, \qquad x\in \mathbb{R}^3, \end{equation*} \notag $$
while (2.3) and (2.4) can be verified by direct calculations.

Thus, in the case under consideration the $\lambda$-truncation of $F$ in a neighbourhood of the origin has the form $P(x) \equiv x_1^4-x_1^3x_2-x_1x_2^3+x_2^4$. It is easy to see that $P(x) \equiv (x_1^2+x_1x_2+x_2^2)(x_1-x_2)^2$. The condition $P(h)=0$, $h=(h_1,h_2,h_3)\in \mathbb{R}^3$, implies that $h_1=h_2$. However, $P'(h)=0$ for such $h$, because the above identity for $P(x)$ means that $P'(x)=0$ for $x_1=x_2$. So the assumption on the existence of a regular direction $h$ also fails in this case.
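The identity $P(x)\equiv(x_1^2+x_1x_2+x_2^2)(x_1-x_2)^2$ and the vanishing of $P'$ on the line $x_1=x_2$ can be checked mechanically; here is a minimal sketch in plain Python (our own illustration, with hand-computed partial derivatives):

```python
# Illustrative check: P(x) = x1^4 - x1^3*x2 - x1*x2^3 + x2^4 factors as
# (x1^2 + x1*x2 + x2^2) * (x1 - x2)^2, and grad P vanishes when x1 = x2.

def P(x1, x2):
    return x1**4 - x1**3*x2 - x1*x2**3 + x2**4

def P_factored(x1, x2):
    return (x1**2 + x1*x2 + x2**2) * (x1 - x2)**2

def grad_P(x1, x2):
    # Hand-computed partial derivatives of P.
    return (4*x1**3 - 3*x1**2*x2 - x2**3, -x1**3 - 3*x1*x2**2 + 4*x2**3)

# Two polynomials of degree at most 4 in each variable that agree on an
# 11 x 11 integer grid coincide identically.
assert all(P(a, b) == P_factored(a, b)
           for a in range(-5, 6) for b in range(-5, 6))
# On the line x1 = x2 the gradient of P is zero, so no direction h with
# h1 = h2 can be regular for this truncation.
assert all(grad_P(t, t) == (0, 0) for t in range(-5, 6))
```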

Thus, for each $\lambda>0$ we have constructed the relevant $\lambda$-truncation, but none of these truncations admits a regular direction. Moreover, by Theorem 3.1, for each $\lambda>0$ the $\lambda$-truncation of $F$ is unique, so the assumptions of Theorem 2.1, as well as those of Theorem 2 in [13], are not fulfilled. At the same time, as shown above, Theorems 3.2 and 3.3 can be applied to $F$ at the origin for $\lambda=(1,1,0)$.
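The three cases above amount to selecting, for a given positive $\lambda$, the monomials of $F$ whose multi-indices $s$ minimize $\langle\lambda,s\rangle$. A minimal sketch of this selection in plain Python (our own illustration; the names are hypothetical):

```python
# Illustrative sketch: for F from Example 5.1, pick the monomials whose
# multi-indices s minimize <lambda, s>; they form the set S_1 of the
# lambda-truncation P, the remaining ones form D_1.
F_monomials = {(4, 0, 0): 1, (3, 1, 0): -1, (1, 3, 0): -1,
               (0, 4, 0): 1, (4, 0, 1): 1}

def truncation(lam):
    weight = {s: sum(l*e for l, e in zip(lam, s)) for s in F_monomials}
    a = min(weight.values())  # this is alpha_1 in the text
    return {s for s, w in weight.items() if w == a}

print(truncation((2, 1, 1)))  # lambda_2 < lambda_1: only (0, 4, 0), i.e. x2^4
print(truncation((1, 2, 1)))  # lambda_2 > lambda_1: only (4, 0, 0), i.e. x1^4
print(truncation((1, 1, 1)))  # lambda_1 = lambda_2: the first four monomials
```

The three sample vectors realize the three cases of the analysis above, and in each case the selected monomials reproduce the truncation $P$ found in the text.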

Example 5.1 also shows that Theorem 3.2 cannot be improved so that for $p=0$ all functions $x_k(y,0)$ in (3.4) become continuous at $y=\overline y$, that is, so that $x_k(y,0)\to \overline x_k$ as $y\to \overline y$. In this example $x_3(y,0)$ does not tend to $\overline x_3=0$ as $y\to \overline y=0$. Let us show this.

In fact, let $F\colon \mathbb{R}^3\to \mathbb{R}$ be the map in Example 5.1. Set $\overline x:=0$, $P:=F$ and $\lambda:=(1,1,0)$. Then, as shown above, $\overline y=0$, $\alpha=\alpha_1=4$, $J=\{3\}$, and all assumptions of Theorem 3.2 are fulfilled for $h=(1,1,0)$.

Consider a neighbourhood of the origin $O\subset \mathbb{R}$ and $c >0$ as in the claim of Theorem 3.2. Let $x_k=x_k(y,0)$, $k=1,2,3$, be solutions of the equation $F(x)=y$, $y\in O$, that satisfy relations (3.2) or, for $p=0$, (3.4), that is,

$$ \begin{equation*} |x_1|\leqslant c|y|^{1/4}, \quad |x_2|\leqslant c|y|^{1/4}\quad\text{and} \quad |x_3|\leqslant c, \qquad y \in O. \end{equation*} \notag $$

As mentioned above, $F(x)\equiv (x_1^2+x_1x_2+x_2^2)(x_1-x_2)^2+x_1^4x_3$. Since the product of the two expressions in parentheses is nonnegative, for $y<0$, $y\in O$ the solution of $F(x)=y$ satisfies $|y|\leqslant |x_1^4 x_3|$. Hence we obtain from the inequality $|x_1|\leqslant c|y|^{1/4}$ that $ |y|\leqslant |x_1^4 x_3| \leqslant c^{4}|y| |x_3|$. Therefore, $c^{-4} \leqslant |x_3|$. Thus, we have shown that $x_3=x_3(y,0)$ does not tend to zero as $y \to \overline y=0$, $ y <0$. Hence Theorem 3.2 cannot be improved in the way described above.
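The discontinuity of $x_3(y,0)$ can also be seen on an explicit family of solutions: for $y<0$ put $x_1=x_2=|y|^{1/4}$; then the quartic part of $F$ vanishes and $x_3=y/x_1^4=-1$ for every such $y$. A short check in plain Python (our own illustration):

```python
# Illustrative family of solutions of F(x) = y for y < 0: on the line
# x1 = x2 the quartic part of F vanishes, so F(x) = x1^4 * x3 and we may
# take x1 = x2 = |y|**(1/4), x3 = -1.  As y -> 0-, the coordinates x1 and
# x2 tend to 0 like |y|^(1/4), but x3 = -1 stays away from 0.

def F(x1, x2, x3):
    return x1**4 - x1**3*x2 - x1*x2**3 + x2**4 + x1**4*x3

for y in (-1e-2, -1e-4, -1e-8):
    s = abs(y) ** 0.25
    x = (s, s, -1.0)               # proposed solution
    assert abs(F(*x) - y) < 1e-12  # it solves F(x) = y (up to rounding)
```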

The same Example 5.1 shows that without the additional assumption $\lambda>0$ Theorem 3.2 cannot be refined by replacing (3.2) by the stronger estimate

$$ \begin{equation} |x_k(y,p)-\overline x_k|\leqslant c \sum_{i=1}^m|y_i-\overline y_i|^{\lambda_k/\alpha_i}, \qquad k\notin J. \end{equation} \tag{5.1} $$
In fact, let $x(y,p)=(x_1,x_2,x_3)$ be the solution of the equation $F(x)=y$ from the above example satisfying (5.1), so that $|x_1| \leqslant c|y|^{1/4}$, $|x_2| \leqslant c|y|^{1/4}$ and, in view of (3.3), $|x_3| \leqslant c |y|^{p/(4+p)}$ for $p>0$. Here $\alpha_1=4$, $\lambda_1=\lambda_2= 1$ and $\lambda_3=0$.

It follows from the above representation for $F$ that for $y<0$ we have ${|x_1^4x_3| \geqslant |y|}$. However, by the above inequalities $|x_1^4x_3| \leqslant c^5 |y|\,|y|^{p/(4+p)}$, which is smaller than $|y|$ once $y$ is sufficiently close to zero, since $p>0$. This contradiction shows that Theorem 3.2 cannot be refined in the way described.

Example 5.1 also shows that Theorem 3.3 cannot be strengthened so that all functions $x_k(y,0)$ in representation (3.7) for $p=0$ and $\nu_i=\alpha_i$ for all $i$ become continuous at the point $y=\overline y$. In fact, the assumptions of Theorem 3.3 are fulfilled in that example. As shown in Remark 3.4, the representations (3.7) for the solution $x_k(y,0)$, $k=1,2,3$, satisfy estimates (3.2) and (3.4) from Theorem 3.2. However, as shown above, $x_3(y,0)$ does not tend to zero as $y\to \overline y=0$, so this function is discontinuous at zero. By the same arguments the three conditions in Remark 3.5 cannot be dropped.


Bibliography

1. G. I. Arkhipov, V. A. Sadovnichii and V. I. Chubarikov, Lectures on mathematical analysis, 4th revised ed., Drofa, Moscow, 2004, 640 pp. (Russian)
2. A. L. Dontchev and R. T. Rockafellar, Implicit functions and solution mappings. A view from variational analysis, Springer Ser. Oper. Res. Financ. Eng., 2nd ed., Springer, New York, 2014, xxviii+466 pp.
3. R. G. Bartle and L. M. Graves, “Mappings between function spaces”, Trans. Amer. Math. Soc., 72:3 (1952), 400–413
4. V. M. Tikhomirov, “Lyusternik's theorem on a tangent space and some modifications of it”, Optimal control. Mathematics of production management, 7, Publishing House of Moscow State University, Moscow, 1977, 22–30 (Russian)
5. B. D. Gel'man, “A generalized implicit function theorem”, Funct. Anal. Appl., 35:3 (2001), 183–188
6. B. H. Pourciau, “Analysis and optimization of Lipschitz continuous mappings”, J. Optim. Theory Appl., 22:3 (1977), 311–351
7. F. H. Clarke, Optimization and nonsmooth analysis, Canad. Math. Soc. Ser. Monogr. Adv. Texts, Wiley-Intersci. Publ., John Wiley & Sons, Inc., New York, 1983, xiii+308 pp.
8. B. S. Mordukhovich, Variational analysis and generalized differentiation, v. 1: Basic theory, Grundlehren Math. Wiss., 330, Springer-Verlag, Berlin, 2006, xxii+579 pp.
9. A. F. Izmailov and A. A. Tret'yakov, 2-regular solutions of nonlinear problems, Fizmatlit, Moscow, 1999, 336 pp. (Russian)
10. E. R. Avakov, “Theorems on estimates in the neighborhood of a singular point of a mapping”, Math. Notes, 47:5 (1990), 425–432
11. A. V. Arutyunov, “Smooth abnormal problems in extremum theory and analysis”, Russian Math. Surveys, 67:3 (2012), 403–457
12. A. F. Izmailov, “Theorems on the representation of nonlinear mapping families and implicit function theorems”, Math. Notes, 67:1–2 (2000), 45–54
13. A. V. Arutyunov, “Existence of real solutions of nonlinear equations without a priori normality assumptions”, Math. Notes, 109:1 (2021), 3–14
14. A. V. Arutyunov and S. E. Zhukovskiy, “Stability of real solutions to nonlinear equations and its applications”, Proc. Steklov Inst. Math., 323 (2023), 1–11
15. G. H. Hardy, J. E. Littlewood and G. Pólya, Inequalities, Cambridge Univ. Press, Cambridge, 1934, xii+314 pp.
16. H. Cartan, Calcul différentiel, Hermann, Paris, 1967, 178 pp.; Formes différentielles. Applications élémentaires au calcul des variations et à la théorie des courbes et des surfaces, Hermann, Paris, 1967, 186 pp.

Citation: A. V. Arutyunov, S. E. Zhukovskiy, “Solvability of nonlinear degenerate equations and estimates for inverse functions”, Sb. Math., 216:1 (2025), 1–24