Sbornik: Mathematics, 2025, Volume 216, Issue 8, Pages 1092–1120
DOI: https://doi.org/10.4213/sm10072e
(Mi sm10072)
 

Asymptotics of a solution to a terminal control problem with two small parameters

A. R. Danilin, O. O. Kovrizhnykh

N.N. Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, Ekaterinburg, Russia
Abstract: An optimal control problem is considered in the class of piecewise continuous controls with smooth geometric constraints on a fixed interval of time, for a linear autonomous system with two small positive independent parameters, one of which, $\varepsilon$, multiplies some derivatives in the equations of the system, while the other, $\mu$, is involved in the initial conditions. The quality functional is convex and terminal, and depends only on the values of the slow variables at the terminal time.
A limit relation as the small parameters tend independently to zero is verified for the vector describing the optimal control.
Two cases are considered: the regular case, when the optimal control in the limiting problem is continuous, and the singular case, when this control has a singularity.
In the regular case the solution is shown to expand in a power series in $\varepsilon$ and $\mu$, while in the singular case the solution is asymptotically represented by an Erdélyi series — in either case the asymptotics is with respect to the standard gauge sequence $\varepsilon^k+\mu^k$, as $\varepsilon +\mu \to0$.
Bibliography: 23 titles.
Keywords: optimal control, terminal convex quality functional, asymptotic expansion, independent small parameters.
Received: 23.01.2024 and 02.04.2024
Published: 17.10.2025
Document Type: Article
MSC: 49N05, 93C70
Language: English
Original paper language: Russian

§ 1. Introduction

We study an optimal control problem (see [1]–[3]) for a linear autonomous system with slow and fast variables on a fixed interval of time in the class of piecewise continuous controls with smooth geometric constraints (see [4]–[7]). The quality functional is convex and terminal (see [8]), and depends only on the values of the slow variables at the terminal time. A particular feature of the problem is that the system contains two small positive parameters, one of which multiplies some derivatives in the equations of the system and the other is involved in the initial conditions.

Usually, problems with several small parameters are considered when one parameter is subordinate to the other, and the parameter domain is partitioned into parts on which particular asymptotic expansions are written. Arlen Il'in's suggestion was to find asymptotic expansions of solutions of various problems that apply to the whole range of the parameters. Thus, he posed to one of the present authors the problem of finding an asymptotic formula for the solution of a linear system of two ordinary differential equations with two small parameters multiplying the derivatives which would apply equally well to all possible relations between these parameters. The corresponding results were published in [9].

In the early 1990s Il’in initiated the study of the time-optimal problem with control in a ball in Euclidean space and a small perturbation $\varepsilon$ of the initial data such that the limiting problem has a solution with discontinuous control and the control in the perturbed problem is continuous. It was shown in [4] and [5] that the asymptotic behaviour of the solution is described by an Erdélyi series in the power sequence $\varepsilon^k$, whose terms are rational expressions in terms of a certain special function and its logarithm and are not rational functions of $\varepsilon$ and $\log\varepsilon$.

Subsequently, these authors joined forces to obtain full asymptotic expansions of solutions of problems in control theory, including, in particular, ones with two independent small parameters.

At a conference in Chelyabinsk, six months before Il'in's 80th birthday, Nazaikinskii (now a corresponding member of the Russian Academy of Sciences), who was the chairman of the section, asked us the following question concerning our talk: what can be said about the asymptotic behaviour of the solution of the time-optimal problem if the control system does not satisfy Vasil'eva's standard condition (that the matrix at the fast variables has eigenvalues with negative real parts)? Il'in also expressed interest in knowing the answer. Our advance in this direction was a paper treating the control of a point of small mass in a medium without resistance. There, generically, the solution was expanded into an asymptotic Erdélyi series in powers $\varepsilon^{k/2}$, whose terms have an involved form, similar to the first papers [4] and [5]. After we had reported on these results at the seminar of the Department of Equations of Mathematical Physics of the Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, on February 7, 2013 Il'in presented our note [10] for publication in the journal Doklady Akademii Nauk; this was one of the last notes presented by Academician Il'in for publication in Doklady Akademii Nauk.

The present paper combines these two areas of the authors' research: it deals with involved asymptotic expansions with respect to two independent small parameters.

§ 2. Statement of the problem and defining relations

Consider the indirect control problem for a linear autonomous system

$$ \begin{equation} \begin{cases} \dot{x}_{\varepsilon,\mu}=Ax_{\varepsilon,\mu}+By_{\varepsilon,\mu}, \qquad t\in[0,T], \\ \varepsilon\dot{y}_{\varepsilon,\mu}=-y_{\varepsilon,\mu}+u_{\varepsilon,\mu}, \qquad \|u_{\varepsilon,\mu}\|\leqslant 1, \\ x_{\varepsilon,\mu}(0)=x^{0}+\mu\xi_1, \qquad y_{\varepsilon,\mu}(0)=y^0+\mu\xi_2, \quad 0<\varepsilon\ll1, \quad 0<\mu\ll1, \end{cases} \end{equation} \tag{2.1} $$
with slow and fast variables and the performance criterion
$$ \begin{equation} J_{\varepsilon,\mu}(u):=\frac12\|x_{\varepsilon,\mu} (T;u)\|^2=:\varphi(x_{\varepsilon,\mu}(T;u)), \end{equation} \tag{2.2} $$
where $x_{\varepsilon,\mu}\in\mathbb{R}^{\mathrm{k}}$, $y_{\varepsilon,\mu}, u_{\varepsilon,\mu}\in\mathbb{R}^\mathrm{m}$; $A$ and $B$ are constant real matrices of appropriate sizes; $x_{\varepsilon,\mu}(\,{\cdot}\,;u)$ is the $x$-component of the solution of system (2.1) for fixed $u$.

Here and in what follows $\|\cdot\|$ denotes the Euclidean norm on the space under consideration, and $\langle\,\cdot\,{,}\,\cdot\,\rangle$ is the inner product. We use the superscript ‘$\top$’ to denote the transpose of a matrix; for example, $B^\top$ is the transpose of $B$.
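As a minimal illustration of the slow/fast structure, the sketch below (Python) integrates system (2.1) for one fixed admissible control and evaluates the functional (2.2); all data ($A$, $B$, $\varepsilon$, $\mu$, $T$, $\xi_1$, $\xi_2$ and the control $u$) are illustrative assumptions, not taken from this paper.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data (k = 2, m = 1), not from the paper.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
eps, mu, T = 0.05, 0.1, 2.0
xi1, xi2 = np.array([0.3, 0.1]), np.array([1.0])
x0 = np.array([1.0, -2.0]) + mu * xi1            # x^0 + mu*xi_1
y0 = np.array([0.0]) + mu * xi2                  # y^0 + mu*xi_2
u = lambda t: np.array([np.cos(t)])              # admissible: ||u(t)|| <= 1

def rhs(t, z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ x + B @ y,        # slow equations
                           (-y + u(t)) / eps])   # fast equations

sol = solve_ivp(rhs, (0.0, T), np.concatenate([x0, y0]), rtol=1e-8, atol=1e-10)
xT = sol.y[:2, -1]                               # x_{eps,mu}(T; u)
print(0.5 * np.linalg.norm(xT) ** 2)             # the value J_{eps,mu}(u) from (2.2)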

We write the control system from (2.1) as

$$ \begin{equation} \begin{cases} \dot{z}_{\varepsilon,\mu}=\mathcal{A}_\varepsilon z_{\varepsilon,\mu}+ \mathcal{B}_\varepsilon u_{\varepsilon,\mu}, \\ z_{\varepsilon,\mu}(0)=z_{\mu}^{0}, \end{cases} \end{equation} \tag{2.3} $$
where
$$ \begin{equation} z_{\varepsilon,\mu}(t)= \begin{pmatrix} x_{\varepsilon,\mu}(t) \\ y_{\varepsilon,\mu}(t) \end{pmatrix}, \qquad z_{\mu}^0=\begin{pmatrix} x^0+\mu\xi_1 \\ y^0+\mu\xi_2 \end{pmatrix}, \end{equation} \tag{2.4} $$
$$ \begin{equation} \mathcal{A}_{\varepsilon}= \begin{pmatrix} A& B \\ 0 & -\varepsilon^{-1} I \end{pmatrix}, \qquad \mathcal{B}_{\varepsilon}=\begin{pmatrix} 0\\ \varepsilon^{-1} I \end{pmatrix}, \end{equation} \tag{2.5} $$
and $I$ is the matrix of the identity mapping on the corresponding finite-dimensional space.

The problem obtained from (2.1), (2.2) for $\varepsilon=0$ and $\mu=0$ is as follows:

$$ \begin{equation} \begin{gathered} \, \dot{x}_0=A x_0+B u_0, \qquad t\in[0,T], \qquad \|u_0\|\leqslant 1, \qquad x_0(0)=x^0, \\ J_{0,0}(u_0):= \frac12\|x_0(T;u_0)\|^2 \to \min. \end{gathered} \end{equation} \tag{2.6} $$

Assumption 1. The system in (2.6) is completely controllable; in view of Kalman's criterion (see [3], § 2.3, Theorem 5), this is equivalent to the condition

$$ \begin{equation*} \operatorname{rank}(B,AB,\dots,A^{\mathrm{k}-1}B)=\mathrm{k}. \end{equation*} \notag $$
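This rank condition is easy to test numerically. A minimal sketch (Python); the particular pair $A$, $B$ below is the pair from (6.1) in § 6 with $\mathrm{n}=1$, used here only as an example:

import numpy as np

def kalman_rank(A, B):
    # Rank of the controllability matrix (B, AB, ..., A^{k-1}B).
    k = A.shape[0]
    blocks, M = [B], B
    for _ in range(k - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # the pair (6.1) with n = 1
B = np.array([[0.0], [1.0]])
print(kalman_rank(A, B) == A.shape[0])    # True: Assumption 1 holds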

Assumption 2. The system in (2.3) is completely controllable for each sufficiently small $\varepsilon>0$.

According to Theorem 1 in [11], Assumption 1 for system (2.1) is sufficient for Assumption 2 to hold.

Along with problems (2.3), (2.2) and (2.6), we consider the general problem

$$ \begin{equation} \begin{gathered} \, \displaystyle \dot{z}=\mathcal{A} z+\mathcal{B} u, \quad t\in[0,T], \qquad z(0)=z^{0}\in \mathbb{R}^\mathrm{n}, \quad\mathrm{n}>1, \qquad u(t)\in \mathbb{R}^\mathrm{m}, \quad\|u\|\leqslant 1, \\ J(u):=\sigma(z(T;u)) \to \min, \end{gathered} \end{equation} \tag{2.7} $$
$$ \begin{equation} \sigma\colon \mathbb{R}^{\mathrm{n}} \to \mathbb{R} \text{ is a convex continuously differentiable function on } \mathbb{R}^{\mathrm{n}} . \end{equation} \tag{2.8} $$

Let the following assumption hold.

Assumption 3. The system in (2.7) is completely controllable, that is,

$$ \begin{equation*} \operatorname{rank}(\mathcal{B},\mathcal{A}\mathcal{B},\dots, \mathcal{A}^{\mathrm{n}-1}\mathcal{B})=\mathrm{n}. \end{equation*} \notag $$

Let $\Xi(z^0,T)\subseteq\mathbb{R}^{\mathrm{n}}$ be the attainable set at time $T$ of the system in (2.7). According to [12], § 3, under Assumption 3 problem (2.7) is normal, that is, any two controls taking the initial point $z^0$ to the same boundary point of $\Xi(z^0,T)$ coincide almost everywhere on $[0,T]$. Hence $\Xi(z^0,T)$ is a strictly convex compact set (see, for example, [3], § 2.2, Theorem 3). Therefore, by (2.8) there exists a point $z_{\min}\in \Xi(z^0,T)$ such that

$$ \begin{equation*} \sigma_{\min}:= \sigma(z_{\min})=\min_{z\in \Xi(z^0,T)}\sigma(z), \qquad z_{\min}\in\operatorname{Arg} \min_{z\in \Xi(z^0,T)}\sigma(z). \end{equation*} \notag $$
Consequently, problem (2.7) is solvable.

The Pontryagin maximum principle for problem (2.7), (2.8) (see, for example, [8], § 6.1.3) gives a necessary condition for optimality: if $u^{\mathrm{opt}}$ is an optimal control in problem (2.7), then there exists a vector $L\in\mathbb{R}^{\mathrm{n}}$ (the value of the conjugate variable of the maximum principle at $T$) such that the solution $z(\,{\cdot}\,;u^{\mathrm{opt}})$ of the problem

$$ \begin{equation} \begin{cases} \dot{z}=\mathcal{A}z+\mathcal{B} u^{\mathrm{opt}}, \\ z(0)=z^0, \end{cases} \end{equation} \tag{2.9} $$
satisfies the boundary-value condition
$$ \begin{equation} L=-\nabla \sigma(z(T;u^{\mathrm{opt}})), \end{equation} \tag{2.10} $$
and
$$ \begin{equation} \langle C^\top (T-t)L, u^{\mathrm{opt}}(t)\rangle =\max_{\|u\|\leqslant 1} \langle C^\top (T-t)L, u\rangle=\| C^\top (T-t)L\|, \end{equation} \tag{2.11} $$
where $C^\top (t) := \mathcal{B}^\top e^{t\mathcal{A}^\top }$.

Let Assumption 3 be met and $L\not=0$ be an arbitrary vector in $\mathbb{R}^\mathrm{n}$. Then ${C^\top (T-t)L}$ has only a finite number of zeros on $[0,T]$. Consider the control $u(\,{\cdot}\,;L)$ defined by

$$ \begin{equation} u(t;L):=\frac{C^\top (T-t)L} {\|C^\top (T-t)L\|}. \end{equation} \tag{2.12} $$
Then $u(\,{\cdot}\,;L)$ satisfies
$$ \begin{equation*} \langle C^\top (T-t)L, u(t;L)\rangle =\max_{\|u\|\leqslant 1} \langle C^\top (T-t)L, u\rangle=\| C^\top (T-t)L\|. \end{equation*} \notag $$
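For illustration, the extremal control (2.12) is straightforward to evaluate numerically. A minimal sketch (Python with scipy); the data $\mathcal{A}$, $\mathcal{B}$, $T$ and $L$ are illustrative assumptions (for $\mathrm{m}=1$ the control is scalar and switches sign where $C^\top (T-t)L$ changes sign):

import numpy as np
from scipy.linalg import expm

# Illustrative data: n = 2, m = 1.
calA = np.array([[0.0, 1.0], [0.0, 0.0]])
calB = np.array([[0.0], [1.0]])
T, L = 2.0, np.array([1.0, -0.5])

def u(t):
    c = calB.T @ expm((T - t) * calA.T) @ L   # C^T(T - t) L
    return c / np.linalg.norm(c)              # a point of the unit sphere

print([u(t)[0] for t in (0.0, 1.0, 2.0)])     # [1.0, 1.0, -1.0]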

Theorem 1. Under Assumption 3, suppose that there exists a nonzero vector ${L\,{\in}\,\mathbb{R}^\mathrm{n}}$ satisfying

$$ \begin{equation} L=-\nabla \sigma(z(T;u(\,{\cdot}\,;L))), \end{equation} \tag{2.13} $$
where the control $u(t;L)$ is defined by (2.12), and $z(\,{\cdot}\,;u)$ is the solution of problem (2.9) for the given control $u(\,{\cdot}\,)$. Then $L$ is the unique solution of equation (2.13), and $u(\,{\cdot}\,;L)$ is the unique optimal control $u^{\mathrm{opt}}$ in problem (2.7), (2.8).

Note that the unique $L$ and $u(\,{\cdot}\,;L)$ obtained from (2.13), (2.12) satisfy the Pontryagin maximum principle (2.9)–(2.11).

Remark 1. Equation (2.13) has a nonzero solution if and only if

$$ \begin{equation} \operatorname{Arg} \min_{z\in \mathbb{R}^\mathrm{n}}\sigma(z)\cap\Xi(z^0,T) =\varnothing. \end{equation} \tag{2.14} $$

Remark 2. If condition (2.14) is not met and

$$ \begin{equation*} z_{\mathrm{glmin}}\in \operatorname{Arg} \min_{z\in \mathbb{R}^\mathrm{n}}\sigma(z)\cap\mathrm{int}\Xi(z^0,T), \end{equation*} \notag $$
then any control leading to the point $z_{\mathrm{glmin}}$ is optimal but not extremal, and (2.9)–(2.11) hold for $L=0$ (the degenerate case).

Definition 1. A nonzero vector $L$ satisfying (2.13) is called a defining vector, because by Theorem 1 it defines the unique optimal control in problem (2.7), (2.8) by formula (2.12).

Returning to the original problem (2.1), (2.2), we note that $\sigma(z)\equiv\varphi(x)$. Hence

$$ \begin{equation*} \nabla\sigma(z_{\varepsilon,\mu}(T;u))= \begin{pmatrix} \nabla_x\sigma(z_{\varepsilon,\mu}(T;u)) \\ \nabla_y\sigma(z_{\varepsilon,\mu}(T;u)) \end{pmatrix} = \begin{pmatrix} \nabla\varphi(x_{\varepsilon,\mu}(T;u)) \\ 0 \end{pmatrix}. \end{equation*} \notag $$

By (2.10) we have

$$ \begin{equation} L=\begin{pmatrix}\breve{l}_{\varepsilon,\mu}\\0\end{pmatrix}, \end{equation} \tag{2.15} $$
where $\breve{l}_{\varepsilon,\mu}\in\mathbb{R}^\mathrm{k}$. Let Assumption 2 be fulfilled, and let
$$ \begin{equation} \breve{l}_{\varepsilon,\mu}\neq0. \end{equation} \tag{2.16} $$
Then by Theorem 1, $\breve{l}_{\varepsilon,\mu}$ is the unique solution of the main equation
$$ \begin{equation} \breve{l}_{\varepsilon,\mu}= -\nabla \varphi(x_{\varepsilon,\mu}(T;u(\,{\cdot}\,;(\breve{l}_{\varepsilon,\mu},0)^\top ))), \end{equation} \tag{2.17} $$
and by (2.12) and (2.15) the vector $\breve{l}_{\varepsilon,\mu}$ defines the unique optimal control in problem (2.1), (2.2) by the formula
$$ \begin{equation} u^{\mathrm{opt}}_{\varepsilon,\mu}(t)=u(\,{\cdot}\,;(\breve{l}_{\varepsilon,\mu},0)^\top )=: u(\,{\cdot}\,;\breve{l}_{\varepsilon,\mu})= \frac{\mathcal{C}_{1,\varepsilon}^\top (T-t)\breve{l}_{\varepsilon,\mu}} {\|\mathcal{C}_{1,\varepsilon}^\top (T-t)\breve{l}_{\varepsilon,\mu}\|}, \end{equation} \tag{2.18} $$
where $\mathcal{C}_{1,\varepsilon}(t):= [e^{\mathcal{A}_\varepsilon t}\mathcal{B}_\varepsilon]_1$, and $[\,{\cdot}\,]_1$ denotes the first $\mathrm{k}$ rows of this matrix.

In this setting $\breve{l}_{\varepsilon,\mu}$ can naturally be called the defining vector in problem (2.1), (2.2), because it is the only vector which satisfies equation (2.17) and uniquely defines the optimal control by (2.18).

Similarly, under Assumption 1 the vector $l_0\in\mathbb{R}^\mathrm{k}$,

$$ \begin{equation} l_0\neq0, \end{equation} \tag{2.19} $$
which is the unique vector satisfying (2.13):
$$ \begin{equation} l_0=-\nabla \varphi(x_0(T;u_0(\,{\cdot}\,;l_0))), \end{equation} \tag{2.20} $$
defines a unique optimal control in problem (2.6) by the formula
$$ \begin{equation} u^{\mathrm{opt}}_0(t)=\frac{C_0^\top (T-t)l_0} {\|C_0^\top (T-t)l_0\|}, \qquad C_0(t) := e^{At}B. \end{equation} \tag{2.21} $$

A direct calculation with the use of (2.5) shows that

$$ \begin{equation*} e^{\mathcal{A}_\varepsilon t}= \begin{pmatrix} e^{A t} &\varepsilon(I+\varepsilon A)^{-1}(e^{A t}-e^{-t/\varepsilon}I)B \\ 0 & e^{-t/\varepsilon}I \end{pmatrix} \end{equation*} \notag $$
and
$$ \begin{equation*} e^{\mathcal{A}_\varepsilon t} \mathcal{B}_\varepsilon= \begin{pmatrix} (I+\varepsilon A)^{-1}(e^{A t}-e^{-t/\varepsilon}I)B \\ \varepsilon^{-1}e^{-t/\varepsilon}I \end{pmatrix}. \end{equation*} \notag $$
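These block formulas can be spot-checked against a direct matrix exponential. A minimal sketch (Python with scipy; the sizes $\mathrm{k}=2$, $\mathrm{m}=1$ and the matrices are illustrative assumptions):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
eps, t = 0.1, 0.7
k, m = A.shape[0], B.shape[1]
Aeps = np.block([[A, B],
                 [np.zeros((m, k)), -np.eye(m) / eps]])   # the matrix (2.5)
Beps = np.vstack([np.zeros((k, m)), np.eye(m) / eps])

E = expm(Aeps * t)
top_right = eps * np.linalg.inv(np.eye(k) + eps * A) \
                @ (expm(A * t) - np.exp(-t / eps) * np.eye(k)) @ B
print(np.allclose(E[:k, k:], top_right))                  # True
print(np.allclose(E @ Beps,
                  np.vstack([top_right / eps,             # (I+eps A)^{-1}(e^{At}-e^{-t/eps}I)B
                             np.exp(-t / eps) * np.eye(m) / eps])))   # True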

For the function $\varphi$ in (2.2) we have $\nabla \varphi(\zeta)=\zeta$, and so using Cauchy's formula for a solution of the linear system of differential equations (2.1) (where we substitute $T-t$ for $t$ in the integral) we can write the main equation (2.17) for the vector $\breve{l}_{\varepsilon,\mu}$ as

$$ \begin{equation} \begin{aligned} \, \notag &-\breve{l}_{\varepsilon,\mu}=e^{AT} (x^0+\mu\xi_1)+ \varepsilon(I+\varepsilon A)^{-1}(e^{A T}-e^{-T/\varepsilon}I)B(y^0+\mu\xi_2) \\ &\qquad+\int_{0}^{T} (I+\varepsilon A)^{-1}(e^{A t}-e^{-t/\varepsilon}I)B \frac{B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)(I+\varepsilon A^\top )^{-1} \breve{l}_{\varepsilon,\mu}} {\|B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)(I+\varepsilon A^\top )^{-1} \breve{l}_{\varepsilon,\mu}\|}\,dt. \end{aligned} \end{equation} \tag{2.22} $$
Now (2.20) assumes the form
$$ \begin{equation} -l_0=e^{AT} x^0+J_0(l_0), \qquad J_0(l_0):=\int_{0}^{T}C_0(t) \frac{C_0^\top (t)l_0} {\|C_0^\top (t)l_0\|}\,dt. \end{equation} \tag{2.23} $$
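On the model data of § 6 below ($\mathrm{n}=1$, $T=2$, $x^0=(1,-2)^\top$, for which the exact defining vector is $l_0=(1,0)^\top$), equation (2.23) can be solved by a naive fixed-point iteration; the sketch below (Python with scipy) is purely illustrative, and the convergence of such an iteration is not claimed in the paper. For $\mathrm{m}=1$ the normalized factor in $J_0$ reduces to a sign.

import numpy as np
from scipy.integrate import quad

T = 2.0
eAT = np.array([[1.0, T], [0.0, 1.0]])        # e^{AT} for A = [[0, 1], [0, 0]]
x0 = np.array([1.0, -2.0])

def J0(l):
    # J_0(l) = int_0^T C_0(t) C_0^T(t) l / ||C_0^T(t) l|| dt with C_0(t) = (t, 1)^T.
    comp = lambda t, i: (t, 1.0)[i] * np.sign(t * l[0] + l[1])
    return np.array([quad(comp, 0.0, T, args=(i,))[0] for i in range(2)])

l = np.array([0.5, 0.5])                      # arbitrary initial guess
for _ in range(10):
    l = -(eAT @ x0 + J0(l))                   # the fixed-point map of (2.23)
print(l)                                      # approx (1, 0)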

From the defining vector $\breve{l}_{\varepsilon,\mu}$ we go over to the new vector

$$ \begin{equation} l_{\varepsilon,\mu} := (I+\varepsilon A^\top )^{-1}\breve{l}_{\varepsilon,\mu}, \end{equation} \tag{2.24} $$
which will also be referred to as a defining vector for short.

Now equation (2.22) assumes the form

$$ \begin{equation} \begin{aligned} \, \notag -(I+\varepsilon A)(I+\varepsilon A^\top )l_{\varepsilon,\mu} &= (I+\varepsilon A)e^{AT}(x^0+\mu\xi_1)+ \varepsilon(e^{A T}-e^{-T/\varepsilon}I)B(y^0+\mu\xi_2) \\ &\qquad +\int_{0}^{T} (e^{A t}-e^{-t/\varepsilon}I)B \frac{B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)l_{\varepsilon,\mu}} {\|B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)l_{\varepsilon,\mu}\|}\,dt. \end{aligned} \end{equation} \tag{2.25} $$

Our aim here is to find the asymptotic behaviour of the defining vector $l_{\varepsilon,\mu}$ as $\varepsilon+\mu\to 0$.

§ 3. The limit theorem

Let $\Xi_0(x^0,T)$ be the attainable set of the control system (2.6) from a point $x^0$ at time $T$. Following § 3.4.1 in [13], for system (2.3) consider the set

$$ \begin{equation} \mathcal{K}_0:=\{z\in\mathbb{R}^{\mathrm{k}+\mathrm{m}}\colon x\in\Xi_0(x^0,T),\,y\in\mathcal{R}\}, \end{equation} \tag{3.1} $$
where
$$ \begin{equation*} \mathcal{R}:=\int_{0}^{+\infty}e^{-s}\mathcal{V}\,ds=\mathcal{V}=B[0,1], \end{equation*} \notag $$
because the set of admissible controls $\mathcal{V}$ in problem (2.1), (2.2) is the unit ball $B[0,1]$ in $\mathbb{R}^\mathrm{m}$.

Let $\mathcal{K}_{\varepsilon,\mu}:=\Xi_\varepsilon(z_\mu^0,T)$ be the attainable set of the control system (2.3) from the initial state $z_\mu^0$ (see (2.4)) at time $T$. According to [3], § 2.2, Theorem 1, and [13], § 3.4.1, $\Xi_0(x^0,T)$, $\mathcal{K}_0$ and $\mathcal{K}_{\varepsilon,\mu}$ are compact sets. In Lemma 3 of [14] it was shown that

$$ \begin{equation*} \lim_{\varepsilon+\mu\to0}d_H(\mathcal{K}_{\varepsilon,\mu},\mathcal{K}_0)=0, \end{equation*} \notag $$
where $d_H$ is the Hausdorff metric (see [15]).

Conditions (2.16) and (2.19) are important for the terminal functional $\varphi$ under consideration (see (2.2)). This function is continuously differentiable and strictly convex, and $0\in\mathbb{R}^{\mathrm{k}}$ is the (unique) point of its global minimum. Hence condition (2.14) is equivalent to the condition

$$ \begin{equation} x_{\mathrm{glmin}}=0\not\in \Xi_0(x^0,T). \end{equation} \tag{3.2} $$

The following result is analogous to Proposition 2 in [16] for a problem with two small parameters.

Proposition 1. If condition (3.2) is met, then there exists $\Delta>0$ such that if $\varepsilon$ and $\mu$ satisfy $\varepsilon+\mu<\Delta$ and $z_{\varepsilon,\mu}=(x_{\varepsilon,\mu},y_{\varepsilon,\mu})^\top \in \mathcal{K}_{\varepsilon,\mu}$, then $x_{\varepsilon,\mu}\neq0$, that is, system (2.1) satisfies condition (2.14).

Theorem 2. Let Assumption 1 be fulfilled and (2.19) hold. Then

$$ \begin{equation} l_{\varepsilon,\mu}\longrightarrow l_0 \quad\textit{as } \varepsilon+\mu\to 0. \end{equation} \tag{3.3} $$

Note that

$$ \begin{equation*} \breve{l}_{\varepsilon,\mu} := (I+\varepsilon A^\top )l_{\varepsilon,\mu} \longrightarrow l_0 \quad\text{as } \varepsilon+\mu\to 0. \end{equation*} \notag $$

§ 4. Asymptotic expansions and power series

In the case of two small parameters $\varepsilon$ and $\mu$ there are two standard gauge sequences, $\{\psi_{n,1}(\varepsilon,\mu)\}=\{(\varepsilon+\mu)^n\}$ and $\{\psi_{n,2}(\varepsilon,\mu)\}=\{\varepsilon^n+\mu^n\}$ (that is, $\psi_{n+1,i}(\varepsilon,\mu)/\psi_{n,i}(\varepsilon,\mu)\to 0$ as $\varepsilon+\mu\to 0$).

For $a>0$ and $b>0$ we have

$$ \begin{equation*} \begin{gathered} \, \frac{a^i b^{n-i}}{a^n+b^n}=\biggl( \frac{a^n+b^{n}}{a^i b^{n-i}}\biggr)^{-1} =\biggl( \biggl(\frac{a}{b}\biggr)^{n-i}+\biggl(\frac{b}{a}\biggr)^{i}\biggr)^{-1}\leqslant 1, \\ \frac{a^i b^{n-i}}{(a+b)^n} = \biggl( \frac{a}{a+b}\biggr)^i \biggl( \frac{b}{a+b}\biggr)^{n-i}\leqslant 1, \end{gathered} \end{equation*} \notag $$
and so
$$ \begin{equation*} (a+b)^n \leqslant 2^n(a^n+b^n)\quad\text{and} \quad a^n+b^n \leqslant 2(a+b)^n. \end{equation*} \notag $$
Consequently, as $\varepsilon+\mu\to 0$,
$$ \begin{equation*} (\varepsilon+\mu)^n=O(\varepsilon^n+\mu^n)\quad\text{and} \quad \varepsilon^n+\mu^n=O((\varepsilon+\mu)^n), \end{equation*} \notag $$
that is, both standard gauge sequences give the same asymptotic estimates.
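A quick numerical spot check of the two inequalities (Python; the samples are random and purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
for a, b in rng.uniform(0.0, 1.0, size=(1000, 2)):
    for n in range(1, 6):
        assert (a + b) ** n <= 2 ** n * (a ** n + b ** n) + 1e-12
        assert a ** n + b ** n <= 2.0 * (a + b) ** n + 1e-12
print("both inequalities hold on all samples")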

Note that in the general case (for $p$ small variables $v_1,\dots,v_p$) we also have asymptotic estimates as $|v_1|+\dots+|v_p| \to 0$:

$$ \begin{equation} \begin{gathered} \, (|v_1|+\dots+|v_p|)^n= O(|v_1|^n+\dots+|v_p|^n), \\ |v_1|^n+\dots+|v_p|^n= O((|v_1|+\dots+|v_p|)^n). \end{gathered} \end{equation} \tag{4.1} $$

Following Erdélyi’s definition (see [17], Definition 2.4), which applies also to multivariate asymptotic expressions, a series $\sum_{n=0}^{\infty}w_n(\varepsilon,\mu)$ is called an asymptotic expansion as $\varepsilon+\mu\to 0$ of a function $\mathcal{W}(\varepsilon,\mu)$ with respect to a gauge sequence $\psi_n(\varepsilon,\mu)$ if

$$ \begin{equation*} \forall\,N\in \mathbb{N} \qquad \mathcal{W}(\varepsilon,\mu) - \sum_{n=0}^Nw_n(\varepsilon,\mu)= o(\psi_N(\varepsilon,\mu)). \end{equation*} \notag $$

This can be expressed as

$$ \begin{equation*} \mathcal{W}(\varepsilon,\mu) \stackrel{\mathrm{as}}{=} \sum_{n=0}^{\infty}w_n(\varepsilon,\mu) \quad \text{with respect to the gauge sequence } \psi_n(\varepsilon,\mu). \end{equation*} \notag $$

The indication of the gauge sequence $\psi_n(\varepsilon,\mu)$ will be dropped if its form is clear from the context.

We will work with power series of many scalar variables $v_1,\dots,v_n$ with scalar or vector coefficients, that is, each term has the form $\beta v_1^{k_1}\cdots v_n^{k_n}$, $k_j\in\mathbb{N}\cup\{0\}$.

By $R(v_1^{\alpha_1},\dots,v_n^{\alpha_n};i)$ (possibly with indices) we denote power series of $v_1^{\alpha_1},\dots,v_n^{\alpha_n}$ with terms of degree $\geqslant i$, that is, if $\beta (v_1^{\alpha_1})^{k_1}\cdots(v_n^{\alpha_n})^{k_n}$ is a term of the series under consideration, then $\alpha_1 k_1+\dots+\alpha_n k_n\geqslant i$ (we assume that we know the full series at the time of consideration).

Remark 3. A representation of one of the arguments of a power series as a finite-dimensional vector is merely a shorthand for the set of its scalar coordinates in a fixed basis.

We will write power series in the form

$$ \begin{equation*} R(v;i):= R(v_1,\dots,v_n;i)= \sum_{p=i}^{\infty}P_p(v), \end{equation*} \notag $$
where $P_p(v):= P_p(v_1,\dots,v_n)$ are homogeneous polynomials of degree $p$ of $v=(v_1,\dots,v_n)$ with scalar or vector coefficients. In what follows the notation $P_{p,j}(v)$ is also used for various homogeneous polynomials of degree $p$ in the formula under consideration.

Remark 4. Using the concept of a ‘homogeneous polynomial of degree $n$’ of vector arguments (see, for example, [18], Ch. 1, § 6.1), we can define an analogue of a power series with arguments from an arbitrary vector space.

Remark 5. All power series considered in this paper are convergent if their arguments are sufficiently small, that is, they converge in a neighbourhood of the origin.

We note the following relations for power series, which hold for $i,j\geqslant 1$ and can be verified directly by definition:

$$ \begin{equation} \begin{gathered} \, R(R(w;i),v_2,\dots,v_k;j)= R(w,v_2,\dots,v_k;j), \ \\ R(\varepsilon v_1,\dots,\varepsilon v_k;j)= \varepsilon^j R(\varepsilon,v_1,\dots,v_k;j), \\ R(v,\|\rho\|^2;j)=R(v,\rho;j). \end{gathered} \end{equation} \tag{4.2} $$

Remark 6. In general, a power series of $\|\rho\|$ is not a power series of $\rho$.

When justifying asymptotic representations, it will be important to distinguish in a power series groups of independent small quantities and small quantities that eventually turn out to be functions of the former. In this connection we introduce the following notation: we denote by $ R(\omega\,|\,v;i)$ a power series with terms of the form $P_{p_1}(\omega)P_{p_2}(v)$, with the additional condition $p_1+p_2\geqslant i$ and $p_2\geqslant 1$, where $\omega=(\omega_1,\dots,\omega_{n_1})$ and $v=(v_1,\dots,v_{n_2})$.

For such series we have

$$ \begin{equation} \begin{gathered} \, R(\omega,v;i)=R(\omega;i)+R(\omega\,|\, v;i), \qquad R(\omega_j;i)=R(\omega;i), \\ R(\omega\,|\, P_n(\omega)+v;i)=R(\omega;i+n-1)+R(\omega\,|\, v;i), \\ \omega_j R(\omega;i)=R(\omega;i+1), \qquad \omega_j R(v;i)=R(\omega\,|\, v;i+1), \\ \omega_j R(\omega\,|\, v;i)=R(\omega\,|\, v;i+1), \qquad R(\omega_j\,|\, v;i)=R(\omega\,|\, v;i), \\ R(\omega_j\,|\, v_p;i)=R(\omega\,|\, v;i). \end{gathered} \end{equation} \tag{4.3} $$

The sum of a power series converging in a neighbourhood of zero is an analytic function there, and so by Taylor’s formula

$$ \begin{equation} \|R(v_1,\dots,v_n;i)\|=O(\|v\|^{i}), \qquad v\to 0, \end{equation} \tag{4.4} $$
where $v:=(v_1,\dots,v_n)$ and $\|v\|=|v_1|+\dots+|v_n|$.

In particular, as $\|\omega\|+\|v\|\to 0$,

$$ \begin{equation} \|R(\omega\,|\, v;1)\|=O(\|\omega\|+\|v\|)\quad\text{and} \quad \|R(\omega\,|\, v;2)\|=O(\|\omega\|\cdot\|v\|+\|v\|^2). \end{equation} \tag{4.5} $$

Remark 7. In view of (4.4) and (4.1) a power series converging in a neighbourhood of zero is an asymptotic expansion of its sum with respect to the standard gauge sequence.

In what follows, by $f_{i,j}$ or $f_i$ we denote constant vectors independent of $\varepsilon$ and $\mu$ (and known at the time of consideration).

§ 5. Asymptotic expansion of the defining vector in the regular case

We first consider the asymptotic behaviour of the defining vector $l_{\varepsilon,\mu}$ under the following condition:

$$ \begin{equation} \exists\, \gamma>0 \quad \forall\, t\in[0,T] \quad \|C^\top _0(t)l_0\| \geqslant \gamma. \end{equation} \tag{5.1} $$

In [19], in the case of one small parameter it was shown that the defining vector has a power-law asymptotic behaviour. However, the approach of [19] must be substantially redesigned in the case of two small parameters. So, instead of searching for the coefficients of an a priori given power series, we will obtain an asymptotic equality with respect to a small vector quantity, from which we will both deduce the form of the asymptotic expansion and justify this expansion.

Theorem 3. Let Assumption 1 be fulfilled and condition (5.1) hold. Then $r_{\varepsilon,\mu}=l_{\varepsilon,\mu} - l_0$ satisfies the asymptotic equality

$$ \begin{equation} r \stackrel{\mathrm{as}}{=} \varepsilon f_{1,0}+ \mu f_{0,1}+\varepsilon\mu f_{1,1}+R(\varepsilon,r;2) \end{equation} \tag{5.2} $$
and can be expanded, as $\varepsilon+\mu\to 0$, in a power series in $\varepsilon$ and $\mu$ with respect to the standard gauge sequence
$$ \begin{equation*} r_{\varepsilon,\mu} \stackrel{\mathrm{as}}{=} \sum_{n=1}^{\infty}r_n(\varepsilon,\mu), \qquad r_n(\varepsilon,\mu)=P_n(\varepsilon,\mu), \end{equation*} \notag $$
where $P_n(\varepsilon,\mu)$ is a homogeneous polynomial of degree $n$ in $\varepsilon$ and $\mu$ with vector coefficients from $\mathbb{R}^\mathrm{k}$. In particular, $r_1=\varepsilon f_{1,0}+\mu f_{0,1}$.

§ 6. An example of problem (2.1), (2.2) with a singularity in the optimal control (a singular case)

Consider problem (2.1), (2.2) for $\mathrm{k}=2\mathrm{n}$ and $\mathrm{m}=\mathrm{n}$, with

$$ \begin{equation} A=\begin{pmatrix} 0 & I\\ 0 & 0 \end{pmatrix}, \qquad B=\begin{pmatrix} 0 \\ I \end{pmatrix}, \qquad \xi_1=0, \qquad y^0=0\quad\text{and} \quad\xi_2=\xi \end{equation} \tag{6.1} $$
(that is, $x_{\varepsilon,\mu}\in\mathbb{R}^{2\mathrm{n}}$ and $y_{\varepsilon,\mu},u_{\varepsilon,\mu}\in\mathbb{R}^\mathrm{n}$). Here $I$ is the matrix of the identity map $\mathbb{R}^\mathrm{n}\to\mathbb{R}^\mathrm{n}$.

Note that, in view of Kalman's criterion, the pair of matrices $A,B$ in (6.1) satisfies Assumption 1, and the pair of matrices $\mathcal{A}_\varepsilon,\mathcal{B}_\varepsilon$ satisfies Assumption 2.

By (6.1),

$$ \begin{equation*} \begin{gathered} \, (I+\varepsilon A)(I+\varepsilon A^\top ) =\begin{pmatrix} I & \varepsilon I \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0\\ \varepsilon I & I \end{pmatrix} =\begin{pmatrix} (1+\varepsilon^2)I & \varepsilon I \\ \varepsilon I & I \end{pmatrix}, \\ e^{At}=\begin{pmatrix} I & t I\\ 0 & I \end{pmatrix}. \end{gathered} \end{equation*} \notag $$

Let

$$ \begin{equation*} \begin{gathered} \, l_{\varepsilon,\mu}= \begin{pmatrix}l_{\varepsilon,\mu,1}\\ l_{\varepsilon,\mu,2} \end{pmatrix}, \qquad l_0=\begin{pmatrix} l_{0,1}\\ l_{0,2} \end{pmatrix}, \quad x^0=\begin{pmatrix} x^0_1\\ x^0_2 \end{pmatrix}, \\ l_{\varepsilon,\mu,i},l_{0,i},x^0_i\in\mathbb{R}^\mathrm{n}, \qquad i=1,2. \end{gathered} \end{equation*} \notag $$
We have
$$ \begin{equation*} e^{A t}B-e^{-t/\varepsilon}B= \begin{pmatrix} tI\\ I\end{pmatrix} -\begin{pmatrix} 0\\ e^{-t/\varepsilon}I\end{pmatrix} =\begin{pmatrix} tI \\ (1-e^{-t/\varepsilon})I\end{pmatrix} \end{equation*} \notag $$
and
$$ \begin{equation*} B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)l_{\varepsilon,\mu}= t l_{\varepsilon,\mu,1}+(1-e^{-t/\varepsilon})l_{\varepsilon,\mu,2}, \end{equation*} \notag $$
and now the main equation (2.25) assumes the form
$$ \begin{equation} \begin{aligned} \, \notag &\begin{pmatrix} -(1+\varepsilon^2)l_{\varepsilon,\mu,1}-\varepsilon l_{\varepsilon,\mu,2} \\ -\varepsilon l_{\varepsilon,\mu,1}- l_{\varepsilon,\mu,2}\end{pmatrix} = \begin{pmatrix} x^0_1+Tx^0_2+\varepsilon x^0_2\\ x^0_2\end{pmatrix} +\varepsilon\mu\begin{pmatrix} T\xi\\ (1-e^{-T/\varepsilon})\xi \end{pmatrix} \\ &\qquad\qquad + \int_{0}^{T} \begin{pmatrix} tI \\ (1-e^{-t/\varepsilon})I\end{pmatrix} \frac{t l_{\varepsilon,\mu,1}+(1-e^{-t/\varepsilon})l_{\varepsilon,\mu,2}} {\|t l_{\varepsilon,\mu,1}+(1-e^{-t/\varepsilon})l_{\varepsilon,\mu,2}\|}\,dt. \end{aligned} \end{equation} \tag{6.2} $$
Note that equation (6.2) has a unique nonzero solution $l_{\varepsilon,\mu}$.

In turn, equation (2.23) has the form

$$ \begin{equation} -\begin{pmatrix}l_{0,1}\\l_{0,2} \end{pmatrix}= \begin{pmatrix} x^0_1+Tx^0_2\\ x^0_2\end{pmatrix} +\int_{0}^{T} \begin{pmatrix} tI \\ I\end{pmatrix} \frac{t l_{0,1}+l_{0,2}}{\|t l_{0,1}+l_{0,2}\|}\,dt. \end{equation} \tag{6.3} $$

We claim that we can choose an initial vector $x^0$ so that the denominator in the expression (2.21) for the optimal control in the limiting problem has a unique zero of the first order at the initial point $t=0$; this is equivalent to the relations

$$ \begin{equation} B^\top l_0=0, \qquad B^\top A^\top l_0\neq0, \qquad B^\top e^{A^\top t}l_0\neq0 \quad \forall\, t\in(0,T]. \end{equation} \tag{6.4} $$

For the matrices in (6.1) conditions (6.4) are equivalent to

$$ \begin{equation} l_{0,2}=0\quad\text{and} \quad l_{0,1}\neq0. \end{equation} \tag{6.5} $$

Let

$$ \begin{equation} l_{0,1}=\mathbf{e}, \qquad \|\mathbf{e}\|=1. \end{equation} \tag{6.6} $$
Then (6.3) assumes the form
$$ \begin{equation*} -\begin{pmatrix}l_{0,1}\\0 \end{pmatrix} = \begin{pmatrix} x^0_1+Tx^0_2\\ x^0_2\end{pmatrix} +l_{0,1}\int_{0}^{T} \begin{pmatrix} tI \\ I\end{pmatrix}\,dt. \end{equation*} \notag $$
Hence
$$ \begin{equation} x^0_1=\biggl(\frac{T^2}{2}-1\biggr)l_{0,1}, \qquad x^0_2=-T l_{0,1} \end{equation} \tag{6.7} $$
and the vectors $x^0_1$ and $x^0_2$ are collinear with $l_{0,1}$, so $x^0_1\parallel x^0_2$.

Let

$$ \begin{equation} T=2. \end{equation} \tag{6.8} $$
Now from (6.6) and (6.7) we obtain
$$ \begin{equation} x_1^0=\mathbf{e}\quad\text{and} \quad x_2^0=-2\mathbf{e}. \end{equation} \tag{6.9} $$
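A quick arithmetic check of (6.3) and (6.7)–(6.9) for $\mathrm{n}=1$, $\mathbf{e}=1$ (Python; illustrative only, with the sign factor in the integrand of (6.3) equal to $+1$):

import numpy as np

T, e = 2.0, 1.0
x1, x2 = (T ** 2 / 2 - 1.0) * e, -T * e       # (6.7): x1 = 1, x2 = -2
integral = e * np.array([T ** 2 / 2, T])      # int_0^T (t, 1)^T dt times l_{0,1}
rhs = np.array([x1 + T * x2, x2]) + integral
print(np.allclose(-np.array([e, 0.0]), rhs))  # True: (6.3) holds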

By (3.3), (6.5) and (6.6),

$$ \begin{equation} l_{\varepsilon,\mu,1}\longrightarrow \mathbf{e}\quad\text{and}\quad l_{\varepsilon,\mu,2}\longrightarrow0 \quad\text{as } \varepsilon+\mu\to0. \end{equation} \tag{6.10} $$

Assumption 4. Assume that $\|\xi\|=1$ and $\xi\nparallel \mathbf{e}$.

Theorem 4. Let Assumption 4 be fulfilled and (6.8) and (6.9) hold. Then, as $\varepsilon+\mu\to 0$, the vector $l_{\varepsilon,\mu}$ expands in an asymptotic series with respect to the standard gauge sequence $\{\varepsilon^k+\mu^k\}$. This is a power series in the small vector

$$ \begin{equation*} \omega_{\varepsilon,\mu}^\top := \biggl(\varepsilon,\mu,\varepsilon\log\varepsilon,\frac{\varepsilon}{W(\varepsilon)}, \frac{\mu}{W(\varepsilon)}\biggr)\colon\quad l_{\varepsilon,\mu} \stackrel{\mathrm{as}}{=} \begin{pmatrix} \mathbf{e}+2\varepsilon\mathbf{e}\\ 0\end{pmatrix} +\sum_{k=2}^{\infty}P_k(\omega_{\varepsilon,\mu}), \end{equation*} \notag $$
where $W(\varepsilon)=K+3\log\varepsilon$ for some known constant $K$ and the $P_k(\omega_{\varepsilon,\mu})$ are homogeneous polynomials of degree $k$ in the components of the vector $\omega_{\varepsilon,\mu}$ with vector coefficients from $\mathbb{R}^{2\mathrm{n}}$.

In particular, the asymptotic representation of the vectors $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ up to $o(\varepsilon^2+\mu^2)$ has the form

$$ \begin{equation*} \begin{aligned} \, l_{\varepsilon,\mu,1} &=\mathbf{e}+2\varepsilon\mathbf{e}+ \varepsilon \biggl( \varepsilon f_2+\frac{\varepsilon}{W(\varepsilon)}f_3 -\frac23\mu(\xi+2\alpha_\xi\mathbf{e})+ \frac23\frac{\mu}{W(\varepsilon)}(\xi+7\alpha_\xi\mathbf{e})\biggr) \\ &\qquad+o(\varepsilon^2+\mu^2), \\ l_{\varepsilon,\mu,2} &= \varepsilon \biggl( \varepsilon f_4 - \mu\alpha_\xi\mathbf{e}+\frac{\varepsilon}{W(\varepsilon)}f_5 - \frac{\mu}{W(\varepsilon)}(\xi+7\alpha_\xi\mathbf{e}) \biggr)+ o(\varepsilon^2+\mu^2) \end{aligned} \end{equation*} \notag $$
as $\varepsilon+\mu\to 0$, where $\alpha_\xi:= \langle \xi,\mathbf{e} \rangle$.

From (2.24) and (6.1) we obtain

$$ \begin{equation*} \breve{l}_{\varepsilon,\mu,1}=l_{\varepsilon,\mu,1} \end{equation*} \notag $$
and
$$ \begin{equation*} \breve{l}_{\varepsilon,\mu,2}=\varepsilon\mathbf{e}+2\varepsilon^2\mathbf{e}+ \varepsilon \biggl( \varepsilon f_4 - \mu\alpha_\xi\mathbf{e}+\frac{\varepsilon}{W(\varepsilon)}f_5 +\frac{3\mu}{W(\varepsilon)}(3\alpha_\xi\mathbf{e}-4\xi)\biggr)+ o(\varepsilon^2+\mu^2). \end{equation*} \notag $$

The optimal control is unique, and therefore its asymptotic expansion can be obtained by substituting the above series into (2.18) and using (2.24) and (12.1).

§ 7. Proofs of the general theorems

7.1. Proof of Theorem 1

1. According to [3], § 2.2, Corollary 1, the control (2.12) is extremal, that is, $z_L:= z(T;u(\,{\cdot}\,;L))\in \partial \Xi(z^0,T)$ and $L$ is the outward normal vector to the support hyperplane to the set $\Xi(z^0,T)$ at $z_L$.

Set

$$ \begin{equation} \mathcal{M}_C :=\{z\in\mathbb{R}^{\mathrm{n}}\colon\sigma(z)\leqslant C\}, \qquad C > \inf_{z\in\mathbb{R}^{\mathrm{n}}} \sigma(z). \end{equation} \tag{7.1} $$

Note that if $\sigma(\widetilde{z})=C$, where $C$ satisfies the condition from (7.1), then $\mathrm{int}\mathcal{M}_C$ is a nonempty convex set, and $\nabla\sigma(\widetilde{z})$ is the outward normal vector (relative to $\mathcal{M}_C$) to the (unique) support hyperplane to the set $\mathcal{M}_C$ at the point $\widetilde{z}$.

Consider an arbitrary nonzero vector $L$.

If $z_L\not=z_{\min}$, then $\sigma(z_L) > \sigma_{\min}$ and $\mathrm{int}\mathcal{M}_{\sigma(z_L)}\cap \Xi(z^0,T) \neq\varnothing$. Hence the (unique) support hyperplane to $\mathcal{M}_{\sigma(z_L)}$ at $z_L$ does not separate $\mathcal{M}_{\sigma(z_L)}$ from $\Xi(z^0,T)$, so the vectors $\nabla \sigma(z_L)$ and $L$ cannot be oppositely directed, and, as a result, $L$ does not satisfy (2.13). So if $L\neq0$ satisfies (2.13), then $ z_L=z_{\min}$ and $\nabla \sigma(z_{\min})\neq 0$.

2. Consider now the point $z_{\min}$.

Since $\nabla \sigma(z_{\min})\not=0$, we have $\mathrm{int}\mathcal{M}_{\sigma_{\min}}\not=\varnothing$ and $\mathrm{int}\mathcal{M}_{\sigma_{\min}}\cap \Xi(z^0,T)=\varnothing$. Hence the separation theorem for convex sets applies. Here the unique hyperplane separating $\mathcal{M}_{\sigma_{\min}}$ from $\Xi(z^0,T)$ is orthogonal to $\nabla \sigma(z_{\min})$. Now, if a nonzero vector $L$ is directed oppositely to $\nabla\sigma(z_{\min})$, then $z(T;u(\,{\cdot}\,;L))\,{=}\,z_{\min}$.

If two nonzero vectors $L_1$ and $L_2$ are directed oppositely to $\nabla \sigma(z_{\min})$, then $L_1$ and $L_2$ have the same direction, that is, there exists $\lambda>0$ such that $L_1=\lambda L_2$. Hence $u(\,{\cdot}\,;L_1)=u(\,{\cdot}\,;L_2)$ by (2.12).

To complete the proof we note that, of the nonzero vectors directed oppositely to $\nabla\sigma(z_{\min})$, only one ($L=-\nabla \sigma(z_{\min})$) satisfies equation (2.13).

Theorem 1 is proved.

7.2. Proof of Proposition 1

Assume for a contradiction that for each $\Delta > 0$ there exist $\varepsilon_{\Delta}$ and $\mu_{\Delta}$ such that $\varepsilon_{\Delta}+\mu_{\Delta}\,{<}\,\Delta$ and $z_{\varepsilon_{\Delta},\mu_{\Delta}}\,{=}\, (0,y_{\varepsilon_{\Delta},\mu_{\Delta}})^\top \in \mathcal{K}_{\varepsilon_{\Delta},\mu_{\Delta}}$.

Taking $\Delta_n=1/n$, we find that $\varepsilon_n+\mu_n<1/n\to0$ as $n\to+\infty$ and $z_n:=(0,y_{\varepsilon_n,\mu_n})^\top\!\! \in\! \mathcal{K}_{\varepsilon_n,\mu_n}$. Note that the first $\mathrm{k}$ coordinates of any partial limit ${\overline{z}\!=\!(\overline{x},\overline{y})^\top}$ of $\{z_n\}$ are zero, that is, $\overline{x}=0$, and $\overline{z}\in\mathcal{K}_0$ (see [12], Assertion 3, with $\varepsilon$ replaced by ${\varepsilon+\mu}$). However, in view of (3.1) this contradicts condition (3.2).

This proves Proposition 1.

7.3. Proof of Theorem 2

First we note that by (2.16) and (2.24) the vector $l_{\varepsilon,\mu}$ is nonzero and bounded as $\varepsilon+\mu\to 0$. Let $\widetilde{l}_0$ be an arbitrary limit point of $ l_{\varepsilon,\mu}$, that is, there exist $\{\varepsilon_n\}$ and $\{\mu_n\}$ such that $\varepsilon_n+\mu_n \to 0$ and $\widetilde{l}_n:= l_{\varepsilon_n,\mu_n}\to \widetilde{l}_0$.

Set $\widehat{l}_n:= \widetilde{l}_n/\|\widetilde{l}_n\|$. Let $\widehat{l}_0$ be an arbitrary limit point of $\widehat{l}_n$. We can assume without loss of generality that $\widehat{l}_n\to \widehat{l}_0$.

After replacing $l_{\varepsilon,\mu}$ by $\widetilde{l}_n$ the terms outside the integral in (2.25) have limits as ${n\to+\infty}$, and so the integral has a finite limit. Therefore,

$$ \begin{equation} -\widetilde{l}_0=e^{AT}x^0+J_0, \end{equation} \tag{7.2} $$
where
$$ \begin{equation*} J_0=\lim_{n\to+\infty} J_n \end{equation*} \notag $$
and
$$ \begin{equation*} J_n:= \int_{0}^{T} (e^{A t}-e^{-t/\varepsilon}I)B \frac{B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)\widetilde{l}_n} {\|B^\top (e^{A^\top t}-e^{-t/\varepsilon}I)\widetilde{l}_n\|}\,dt. \end{equation*} \notag $$

Let us find this limit. We split $J_n$ into the integrals over $[0,\sqrt{\varepsilon_n}]$ and $[\sqrt{\varepsilon_n}, T]$. The integrand is uniformly bounded, and on $[\sqrt{\varepsilon_n},T]$ the terms containing $e^{-t/\varepsilon_n}$ are exponentially small, so (here $\mathbb{O}$ denotes an asymptotic zero: $\mathbb{O}=o(\varepsilon_n^{\gamma})$ as $n\to+\infty$ for each $\gamma>0$)

$$ \begin{equation} \begin{gathered} \, J_n=O(\sqrt{\varepsilon_n})+\mathbb{O}+ \int_{\sqrt{\varepsilon_n}}^{T} C_0(t) \frac{C_0^\top (t)\widetilde{l}_n} {\|C^\top _0(t)\widetilde{l}_n\|}\,dt= O(\sqrt{\varepsilon_n})+ \mathbb{O}+J_{n,0}(\widetilde{l}_n), \\ J_{n,0}(\widetilde{l}_n) := \int_{0}^{T} C_0(t) \frac{C_0^\top (t)\widetilde{l}_n} {\|C^\top _0(t)\widetilde{l}_n\|}\,dt. \end{gathered} \end{equation} \tag{7.3} $$
Next, the integrand in (7.3) is positively homogeneous in the vector $\widetilde{l}_n$, and so $ J_{n,0}(\widetilde{l}_n)=J_{n,0}(\widehat{l}_n)$. But according to Lemma 4 in [14],
$$ \begin{equation*} J_{n,0}(\widehat{l}_n) \to J_{0,0}(\widehat{l}_0)=\int_{0}^{T} C_0(t) \frac{C_0^\top (t)\widehat{l}_0} {\|C^\top _0(t)\widehat{l}_0\|}\,dt. \end{equation*} \notag $$

Now equality (7.2) assumes the form

$$ \begin{equation*} -\widetilde{l}_0=e^{AT}x^0+\int_{0}^{T} C_0(t) \frac{C_0^\top (t)\widehat{l}_0}{\|C^\top _0(t)\widehat{l}_0\|}\,dt \in \Xi_0(x^0,T). \end{equation*} \notag $$
Hence $\widetilde{l}_0\not=0$ in view of condition (2.19), which is equivalent to (3.2): $0\notin\Xi_0(x^0,T)$. Since $J_{0,0}(\widehat{l}_0)=J_{0,0}(\widetilde{l}_0)$ by positive homogeneity, the vector $\widetilde{l}_0$ satisfies equation (2.23), which is uniquely solvable by Theorem 1. Hence $\widetilde{l}_0=l_0$. Thus the defining vector $l_{\varepsilon,\mu}$ has a unique limit point $l_0$ as $\varepsilon+\mu\to 0$, and so $l_{\varepsilon,\mu}\longrightarrow l_0$ as $\varepsilon+\mu\to 0$.

Theorem 2 is proved.

§ 8. The regular case

8.1.

Consider the new quantity

$$ \begin{equation*} r:= l_{\varepsilon,\mu} - l_0, \end{equation*} \notag $$
which is small by Theorem 2. We find an asymptotic equality (with respect to $\varepsilon$, $\mu$ and $r$) generated by the main equation (2.25).

Note that the integrand in (2.25) has substantially different asymptotic expansions in $\varepsilon$ for large $t$ ($t\geqslant\varepsilon^q$, $q\in(0,1)$) and small $t\in(0,\varepsilon^q)$. So to find the asymptotic behaviour of the integral in (2.25) it is natural to use the auxiliary parameter method (see, for example, [20], [21], § 30.II, and [22]).

We split the integral in (2.25) into two:

$$ \begin{equation} J :=\int_{0}^{T}\frac{C_{\varepsilon}(t)C^\top _{\varepsilon}(t)(l_0+r)} {\|C^\top _{\varepsilon}(t)(l_0+r)\|}\,dt= \int_{0}^{\nu}+\int_{\nu}^{T}=: J_1+J_2, \end{equation} \tag{8.1} $$
where $C_{\varepsilon}(t):= (e^{At} - e^{-t/\varepsilon})B$ and $\nu=\varepsilon^p$, $p\in(1/2,2/3)$, is an auxiliary parameter.

Note that for $t\geqslant\nu\approx \varepsilon^\alpha$, $0<\alpha<1$, we have

$$ \begin{equation*} C_{\varepsilon}(t)=\mathbb{O}+C_0(t). \end{equation*} \notag $$

First consider the integral $J_2$. Recalling (5.1), we have

$$ \begin{equation*} \begin{aligned} \, &J_2+\mathbb{O}= \int_{\nu}^{T}\frac{C_0(t)C^\top _0(t)(l_0+r)} {\|C^\top _0(t)(l_0+r)\|}\,dt \\ &\quad=\int_{\nu}^{T}\!C_0(t)C^\top _0(t)(l_0+r) \bigl(\|C^\top _0(t)l_0\|^2+2\langle C^\top _0(t)l_0, C^\top _0(t)r\rangle+ \|C^\top _0(t)r\|^2\bigr)^{-1/2}\,dt \\ &\quad=\int_{\nu}^{T}\frac{C_0(t)C^\top _0(t)(l_0+r)}{\psi(t)} \biggl(1+\frac{2\langle C^\top _0(t)l_0, C^\top _0(t)r\rangle} {\psi(t)^2}+ \frac{\|C^\top _0(t)r\|^2}{\psi(t)^2}\biggr)^{-1/2}\,dt \\ &\quad=\int_{\nu}^{0}+\int_{0}^{T}= \mathcal{F}(\nu,r)+J_0+\mathcal{A} r+R(r;2), \end{aligned} \end{equation*} \notag $$
where
$$ \begin{equation*} \begin{gathered} \, \psi(t):=\|C^\top _0(t)l_0\|, \\ \mathcal{A} r=\int_{0}^{T}\biggl( \frac{C_0(t)C^\top _0(t)r}{\psi(t)} - \frac{\langle C^\top _0(t)r,C^\top _0(t)l_0\rangle C_0(t)C^\top _0(t)l_0}{\psi(t)^3} \biggr)\,dt, \qquad \mathcal{A}\geqslant 0, \end{gathered} \end{equation*} \notag $$
and $\mathcal{F}(\nu,r)$ is a series which will not be involved in the resulting asymptotic expression for $J$ by virtue of the auxiliary parameter method.

Thus, we have

$$ \begin{equation} J_2 \stackrel{\mathrm{as}}{=} J_0+\mathcal{A} r+R(r;2)+\mathcal{F}(\nu,r). \end{equation} \tag{8.2} $$

For $J_1$ we obtain

$$ \begin{equation} \begin{gathered} \, J_1=[t=\varepsilon\tau]=\varepsilon \int_{0}^{\nu/\varepsilon} \frac{E_{\varepsilon}(\tau)E^\top _{\varepsilon}(\tau)(l_0+r)} {\|E^\top _{\varepsilon}(\tau)(l_0+r)\|}\,d\tau, \\ E_{\varepsilon}(\tau):= (e^{A\varepsilon\tau} - e^{-\tau}I)B. \end{gathered} \end{equation} \tag{8.3} $$

Since $\varepsilon\tau\leqslant \nu$ is small, we can expand $e^{A\varepsilon\tau}$ as a Taylor series in powers of $\varepsilon\tau$. Now from (8.3) we obtain

$$ \begin{equation*} \begin{aligned} \, E_{\varepsilon}(\tau):= (e^{A\varepsilon\tau}-I+I - e^{-\tau}I)B &=\bigl((1 - e^{-\tau})I+ \varepsilon\tau R(\varepsilon\tau;0)\bigr)B \\ &= (1 - e^{-\tau}) \bigl(I+\varepsilon q(\tau) R(\varepsilon\tau;0)\bigr)B, \end{aligned} \end{equation*} \notag $$
where
$$ \begin{equation} q(\tau)=\frac{\tau}{1-e^{-\tau}}\stackrel{\mathrm{as}}{=} \begin{cases} 1+O(\tau), & \tau\to 0, \\ \tau, & \tau \to+\infty. \end{cases} \end{equation} \tag{8.4} $$

From (8.4) we obtain $\varepsilon q(\tau)=O(\nu)$ on $[0,\nu/\varepsilon]$, and therefore

$$ \begin{equation*} \begin{aligned} \, \|E^\top _{\varepsilon}(\tau)(l_0+r)\|^{-1} &=\frac{1}{1-e^{-\tau}}\|B^\top l_0+B^\top r+ \varepsilon q(\tau) R(\varepsilon\tau,r;0)\|^{-1} \\ &=\frac{1}{(1-e^{-\tau})\|B^\top l_0\|} (1+R(\varepsilon q(\tau),\varepsilon\tau,r;1)). \end{aligned} \end{equation*} \notag $$

Now by (8.4)

$$ \begin{equation*} \begin{aligned} \, \frac{E_{\varepsilon}(\tau)E^\top _{\varepsilon}(\tau)(l_0+r)} {\|E^\top _{\varepsilon}(\tau)(l_0+r)\|} &=\frac{1-e^{-\tau}}{\|B^\top l_0\|} (BB^\top l_0+R_1(\varepsilon q(\tau),\varepsilon\tau,r;1) ) (1+R(\varepsilon q(\tau),\varepsilon\tau,r;1)) \\ &= \frac{1-e^{-\tau}}{\|B^\top l_0\|} (BB^\top l_0+R_2(\varepsilon q(\tau),\varepsilon\tau,r;1) ) \\ &= \frac{(1-e^{-\tau})BB^\top l_0}{\|B^\top l_0\|}+ (1-e^{-\tau}) R_3(\varepsilon q(\tau),\varepsilon\tau,r;1). \end{aligned} \end{equation*} \notag $$

By (8.3) we have

$$ \begin{equation} J_1=\frac{BB^\top l_0}{\|B^\top l_0\|} \int_{0}^{\nu/\varepsilon}\varepsilon(1-e^{-\tau})\,d\tau+ \int_{0}^{\nu/\varepsilon}\varepsilon(1-e^{-\tau}) R_3(\varepsilon q(\tau),\varepsilon\tau,r;1)\,d\tau. \end{equation} \tag{8.5} $$

The integral in the first term can be calculated explicitly. We have

$$ \begin{equation} \int_{0}^{\nu/\varepsilon}\varepsilon(1-e^{-\tau})\,d\tau= \nu-\varepsilon+\mathbb{O}. \end{equation} \tag{8.6} $$
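A direct numerical check of (8.6) for the stated choice $\nu=\varepsilon^p$, $p\in(1/2,2/3)$ (Python; the value $p=0.6$ is an arbitrary illustrative choice):

import numpy as np

# int_0^{nu/eps} eps (1 - e^{-tau}) dtau = nu - eps + eps*exp(-nu/eps),
# and the last term is an asymptotic zero since nu/eps = eps^{p-1} -> +infinity.
p = 0.6
for eps in (1e-1, 1e-2, 1e-3):
    nu = eps ** p
    integral = eps * (nu / eps + np.exp(-nu / eps) - 1.0)
    print(integral - (nu - eps))              # decays faster than any power of eps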

Consider the second term in (8.5). Integrating the series under the integral sign term by term, we have to integrate expressions of the form

$$ \begin{equation*} \varepsilon(1-e^{-\tau})(\varepsilon q(\tau))^{p_1} (\varepsilon\tau)^{p_2}= \varepsilon^{1+p_1+p_2} (1-e^{-\tau}) q^{p_1}(\tau) \tau^{p_2}=:\varepsilon^{1+p_1+p_2} q_{p_1,p_2}(\tau) \end{equation*} \notag $$
for $p_1,p_2\geqslant 0$. Hence
$$ \begin{equation*} q_{p_1,p_2}(\tau)\stackrel{\mathrm{as}}{=} \begin{cases} O(\tau), & \tau\to 0, \\ \tau^{p_1+p_2}, & \tau\to+\infty, \end{cases} \quad\text{and so } q_{p_1,p_2}(\tau)-\tau^{p_1+p_2}\stackrel{\mathrm{as}}{=} \begin{cases} O(\tau), & \tau\to 0, \\ 0, & \tau\to+\infty. \end{cases} \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \begin{aligned} \, \int_{0}^{\nu/\varepsilon}q_{p_1,p_2}(\tau)\,d\tau &=\int_{0}^{\nu/\varepsilon}(q_{p_1,p_2} (\tau)-\tau^{p_1+p_2})\,d\tau+ \int_{0}^{\nu/\varepsilon}\tau^{p_1+p_2}\,d\tau \\ &=\int_{0}^{+\infty}(q_{p_1,p_2}(\tau)-\tau^{p_1+p_2})\,d\tau+ \mathbb{O}+\frac{1}{p_1+p_2+1} \biggl(\frac{\nu}{\varepsilon}\biggr)^{p_1+p_2+1}, \end{aligned} \end{equation*} \notag $$
and now we have
$$ \begin{equation*} \varepsilon^{p_1+p_2+1}\int_{0}^{\nu/\varepsilon} q_{p_1,p_2}(\tau)\,d\tau \stackrel{\mathrm{as}}{=} \varepsilon^{p_1+p_2+1} D_{p_1,p_2}+ \nu^{p_1+p_2+1} \widetilde{D}_{p_1,p_2}. \end{equation*} \notag $$

Thus,

$$ \begin{equation} J_1 \stackrel{\mathrm{as}}{=} -\frac{\varepsilon BB^\top l_0}{\|B^\top l_0\|}+ \varepsilon R(\varepsilon,r;1)+\mathcal{F}(\nu,r). \end{equation} \tag{8.7} $$

From (8.1), (8.2), (8.6) and (8.7) we obtain the required asymptotic formula for $J$:

$$ \begin{equation} J \stackrel{\mathrm{as}}{=} J_0+\mathcal{A} r - \frac{\varepsilon BB^\top l_0}{\|B^\top l_0\|}+ R(r;2)+\varepsilon R(\varepsilon,r;1). \end{equation} \tag{8.8} $$
Substituting (8.8) into (2.25) we obtain
$$ \begin{equation*} -(I+\mathcal{A})r \stackrel{\mathrm{as}}{=} \varepsilon(A+A^\top )l_0+\varepsilon e^{AT}(Ax^0+By^0)+ \mu e^{AT}\xi_1 - \frac{\varepsilon BB^\top l_0}{\|B^\top l_0\|}+ \varepsilon\mu (Ae^{AT}\xi_1+e^{AT} B\xi_2)+R(\varepsilon,r;2). \end{equation*} \notag $$

Since $I+\mathcal{A} > 0$, this operator is invertible, and so, applying the operator $(I+\mathcal{A})^{-1}$ to the previous asymptotic equality we obtain the main asymptotic equality

$$ \begin{equation} r \stackrel{\mathrm{as}}{=} \varepsilon f_{1,0}+\mu f_{0,1}+ \varepsilon\mu f_{1,1}+R(\varepsilon,r;2). \end{equation} \tag{8.9} $$

Note that $r_{\varepsilon,\mu}=l_{\varepsilon,\mu} - l_0$ a fortiori satisfies the asymptotic equality (8.9).

8.2. Proof of Theorem 3

It will be convenient to rewrite (5.2) as

$$ \begin{equation} r \stackrel{\mathrm{as}}{=} \varepsilon f_{1,0}+\mu f_{0,1}+ \varepsilon\mu f_{1,1}+R(\varepsilon;2)+R(\varepsilon\,|\, r;2). \end{equation} \tag{8.10} $$

First we note that the series in (8.10) are convergent for small $\varepsilon$ and $r$, hence by (4.5)

$$ \begin{equation*} \|r_{\varepsilon,\mu}\|=O(\varepsilon+\mu+ \varepsilon\|r_{\varepsilon,\mu}\|+\|r_{\varepsilon,\mu}\|^2), \quad\text{that is, } \|r_{\varepsilon,\mu}\| \leqslant K (\varepsilon+\mu+ \varepsilon\|r_{\varepsilon,\mu}\|+ \|r_{\varepsilon,\mu}\|^2), \end{equation*} \notag $$
and therefore
$$ \begin{equation*} \|r_{\varepsilon,\mu}\|(1 - K\varepsilon - K\|r_{\varepsilon,\mu}\|) \leqslant K (\varepsilon+\mu). \end{equation*} \notag $$
We have $1 - K\varepsilon - K\|r_{\varepsilon,\mu}\|\to 1$ as $\varepsilon+\mu\to 0$, and so $\|r_{\varepsilon,\mu}\|=O(\varepsilon+\mu)$.

We set $r_1(\varepsilon,\mu):= \varepsilon f_{1,0}+\mu f_{0,1}$.

Now for $\widetilde{r}_{2}(\varepsilon,\mu):= r_{\varepsilon,\mu} - r_1(\varepsilon,\mu)$ we obtain the new equality

$$ \begin{equation*} \begin{aligned} \, \widetilde{r}_{2}(\varepsilon,\mu) &\stackrel{\mathrm{as}}{=} \varepsilon\mu f_{1,1}+R(\varepsilon;2)+ R(\varepsilon\,|\,\widetilde{r}_{2}(\varepsilon,\mu)+\varepsilon f_{1,0}+\mu f_{0,1};2) \\ &=P_2(\varepsilon,\mu)+R(\varepsilon,\mu;3)+ R(\varepsilon,\mu\,|\,\widetilde{r}_{2}(\varepsilon,\mu);2), \end{aligned} \end{equation*} \notag $$
and the new estimate
$$ \begin{equation*} \|\widetilde{r}_{2}(\varepsilon,\mu)\| \leqslant K(\varepsilon^2+\mu^2+ \varepsilon\|\widetilde{r}_{2}(\varepsilon,\mu)\|+\mu\|\widetilde{r}_{2}(\varepsilon,\mu)\|+ \|\widetilde{r}_{2}(\varepsilon,\mu)\|^2), \end{equation*} \notag $$
from which we have, as above,
$$ \begin{equation*} \|\widetilde{r}_{2}(\varepsilon,\mu)\|=O(\varepsilon^2+\mu^2 ). \end{equation*} \notag $$
Set $r_2(\varepsilon,\mu):= P_2(\varepsilon,\mu)$.

Next we use induction. For $\widetilde{r}_{n}(\varepsilon,\mu):= \widetilde{r}_{n-1}(\varepsilon,\mu) - r_{n-1}(\varepsilon,\mu)$ we have

$$ \begin{equation*} \widetilde{r}_{n}(\varepsilon,\mu)=P_n(\varepsilon,\mu)+R(\varepsilon,\mu;n+1)+ R(\varepsilon,\mu\,|\,\widetilde{r}_{n}(\varepsilon,\mu);2), \end{equation*} \notag $$

whence, as above, $\|\widetilde{r}_{n}(\varepsilon,\mu)\|=O(\varepsilon^n+\mu^n)$, and we set $r_n(\varepsilon,\mu):= P_n(\varepsilon,\mu)$.

Theorem 3 is proved.

§ 9. Construction of an asymptotic expansion for the integral in (6.2)

Assumption 5. Let $l_{\varepsilon,\mu,2}=o(\varepsilon)$ as $\varepsilon+\mu\to0$.

We seek the vector $l_{\varepsilon,\mu,2}$ in the form

$$ \begin{equation} l_{\varepsilon,\mu,2}=\lambda_{\varepsilon,\mu} l_{\varepsilon,\mu,1}+ r_{\varepsilon,\mu}, \quad\text{where } l_{\varepsilon,\mu,1}\perp r_{\varepsilon,\mu}. \end{equation} \tag{9.1} $$
By Assumption 5,
$$ \begin{equation} \lambda_{\varepsilon,\mu}=o(\varepsilon)\quad\text{and} \quad r_{\varepsilon,\mu}= o(\varepsilon) \quad\text{as } \varepsilon+\mu\to0. \end{equation} \tag{9.2} $$

Now the main equation (6.2) assumes the form

$$ \begin{equation} \begin{aligned} \, \notag & \begin{pmatrix} -(1+\varepsilon^2)l_{\varepsilon,\mu,1} - \varepsilon \lambda_{\varepsilon,\mu} l_{\varepsilon,\mu,1}-\varepsilon r_{\varepsilon,\mu} \\ -\varepsilon l_{\varepsilon,\mu,1} - \lambda_{\varepsilon,\mu} l_{\varepsilon,\mu,1} - r_{\varepsilon,\mu}\end{pmatrix} = \begin{pmatrix} - 3\mathbf{e} - 2\varepsilon \mathbf{e}\\ -2\mathbf{e}\end{pmatrix} \\ &\qquad\qquad +\varepsilon\begin{pmatrix}T\mu \xi\\\mu \xi+\mathbb{O} \end{pmatrix} +\displaystyle\int_{0}^{T} \begin{pmatrix} tI \\ (1-e^{-t/\varepsilon})I\end{pmatrix} u( t,\varepsilon,\mu)\,dt, \end{aligned} \end{equation} \tag{9.3} $$
where $\mathbb{O}$ is the asymptotic zero with respect to the power-law asymptotic sequence, that is, for each $\gamma>0$ we have $\mathbb{O}=o(\varepsilon^{\gamma})$ as $\varepsilon\to 0$, and where
$$ \begin{equation} u( t,\varepsilon,\mu)=\frac{(t+\lambda_{\varepsilon,\mu} (1-e^{-t/\varepsilon})) l_{\varepsilon,\mu,1}+ (1-e^{-t/\varepsilon})r_{\varepsilon,\mu}} {\|(t+\lambda_{\varepsilon,\mu}(1-e^{-t/\varepsilon})) l_{\varepsilon,\mu,1}+(1-e^{-t/\varepsilon})r_{\varepsilon,\mu}\|}. \end{equation} \tag{9.4} $$

Consider the new quantities

$$ \begin{equation} \widehat{l}:= l_{\varepsilon,\mu,1} - \mathbf{e}, \qquad \widehat{\lambda}:= \frac{\lambda_{\varepsilon,\mu}}{\varepsilon}\quad\text{and} \quad \widehat{r}:= \frac{r_{\varepsilon,\mu}}{\varepsilon}, \end{equation} \tag{9.5} $$
which are small in view of (6.10), Assumption 5 and (9.2). We also consider the auxiliary quantities
$$ \begin{equation} \chi_{\varepsilon,\mu}:=\frac{l_{\varepsilon,\mu,1}}{\| l_{\varepsilon,\mu,1}\|}, \qquad\|\chi_{\varepsilon,\mu}\|=1, \qquad \widehat{\rho}:=\frac{\widehat{r}}{\| l_{\varepsilon,\mu,1}\|} \quad\text{and} \quad \widetilde{\rho}:=\| \widehat{\rho}\|, \qquad (\mathbf{e}+\widehat{l})\perp\widehat{\rho}, \end{equation} \tag{9.6} $$
defined in terms of $\widehat{l}$, $ \widehat{\lambda}$ and $ \widehat{r}$. In view of (9.5) we have
$$ \begin{equation} \begin{aligned} \, \notag \chi_{\varepsilon,\mu} &= \frac{\mathbf{e}+\widehat{l}}{\|\mathbf{e}+\widehat{l}\|} \stackrel{\mathrm{as}}{=} \mathbf{e}+\widehat{l} - \langle \mathbf{e},\widehat{l}\rangle\mathbf{e} - \langle \mathbf{e},\widehat{l}\rangle\widehat{l} - \frac{\|\widehat{l}\|^2}{2}\mathbf{e}+ \frac{3\mathbf{e}}{2}\langle \mathbf{e},\widehat{l}\rangle^2+ R(\widehat{l};3) \\ &=:\mathbf{e}+\widehat{\chi}, \qquad \widehat{r}=\widehat{\rho}+\langle \mathbf{e},\widehat{l}\rangle \widehat{\rho}+R(\widehat{l},\widehat{\rho};3), \qquad \langle \mathbf{e}+\widehat{l}, \widehat{\rho}\rangle=0. \end{aligned} \end{equation} \tag{9.7} $$

Let us find asymptotic equalities with respect to these quantities, which are generated by the main equation (6.2) and the additional constraints (9.1), following from (9.6) and (9.7).

Set

$$ \begin{equation} \begin{gathered} \, \mathcal{J}_1:=\int_{0}^{T}t u( t,\varepsilon,\mu)\,dt= \varepsilon^2\int_{0}^{2/\varepsilon}\tau u( \varepsilon\tau,\varepsilon,\mu)\,d\tau, \\ \mathcal{J}_2:=\int_{0}^{T} (1-e^{-t/\varepsilon}) u( t,\varepsilon,\mu)\,dt= \varepsilon\int_{0}^{2/\varepsilon} (1-e^{-\tau})u( \varepsilon\tau,\varepsilon,\mu)\,d\tau, \end{gathered} \end{equation} \tag{9.8} $$
where we have used (6.8) to change the integration variable in (9.3) to $t=\varepsilon\tau$.

By (9.4)–(9.6) we have

$$ \begin{equation} \begin{aligned} \, \notag u(\varepsilon\tau,\varepsilon,\mu) &=\frac{\varepsilon\tau l_{\varepsilon,\mu,1}+ (1-e^{-\tau})(\varepsilon\widehat{\lambda}l_{\varepsilon,\mu,1}+ \varepsilon\widehat{r})} {\|l_{\varepsilon,\mu,1}\|\cdot\|\varepsilon\tau \chi_{\varepsilon,\mu}+ (1-e^{-\tau})(\varepsilon\widehat{\lambda}\chi_{\varepsilon,\mu}+ \varepsilon\widehat{\rho})\|} \\ \notag &=\frac{\mathbf{e}+\widehat{\chi}+ g(\tau)\widehat{\lambda}(\mathbf{e}+\widehat{\chi})+ g(\tau)\widehat{\rho}}{( (1+g(\tau)\widehat{\lambda})^2+ g^2(\tau)\widetilde{\rho}^2)^{1/2}} \\ \notag &=(\mathbf{e}+\widehat{\chi}+g(\tau)\widehat{\lambda}(\mathbf{e}+ \widehat{\chi})+g(\tau)\widehat{\rho}) (1 - g(\tau)\widehat{\lambda}+ R(g\widehat{\lambda},g\widehat{\rho};2)) \\ &=\mathbf{e}+\widehat{\chi}+g(\tau)\widehat{\rho}+ R(g\widehat{\lambda},g\widehat{\rho};2) + \widehat{\chi}R(g\widehat{\lambda},g\widehat{\rho};2), \end{aligned} \end{equation} \tag{9.9} $$
where
$$ \begin{equation} g(\tau):=\frac{1-e^{-\tau}}{\tau}\stackrel{\mathrm{as}}{=} \begin{cases} 1+O(\tau) & \text{as } \tau\to0, \\ \dfrac1\tau &\text{as } \tau\to+\infty. \end{cases} \end{equation} \tag{9.10} $$
Next, by (9.10) we have
$$ \begin{equation} \int_{0}^{2/\varepsilon}\tau g(\tau)\,d\tau = \int_{0}^{2/\varepsilon}(1-e^{-\tau})\,d\tau =-1+\frac{2}{\varepsilon}+\mathbb{O}, \end{equation} \tag{9.11} $$
$$ \begin{equation} \begin{split} \int_{0}^{2/\varepsilon}\tau g^2(\tau)\,d\tau &=\int_{0}^{1}\frac{(1-e^{-\tau})^2}{\tau}\,d\tau+ \int_{1}^{2/\varepsilon}\frac{(1-e^{-\tau})^2}{\tau}\,d\tau \\ &=\overline{D}+\int_{1}^{2/\varepsilon} \frac{1-2e^{-\tau}+e^{-2\tau}}{\tau}\,d\tau \\ &=\overline{D}+\int_{1}^{2/\varepsilon}\frac{d\tau}{\tau} -2\int_{1}^{2/\varepsilon}\frac{e^{-\tau}}{\tau}\,d\tau +\int_{1}^{2/\varepsilon} \frac{e^{-2\tau}}{\tau}\,d\tau \\ &=\overline{D}+\log\frac{2}{\varepsilon} -2\int_{1}^{+\infty}\frac{e^{-\tau}}{\tau}\,d\tau - 2\int_{+\infty}^{2/\varepsilon}\frac{e^{-\tau}}{\tau}\,d\tau \\ &\qquad+\int_{1}^{+\infty}\frac{e^{-2\tau}}{\tau}\,d\tau + \int_{+\infty}^{2/\varepsilon}\frac{e^{-2\tau}}{\tau}\,d\tau \\ &= D_{2,1} -\log\varepsilon+\log2+\mathbb{O}, \end{split} \end{equation} \tag{9.12} $$
$$ \begin{equation} \begin{split} \int_{0}^{2/\varepsilon}\tau g^k(\tau)\,d\tau &=\int_{0}^{+\infty}\tau g^k(\tau)\,d\tau+ \int_{+\infty}^{2/\varepsilon}\tau g^k(\tau)\,d\tau \\ &=D_{k,1} - \frac{\varepsilon^{k-2}}{(k-2)2^{k-2}}+\mathbb{O}, \qquad k\geqslant 3, \end{split} \end{equation} \tag{9.13} $$
$$ \begin{equation} \begin{gathered} \, \int_{0}^{2/\varepsilon} g(\tau)\,d\tau= D_{1,0} - \log\varepsilon+\log2+\mathbb{O}, \\ \int_{0}^{2/\varepsilon} g^k(\tau)\,d\tau= D_{k,0} - \frac{\varepsilon^{k-1}}{(k-1)2^{k-1}}+ \mathbb{O}, \qquad k\geqslant 2. \end{gathered} \end{equation} \tag{9.14} $$
Here
$$ \begin{equation} \overline{D}:=\int_{0}^{1}\frac{(1-e^{-\tau})^2}{\tau}\,d\tau\quad\text{and} \quad D_{2,1}:= \overline{D}-2\int_{1}^{+\infty}\frac{e^{-\tau}}{\tau}\,d\tau+ \int_{1}^{+\infty}\frac{e^{-2\tau}}{\tau}\,d\tau. \end{equation} \tag{9.15} $$
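These integral asymptotics are easily checked numerically. The following sketch (an illustration only, not part of the argument; it assumes NumPy and SciPy) evaluates the constants in (9.15) and compares both sides of (9.12) for several values of $\varepsilon$:

```python
# Numerical sanity check of (9.12) and (9.15); an illustration only.
# g(tau) = (1 - e^{-tau})/tau, so tau*g(tau)^2 = (1 - e^{-tau})^2/tau,
# computed via expm1 for stability near tau = 0.
import numpy as np
from scipy.integrate import quad

def f(t):  # integrand tau * g(tau)^2
    return (-np.expm1(-t))**2 / t

# Constants from (9.15)
D_bar = quad(f, 0, 1)[0]
E1 = quad(lambda t: np.exp(-t) / t, 1, np.inf)[0]
E2 = quad(lambda t: np.exp(-2 * t) / t, 1, np.inf)[0]
D21 = D_bar - 2 * E1 + E2

for eps in [1e-1, 1e-2, 1e-3]:
    lhs = D_bar + quad(f, 1, 2 / eps, limit=200)[0]   # int_0^{2/eps} tau g^2 dtau
    rhs = D21 - np.log(eps) + np.log(2)               # right-hand side of (9.12)
    print(f"eps={eps:.0e}: lhs={lhs:.8f}  rhs={rhs:.8f}  diff={lhs - rhs:.1e}")
```

The difference decays like the exponentially small remainder $\mathbb{O}$.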
Therefore, changing the integration variable to $\tau=t/\varepsilon$ and recalling (9.8), (9.9) and (9.11)–(9.14), we find that
$$ \begin{equation} \begin{aligned} \, \notag \mathcal{J}_1 &=\varepsilon^2\int_{0}^{2/\varepsilon}\tau \bigl(\mathbf{e}+\widehat{\chi}+g(\tau)\widehat{\rho}+ R(g\widehat{\lambda},g\widehat{\rho};2)+ \widehat{\chi}R(g\widehat{\lambda},g\widehat{\rho};2)\bigr)\,d\tau \\ \notag &=2\mathbf{e}+2\widehat{\chi}+ 2\varepsilon\widehat{\rho}-\varepsilon^2\widehat{\rho}+ \varepsilon^2\log\varepsilon (P_{2,0}(\widehat{\lambda},\widehat{\rho})+ \widehat{\chi}P_{2,1}(\widehat{\lambda},\widehat{\rho})) \\ &\qquad +\varepsilon^2 R(\varepsilon,\widehat{\lambda},\widehat{\rho};2)+ \varepsilon^2\widehat{\chi}R(\varepsilon,\widehat{\lambda},\widehat{\rho};2)+\mathbb{O}. \end{aligned} \end{equation} \tag{9.16} $$

Next, we have $(1-e^{-\tau})g^k(\tau)=\tau g^{k+1}(\tau)$, and so from (9.12) and (9.13) we obtain

$$ \begin{equation} \begin{aligned} \, \notag \mathcal{J}_2 &=\varepsilon\int_{0}^{2/\varepsilon}(1-e^{-\tau}) \bigl(\mathbf{e}+\widehat{\chi}+g(\tau)\widehat{\rho}+ R(g\widehat{\lambda},g\widehat{\rho};2)+\widehat{\chi}R(g\widehat{\lambda},g\widehat{\rho};2) \bigr)\,d\tau \\ \notag &=2\mathbf{e}+2\widehat{\chi} -\varepsilon\mathbf{e} - \varepsilon\widehat{\chi}+\varepsilon \widehat{\rho}(\log2+D_{2,1} -\log\varepsilon) \\ &\qquad +\varepsilon R(\varepsilon,\widehat{\lambda},\widehat{\rho};2)+ \varepsilon \widehat{\chi}R(\varepsilon,\widehat{\lambda},\widehat{\rho};2)+\mathbb{O}. \end{aligned} \end{equation} \tag{9.17} $$

§ 10. Construction of the main asymptotic system

Now, recalling (9.5), (9.7), (9.16) and (9.17), and expressing, in particular, $\widehat{r}$ in terms of $\widehat{\rho}$, we write the main system of equations (9.3) in the asymptotic form:

$$ \begin{equation} \begin{cases} -\widehat{l} - 2\widehat{\chi} - 2\varepsilon\widehat{\rho}= - 2\varepsilon\mathbf{e}+\varepsilon^2\mathbf{e}+2\varepsilon\mu\xi \\ \qquad +\,\varepsilon^2\log\varepsilon( P_{2,0}(\widehat{\lambda},\widehat{\rho})+ \widehat{\chi}P_{2,1}(\widehat{\lambda},\widehat{\rho}))+ \varepsilon^2\mathcal{R}_1(\varepsilon, \widehat{l}, \widehat{\lambda}, \widehat{\rho}, \widehat{\chi}), \\ -\varepsilon \widehat{l} - \varepsilon \widehat{\lambda}\mathbf{e} - 2\widehat{\chi} - \varepsilon\widehat{\rho}D_\varepsilon=\varepsilon\mu\xi -\varepsilon\widehat{\chi}+ \varepsilon\mathcal{R}_2(\varepsilon, \widehat{l}, \widehat{\lambda}, \widehat{\rho}, \widehat{\chi}), \\ \widehat{\chi}=\widehat{l}- \langle \mathbf{e},\widehat{l}\rangle\mathbf{e}+ R(\widehat{l};2), \\ \langle \mathbf{e}+\widehat{l},\widehat{\rho}\rangle=0, \end{cases} \end{equation} \tag{10.1} $$
where
$$ \begin{equation} \begin{gathered} \, \mathcal{R}_1(\varepsilon, \widehat{l}, \widehat{\lambda}, \widehat{\rho}, \widehat{\chi})= R(\varepsilon, \widehat{l},\widehat{\lambda},\widehat{\rho};1) +\widehat{\chi}R(\varepsilon,\widehat{\lambda},\widehat{\rho};2), \\ \mathcal{R}_2(\varepsilon, \widehat{l}, \widehat{\lambda}, \widehat{\rho}, \widehat{\chi})= R(\varepsilon, \widehat{l},\widehat{\lambda},\widehat{\rho};2)+ \widehat{\chi}R(\varepsilon,\widehat{\lambda},\widehat{\rho};2) \end{gathered} \end{equation} \tag{10.2} $$
and
$$ \begin{equation} D_\varepsilon:= 1+\log2+D_{2,1}-\log\varepsilon, \end{equation} \tag{10.3} $$
and the constant $D_{2,1}$ was defined in (9.15).

The first-order approximation system for (10.1) is as follows:

$$ \begin{equation*} \begin{cases} -\widehat{l}_1 - 2\widehat{\chi}_1=- 2\varepsilon\mathbf{e}, \\ 2\widehat{\chi}_1=0, \\ \widehat{\chi}_1=\widehat{l}_1- \langle \mathbf{e},\widehat{l}_1\rangle\mathbf{e}, \\ \langle \mathbf{e},\widehat{\rho}_1\rangle=0. \end{cases} \end{equation*} \notag $$
From this system we find that
$$ \begin{equation} \widehat{l}_1=2\varepsilon\mathbf{e}, \qquad \widehat{\chi}_1=0\quad\text{and} \quad \langle \mathbf{e},\widehat{\rho}_1\rangle=0. \end{equation} \tag{10.4} $$
Recalling (10.4), consider the new unknown small quantities $\widetilde{l}$ and $\widetilde{\chi}$ defined by
$$ \begin{equation} \widehat{l}=2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}\quad\text{and} \quad \widehat{\chi}=\varepsilon\widetilde{\chi}. \end{equation} \tag{10.5} $$
Substituting (10.5) into (10.1) and (10.2) and dividing by $\varepsilon$, we can write system (10.1), (10.2) as
$$ \begin{equation} \begin{cases} -\widetilde{l} - 2\widetilde{\chi} - 2\widehat{\rho}= \varepsilon\mathbf{e}+2\mu\xi + \varepsilon\log\varepsilon(P_{2,0}(\widehat{\lambda},\widehat{\rho})+ \varepsilon\widetilde{\chi}P_{2,1}(\widehat{\lambda},\widehat{\rho})) \\ \qquad +\,\varepsilon\mathcal{R}_1(\varepsilon,2\varepsilon\mathbf{e}+ \varepsilon\widetilde{l}, \widehat{\lambda}, \widehat{\rho}, \varepsilon\widetilde{\chi}), \\ - \widehat{\lambda}\mathbf{e} - \widehat{\rho}D_\varepsilon - 2\widetilde{\chi}= \mu\xi+\varepsilon f_1+\varepsilon\widetilde{l}-\varepsilon\widetilde{\chi}+ \mathcal{R}_2(\varepsilon,2\varepsilon\mathbf{e}+ \varepsilon\widetilde{l}, \widehat{\lambda}, \widehat{\rho}, \varepsilon\widetilde{\chi}), \\ \widetilde{\chi} - \widetilde{l}+ \langle \mathbf{e},\widetilde{l}\rangle\mathbf{e}= R(\varepsilon;1)+\varepsilon R_\varepsilon(\varepsilon,\widetilde{l};1), \\ \langle \mathbf{e},\widehat{\rho}\rangle= -\varepsilon (2 \langle \mathbf{e},\widehat{\rho}\rangle+ \langle \widetilde{l},\widehat{\rho}\rangle), \end{cases} \end{equation} \tag{10.6} $$
where the constant $f_1$ is now known.

Next, we eliminate the auxiliary unknown vector $\widetilde{\chi}$ from (10.6): substituting the expression for $\widetilde{\chi}$ from the third equation into the first and second equations and using equalities (4.2) and (4.3), we obtain

$$ \begin{equation} \begin{cases} -3\widetilde{l}+2\langle \mathbf{e},\widetilde{l}\rangle\mathbf{e} - 2\widehat{\rho}=P_{1,1}(\varepsilon,\mu)+R_1(\varepsilon;2)+ \varepsilon R_2(\varepsilon\,|\, v;1) \\ \qquad +\, \varepsilon\log\varepsilon R_3(\varepsilon, v;2), \\ -2\widetilde{l}+2\langle \mathbf{e},\widetilde{l}\rangle\mathbf{e} - \widehat{\lambda}\mathbf{e} - \widehat{\rho}D_\varepsilon= P_{1,2}(\varepsilon,\mu)+R_4(\varepsilon;2)+ \varepsilon R_5(\varepsilon\,|\, v;1), \\ \langle \mathbf{e},\widehat{\rho}\rangle= -\varepsilon (2 \langle \mathbf{e},\widehat{\rho}\rangle+ \langle \widetilde{l},\widehat{\rho}\rangle), \end{cases} \end{equation} \tag{10.7} $$
where $v=(\widetilde{l}, \widehat{\lambda}, \widehat{\rho})^\top $.

Now system (10.7) assumes the form

$$ \begin{equation} F_\varepsilon v=G(\varepsilon,\mu,v), \end{equation} \tag{10.8} $$
where the linear operator $F_\varepsilon$ depends on $\varepsilon$ only through $\log\varepsilon$.

We claim that this operator is invertible. To prove this we solve the equation $F_\varepsilon v=w$, that is,

$$ \begin{equation} \begin{cases} -3\widetilde{l}+2\langle \mathbf{e},\widetilde{l}\rangle\mathbf{e} - 2\widehat{\rho}=w_1, \\ -2\widetilde{l}+2\langle \mathbf{e},\widetilde{l}\rangle\mathbf{e} - \widehat{\lambda}\mathbf{e} - \widehat{\rho}D_\varepsilon=w_2, \\ \langle \mathbf{e},\widehat{\rho}\rangle=w_3. \end{cases} \end{equation} \tag{10.9} $$

Multiplying the second equation in (10.9) by $\mathbf{e}$ and using the third equation, we obtain

$$ \begin{equation} \widehat{\lambda}=\alpha_1:= - \langle w_2,\mathbf{e}\rangle - D_\varepsilon w_3. \end{equation} \tag{10.10} $$
Next, multiplying the first equation in (10.9) by $\mathbf{e}$ and employing the third equation, we get that
$$ \begin{equation} \langle \mathbf{e},\widetilde{l}\rangle= \alpha_2:= - \langle w_1,\mathbf{e}\rangle - 2 w_3. \end{equation} \tag{10.11} $$
Substituting these scalar quantities from (10.10) and (10.11) into the first and second equations of (10.9), we have
$$ \begin{equation*} - 3\widetilde{l} - 2\widehat{\rho}=w_1-2\alpha_2\mathbf{e}\quad\text{and}\quad -2\widetilde{l} - \widehat{\rho}D_\varepsilon=w_2+(\alpha_1-2\alpha_2)\mathbf{e}, \end{equation*} \notag $$
respectively. Hence
$$ \begin{equation} \widehat{\rho}=\alpha_3:= \frac{1}{4-3D_\varepsilon} (-2w_1+3w_2+(3\alpha_1-2\alpha_2)\mathbf{e}) \end{equation} \tag{10.12} $$
and
$$ \begin{equation} \widetilde{l}=\alpha_4 := - \frac{1}{3}(w_1-2\alpha_2\mathbf{e}+2\alpha_3). \end{equation} \tag{10.13} $$
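As a cross-check of (10.10)–(10.13), the following sketch (an illustration only; it assumes $\mathbf{e}$ is a fixed unit vector of $\mathbb{R}^n$ and takes an arbitrary admissible value of $D_\varepsilon$ with $4-3D_\varepsilon\ne0$) solves (10.9) by these formulas and verifies that all three residuals vanish; note that the third equation of (10.9) then holds automatically:

```python
# Verification sketch for the inversion formulas (10.10)-(10.13); not from
# the paper's text. e is assumed to be a fixed unit vector in R^n.
import numpy as np

def invert_F(w1, w2, w3, e, D):
    """Solve system (10.9); returns (l_tilde, lambda_hat, rho_hat)."""
    a1 = -np.dot(w2, e) - D * w3                                    # (10.10)
    a2 = -np.dot(w1, e) - 2.0 * w3                                  # (10.11)
    rho = (-2 * w1 + 3 * w2 + (3 * a1 - 2 * a2) * e) / (4 - 3 * D)  # (10.12)
    lt = -(w1 - 2 * a2 * e + 2 * rho) / 3.0                         # (10.13)
    return lt, a1, rho

rng = np.random.default_rng(0)
n, D = 4, 7.3                      # any D with 4 - 3D != 0
e = np.zeros(n); e[0] = 1.0        # unit vector
w1, w2, w3 = rng.normal(size=n), rng.normal(size=n), rng.normal()
lt, lam, rho = invert_F(w1, w2, w3, e, D)
print(np.allclose(-3*lt + 2*np.dot(e, lt)*e - 2*rho, w1))           # first eq.
print(np.allclose(-2*lt + 2*np.dot(e, lt)*e - lam*e - D*rho, w2))   # second eq.
print(np.isclose(np.dot(e, rho), w3))                               # third eq.
```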
Set
$$ \begin{equation} W(\varepsilon):= 4-3D_\varepsilon\longrightarrow -\infty \quad\text{as } \varepsilon\to+0. \end{equation} \tag{10.14} $$

Along with the vector of unknowns $v$, we consider the vectors

$$ \begin{equation} \begin{aligned} \, \omega &=\biggl(\varepsilon,\mu,\varepsilon\log\varepsilon,\frac{\varepsilon}{W(\varepsilon)}, \frac{\mu}{W(\varepsilon)}\biggr), \\ \omega_\varepsilon &= \biggl(\varepsilon,\varepsilon\log\varepsilon,\frac{\varepsilon}{W(\varepsilon)}\biggr) \end{aligned} \end{equation} \tag{10.15} $$
of known small quantities. Note that $\|\omega\|\to0$ and $\|\omega_\varepsilon\|\to0$ as $\varepsilon+\mu\to 0$.

Applying the operator $F^{-1}_\varepsilon$ to (10.8) and using (10.10), (10.12), (10.13) and (4.3) we obtain the system of equations

$$ \begin{equation} \begin{gathered} \, v=F_\varepsilon^{-1}G(\varepsilon,\mu,v)=:\overline{G}(\omega,v), \\ \overline{G}(\omega,v)\stackrel{\mathrm{as}}{=} P_1\biggl(\varepsilon, \frac{\varepsilon}{W(\varepsilon)},\mu, \frac{\mu}{W(\varepsilon)}\biggr)+ R(\omega;2)+R(\omega\,|\, v;2). \end{gathered} \end{equation} \tag{10.16} $$
Indeed, in view of (10.7), (10.11)–(10.13) and (4.3), and since
$$ \begin{equation*} \frac{\varepsilon\log\varepsilon}{W(\varepsilon)}=\varepsilon D_3+\frac{\varepsilon D_4}{W(\varepsilon)}, \end{equation*} \notag $$
where $D_3$ and $D_4$ are some known constants, we have, for example, for $\widetilde{l}$,
$$ \begin{equation*} \begin{aligned} \, \widetilde{l} &=P_1(\varepsilon,\mu)+R(\varepsilon;2)+\varepsilon R(\varepsilon\,|\, v;1)+ \varepsilon\log\varepsilon R(\varepsilon, v;2)+\varepsilon R(\widetilde{l},\widehat{\rho};1) \\ &\qquad +\frac{1}{W(\varepsilon)}\bigl(P_1(\varepsilon,\mu)+R(\varepsilon;2)+\varepsilon R(\varepsilon\,|\, v;1)+ \varepsilon\log\varepsilon R(\varepsilon, v;2)+\varepsilon R(\widetilde{l},\widehat{\rho};1)\bigr) \\ &=P_1\biggl(\!\varepsilon, \frac{\varepsilon}{W(\varepsilon)},\mu, \frac{\mu}{W(\varepsilon)}\biggr)+ R(\varepsilon;2)+\varepsilon\log\varepsilon R(\varepsilon;2)\,{+}\, \varepsilon R(\varepsilon\,|\, v;1)\,{+}\,\varepsilon\log\varepsilon R(\varepsilon\,|\, v;2) \\ &\qquad +R(\varepsilon\,|\, \widetilde{l},\widehat{\rho};2)+\frac{\varepsilon}{W(\varepsilon)} (R(\varepsilon;1)+R(\varepsilon\,|\, v;1)+R(\widetilde{l},\widehat{\rho};1)) \\ &\qquad + \biggl(\varepsilon D_3+\frac{\varepsilon D_4}{W(\varepsilon)}\biggr)(R(\varepsilon;2)+ R(\varepsilon\,|\, v;2)) \\ &=P_1\biggl(\varepsilon, \frac{\varepsilon}{W(\varepsilon)},\mu, \frac{\mu}{W(\varepsilon)}\biggr)+ R\biggl(\varepsilon,\varepsilon\log\varepsilon,\frac{\varepsilon}{W(\varepsilon)};2\biggr)+ R(\omega\,|\, v;2) \\ &=P_1\biggl(\varepsilon, \frac{\varepsilon}{W(\varepsilon)},\mu, \frac{\mu}{W(\varepsilon)}\biggr)+ R(\omega;2)+R(\omega\,|\, v;2). \end{aligned} \end{equation*} \notag $$

Remark 8. If Assumption 5 holds, then the resulting asymptotic equality (10.16) holds for $\widetilde{l}, \widehat{\lambda}$ and $ \widehat{\rho}$, in terms of which $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ (the solutions of the original equation (6.2)) can be expressed via (9.1), (9.5) and (10.5). In this case an asymptotic formula for $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ can be obtained from the asymptotic equality (10.16). However, if Assumption 5 is not fulfilled, we cannot assert that the resulting $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ give an asymptotic expansion for the solution of the original equation. So in the next section we verify Assumption 5.

§ 11. Proof of Assumption 5

Recalling (9.1), (9.2), (9.5) and (10.5), consider the new unknown vectors ${\widetilde{l}=\widetilde{l}(\varepsilon,\mu)}$ and $\overline{l}=\overline{l}(\varepsilon,\mu)$ defined by

$$ \begin{equation} l_{\varepsilon,\mu,1}=\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}\quad\text{and} \quad l_{\varepsilon,\mu,2}=\varepsilon \overline{l}. \end{equation} \tag{11.1} $$

Now the main equation (6.2) is written in the new variables as

$$ \begin{equation} \begin{aligned} \, \notag & \begin{pmatrix} -(1+\varepsilon^2)(\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}) - \varepsilon^2 \overline{l} \\ -\varepsilon (\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}) -\varepsilon \overline{l} \end{pmatrix} = \begin{pmatrix} - 3\mathbf{e} - 2\varepsilon \mathbf{e} \\ -2\mathbf{e} \end{pmatrix} \\ &\qquad+\varepsilon\mu\begin{pmatrix} 2 \xi \\ (1-e^{-2/\varepsilon})\xi \end{pmatrix} +\begin{pmatrix} \overline{\mathcal{J}}_1(\varepsilon,\widetilde{l},\overline{l}) \\ \overline{\mathcal{J}}_2(\varepsilon,\widetilde{l},\overline{l}) \end{pmatrix}, \end{aligned} \end{equation} \tag{11.2} $$
where
$$ \begin{equation} \begin{pmatrix} \overline{\mathcal{J}}_1(\varepsilon,\widetilde{l},\overline{l}) \\ \overline{\mathcal{J}}_2(\varepsilon,\widetilde{l},\overline{l}) \end{pmatrix} = \int_{0}^{T} \begin{pmatrix} tI \\ (1-e^{-t/\varepsilon})I\end{pmatrix} \overline{u}( t,\varepsilon,\widetilde{l},\overline{l})\,dt \end{equation} \tag{11.3} $$
and
$$ \begin{equation} \overline{u}( t,\varepsilon,\widetilde{l},\overline{l})=\frac{t (\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l})+ (1-e^{-t/\varepsilon})\varepsilon\overline{l}} {\|t (\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}) +(1-e^{-t/\varepsilon})\varepsilon\overline{l}\|}. \end{equation} \tag{11.4} $$

Note that $l_{\varepsilon,\mu,1}$, $l_{\varepsilon,\mu,2}$ is a solution of the original system (6.2) if and only if $\widetilde{l}$ and $\overline{l}$ in (11.1) satisfy equation (11.2).

Equation (11.2) can be transformed into

$$ \begin{equation} \varepsilon \begin{pmatrix}\, \widetilde{l}\ \ \\ \overline{l}\end{pmatrix} =\begin{pmatrix} \mathcal{G}_1(\varepsilon,\mu,\widetilde{l},\overline{l}) \\ \mathcal{G}_2(\varepsilon,\mu,\widetilde{l},\overline{l}) \end{pmatrix}, \end{equation} \tag{11.5} $$
where
$$ \begin{equation} \begin{aligned} \, \mathcal{G}_1(\varepsilon,\mu,\widetilde{l},\overline{l}) &:= 2\mathbf{e}- \varepsilon^2(\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}+\overline{l})- 2\varepsilon\mu\xi-\overline{\mathcal{J}}_1(\varepsilon,\widetilde{l},\overline{l}) \\ &=2\mathbf{e}-2\varepsilon\mu\xi-\overline{\mathcal{J}}_1(\varepsilon,\widetilde{l},\overline{l})+ O(\varepsilon^2+\varepsilon^2\|\overline{l}\|+\varepsilon^3\|\widetilde{l}\|), \\ \mathcal{G}_2(\varepsilon,\mu,\widetilde{l},\overline{l}) &:= 2\mathbf{e}- \varepsilon(\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l})- \varepsilon\mu(1-e^{-2/\varepsilon})\xi -\overline{\mathcal{J}}_2(\varepsilon,\widetilde{l},\overline{l}) \\ &= 2\mathbf{e}-\varepsilon\mathbf{e}-\varepsilon\mu\xi -\overline{\mathcal{J}}_2(\varepsilon,\widetilde{l},\overline{l})+ O(\varepsilon^2+\varepsilon^2\|\widetilde{l}\|) \quad\text{as } \varepsilon+\mu\to0. \end{aligned} \end{equation} \tag{11.6} $$
Note that (11.5) has a unique solution with $\varepsilon\widetilde{l}=o(1)$ and $\varepsilon\overline{l}= o(1)$ as $\varepsilon+\mu\to0$, and that the functions $\mathcal{G}_1$ and $\mathcal{G}_2$ are continuous with respect to $\widetilde{l}$ and $\overline{l}$ for all fixed positive $\varepsilon$ and $\mu$.

Our aim is to show that this solution satisfies Assumption 5, that is, $\varepsilon\overline{l}=o(\varepsilon)$ as $\varepsilon+\mu\to0$.

Lemma 1. Equation (11.5) has a solution $\widetilde{l}=o(1)$, $\overline{l}=o(1)$ as $\varepsilon+\mu\to0$.

Proof. 1. We estimate the image of the small ball
$$ \begin{equation} \biggl\|\begin{pmatrix} \, \widetilde{l} \ \ \\ \overline{l}\end{pmatrix}\biggr\|\leqslant r=r(\varepsilon,\mu)\to0 \quad\text{as } \varepsilon+\mu\to0 \end{equation} \tag{11.7} $$
under the map
$$ \begin{equation*} \mathcal{G}(\varepsilon,\mu,\widetilde{l},\overline{l}):= \begin{pmatrix} \mathcal{G}_1(\varepsilon,\mu,\widetilde{l},\overline{l}) \\ \mathcal{G}_2(\varepsilon,\mu,\widetilde{l},\overline{l}) \end{pmatrix}. \end{equation*} \notag $$
Changing the integration variable to $t=\varepsilon\tau$ in $\overline{\mathcal{J}}_1$ and $\overline{\mathcal{J}}_2$ in (11.3), we have
$$ \begin{equation} \begin{aligned} \, \overline{\mathcal{J}}_1 &= \varepsilon^2\int_{0}^{2/\varepsilon}\tau \overline{u}(\varepsilon\tau,\varepsilon,\widetilde{l},\overline{l})\,d\tau, \\ \overline{\mathcal{J}}_2 &= \varepsilon\int_{0}^{2/\varepsilon}(1-e^{-\tau})\overline{u}(\varepsilon\tau,\varepsilon,\widetilde{l},\overline{l})\,d\tau. \end{aligned} \end{equation} \tag{11.8} $$
Next, by (11.4), (9.10) and (11.7),
$$ \begin{equation*} \begin{aligned} \, \overline{u}(\varepsilon\tau,\varepsilon,\widetilde{l},\overline{l}) &= \frac{\varepsilon\tau (\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l})+ (1-e^{-\tau})\varepsilon\overline{l}} {\|\varepsilon\tau (\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l})+ (1-e^{-\tau})\varepsilon\overline{l}\|}= \frac{\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}+g(\tau)\overline{l}} {\|\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}+g(\tau)\overline{l}\|} \\ &=(\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}+g(\tau)\overline{l}) (1+4\varepsilon+4\varepsilon^2+2\varepsilon\langle\mathbf{e},\widetilde{l}\rangle+ 4\varepsilon^2\langle\mathbf{e},\widetilde{l}\rangle+\varepsilon^2\|\widetilde{l}\|^2 \\ &\qquad +2g(\tau)\langle\mathbf{e},\overline{l}\rangle +4\varepsilon g(\tau)\langle\mathbf{e},\overline{l}\rangle+g^2(\tau)\|\overline{l}\|^2 +2\varepsilon g(\tau)\langle\widetilde{l},\overline{l}\rangle)^{-1/2} \\ &= (\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}+g(\tau)\overline{l}) (1+4\varepsilon+2\varepsilon\langle\mathbf{e},\widetilde{l}\rangle+ 2g(\tau)\langle\mathbf{e},\overline{l}\rangle+ 4\varepsilon g(\tau)\langle\mathbf{e},\overline{l}\rangle \\ &\qquad +g^2(\tau)\|\overline{l}\|^2+O(\varepsilon^2+\varepsilon^2r+\varepsilon r^2))^{-1/2} \\ &=(\mathbf{e}+2\varepsilon\mathbf{e}+\varepsilon\widetilde{l}+g(\tau)\overline{l}) \biggl(1 -2\varepsilon-\varepsilon\langle\mathbf{e},\widetilde{l}\rangle - g(\tau)\langle\mathbf{e},\overline{l}\rangle - 2\varepsilon g(\tau)\langle\mathbf{e},\overline{l}\rangle \\ &\qquad -\frac{g^2(\tau)}{2}\|\overline{l}\|^2 +3\varepsilon g(\tau)\langle\mathbf{e},\overline{l}\rangle+ \frac32 g^2(\tau)\langle\mathbf{e},\overline{l}\rangle^2+ O(\varepsilon^2+\varepsilon^2r+\varepsilon r^2)\biggr). \end{aligned} \end{equation*} \notag $$
Therefore,
$$ \begin{equation} \begin{aligned} \, \overline{u}(\varepsilon\tau,\varepsilon,\widetilde{l},\overline{l}) &=\mathbf{e}+\varepsilon(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e}) +g(\tau)(\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e}) \nonumber \\ &\qquad +g(\tau)O(\varepsilon r)+g^2(\tau)O(r^2) +O(\varepsilon^2+\varepsilon^2r+\varepsilon r^2). \end{aligned} \end{equation} \tag{11.9} $$
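Expansion (11.9) can also be checked numerically. In the sketch below (an illustration only) $\mathbf{e}$ is taken to be the first basis vector of $\mathbb{R}^3$, and $\widetilde{l}$, $\overline{l}$ are random vectors of norm $r$; the printed differences are of the stated order $g(\tau)O(\varepsilon r)+g^2(\tau)O(r^2)+O(\varepsilon^2+\varepsilon^2 r+\varepsilon r^2)$:

```python
# Numerical illustration of the expansion (11.9); a sketch only.
import numpy as np

rng = np.random.default_rng(1)
n, eps, r = 3, 1e-3, 1e-2
e = np.zeros(n); e[0] = 1.0
g = lambda t: -np.expm1(-t) / t                 # (1 - e^{-t})/t
lt = rng.normal(size=n); lt *= r / np.linalg.norm(lt)
lb = rng.normal(size=n); lb *= r / np.linalg.norm(lb)

for tau in [0.1, 1.0, 10.0, 100.0]:
    num = eps*tau*(e + 2*eps*e + eps*lt) + (-np.expm1(-tau))*eps*lb
    exact = num / np.linalg.norm(num)           # u_bar(eps*tau, eps, lt, lb)
    # right-hand side of (11.9) without the remainder terms
    approx = e + eps*(lt - lt[0]*e) + g(tau)*(lb - lb[0]*e)
    print(f"tau={tau:6.1f}: |exact - approx| = {np.linalg.norm(exact-approx):.2e}")
```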
An appeal to (9.11)–(9.15), (11.8) and (11.9) shows that
$$ \begin{equation} \begin{aligned} \, \overline{\mathcal{J}}_1 &=2\mathbf{e}+2\varepsilon(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e}) +2\varepsilon(\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e}) +O(\varepsilon^2+\varepsilon^2r+\varepsilon r^2), \\ \overline{\mathcal{J}}_2 &=2\mathbf{e}-\varepsilon\mathbf{e}+2\varepsilon(\widetilde{l}- \langle\mathbf{e},\widetilde{l}\rangle\mathbf{e}) +\varepsilon (D_{2,1}-\log\varepsilon+\log 2)(\overline{l}- \langle\mathbf{e},\overline{l}\rangle\mathbf{e}) \\ &\qquad +\varepsilon^2|{\log\varepsilon}|O(r)+O(\varepsilon^2+\varepsilon^2r+\varepsilon r^2). \end{aligned} \end{equation} \tag{11.10} $$
Thus, from (11.6) and (11.10) we obtain
$$ \begin{equation*} \mathcal{G}_1(\varepsilon,\mu,\widetilde{l},\overline{l}) =-2\varepsilon\bigl(\mu\xi+(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e}) +(\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e}) +O(\varepsilon+\varepsilon r+r^2)\bigr) \end{equation*} \notag $$
and
$$ \begin{equation*} \mathcal{G}_2(\varepsilon,\mu,\widetilde{l},\overline{l}) =-\varepsilon\bigl(\mu\xi+2(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e}) +\overline{D}_\varepsilon(\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e}) +O(\varepsilon+\varepsilon|{\log\varepsilon}| r+r^2)\bigr) \end{equation*} \notag $$
as $\varepsilon+\mu\to0$. Here $\overline{D}_\varepsilon:= D_{2,1}-\log\varepsilon+\log 2$. Note that $\overline{D}_\varepsilon=D_\varepsilon-1$ (see (10.3)).

2. Equation (11.5) is equivalent to

$$ \begin{equation} \begin{pmatrix}\,\widetilde{l}\ \ \\ \overline{l}\end{pmatrix}+ \mathcal{F}_\varepsilon \begin{pmatrix}\,\widetilde{l}\ \ \\ \overline{l}\end{pmatrix} = H(\varepsilon,\mu,\widetilde{l},\overline{l}), \end{equation} \tag{11.11} $$
where
$$ \begin{equation} \mathcal{F}_\varepsilon \begin{pmatrix}\,\widetilde{l}\ \ \\ \overline{l}\end{pmatrix} :=\begin{pmatrix} 2(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e})+ 2 (\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e}) \\ 2(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e})+ \overline{D}_\varepsilon(\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e}) \end{pmatrix}, \end{equation} \tag{11.12} $$
and the vector-valued function
$$ \begin{equation} H(\varepsilon,\mu,\widetilde{l},\overline{l}):=\frac1\varepsilon \mathcal{G}(\varepsilon,\mu,\widetilde{l},\overline{l}) + \mathcal{F}_\varepsilon \begin{pmatrix}\,\widetilde{l}\ \ \\ \overline{l}\end{pmatrix}= \mu f+O(\varepsilon+\varepsilon|{\log\varepsilon}|r+r^2) \end{equation} \tag{11.13} $$
is continuous with respect to $\widetilde{l}$ and $\overline{l}$ for all fixed positive $\varepsilon$ and $\mu$. Here $f=(-2\xi,-\xi)^\top $.

3. We claim that the operator $I+\mathcal{F}_\varepsilon$ is invertible. Indeed, consider the system

$$ \begin{equation} \begin{cases} \widetilde{l}+2(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e})+ 2 (\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e})=\omega_1, \\ \overline{l}+2(\widetilde{l}-\langle\mathbf{e},\widetilde{l}\rangle\mathbf{e})+ \overline{D}_\varepsilon(\overline{l}-\langle\mathbf{e},\overline{l}\rangle\mathbf{e})=\omega_2; \end{cases} \end{equation} \tag{11.14} $$
note that by (11.12) systems (11.11) and (11.14) have equal left-hand sides. Multiplying both equations in (11.14) by $\mathbf{e}$ we find that
$$ \begin{equation*} \langle\mathbf{e},\widetilde{l}\rangle=\langle\mathbf{e},\omega_1\rangle=:\beta_1\quad\text{and} \quad \langle\mathbf{e},\overline{l}\rangle=\langle\mathbf{e},\omega_2\rangle=:\beta_2. \end{equation*} \notag $$
Now (11.14) is equivalent to
$$ \begin{equation*} \begin{cases} 3\widetilde{l}+2\overline{l} =\omega_1+2\beta_1\mathbf{e}+2\beta_2\mathbf{e}, \\ 2\widetilde{l}+D_\varepsilon\overline{l} =\omega_2+2\beta_1\mathbf{e}+\overline{D}_\varepsilon\beta_2\mathbf{e}. \end{cases} \end{equation*} \notag $$
From this system we obtain
$$ \begin{equation} \begin{aligned} \, \widetilde{l} &=\frac{2\omega_2- D_\varepsilon\omega_1+2\beta_1(2-D_\varepsilon)\mathbf{e}-2\beta_2\mathbf{e}} {4-3 D_\varepsilon}, \\ \overline{l}&=\frac{2\omega_1- 3\omega_2-2\beta_1\mathbf{e}+3\beta_2\mathbf{e}} {4-3 D_\varepsilon}+\beta_2\mathbf{e}. \end{aligned} \end{equation} \tag{11.15} $$
Therefore, the operator $I+\mathcal{F}_\varepsilon$ is invertible. From (11.15), in view of (10.14) we deduce that
$$ \begin{equation} \widetilde{l}=O(\|\omega_1\|+\|\omega_2\|)\quad\text{and} \quad \overline{l}=O(\|\omega_1\|+\|\omega_2\|). \end{equation} \tag{11.16} $$
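The inversion formulas (11.15) admit the same kind of numerical cross-check (an illustration only; $\mathbf{e}$ is again an assumed fixed unit vector, and $\overline{D}_\varepsilon=D_\varepsilon-1$ as in step 1):

```python
# Verification sketch for (11.15): invert I + F_eps of (11.12) and check
# the residuals of (11.14); e is assumed to be a fixed unit vector in R^n.
import numpy as np

def invert_I_plus_F(o1, o2, e, D):
    """Solve system (11.14); D is D_eps from (10.3), D_bar = D - 1."""
    b1, b2 = np.dot(e, o1), np.dot(e, o2)
    lt = (2*o2 - D*o1 + 2*b1*(2 - D)*e - 2*b2*e) / (4 - 3*D)
    lb = (2*o1 - 3*o2 - 2*b1*e + 3*b2*e) / (4 - 3*D) + b2*e
    return lt, lb

rng = np.random.default_rng(2)
n, D = 4, 9.1
Dbar = D - 1.0
e = np.zeros(n); e[0] = 1.0
o1, o2 = rng.normal(size=n), rng.normal(size=n)
lt, lb = invert_I_plus_F(o1, o2, e, D)
P = lambda x: x - np.dot(e, x)*e                 # projection orthogonal to e
print(np.allclose(lt + 2*P(lt) + 2*P(lb), o1))       # first eq. of (11.14)
print(np.allclose(lb + 2*P(lt) + Dbar*P(lb), o2))    # second eq. of (11.14)
```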

4. By step 3 of the proof, equation (11.11) is equivalent to

$$ \begin{equation} \begin{pmatrix}\,\widetilde{l}\ \ \\ \overline{l}\end{pmatrix} =\widetilde{H}(\varepsilon,\mu,\widetilde{l},\overline{l}), \end{equation} \tag{11.17} $$
where $\widetilde{H}$ is continuous in $\widetilde{l}$ and $\overline{l}$ for all fixed positive $\varepsilon$ and $\mu$, and so by (11.13) and (11.16) we have the estimate
$$ \begin{equation} \widetilde{H}(\varepsilon,\mu,\widetilde{l},\overline{l}) =O(\mu+\varepsilon+\varepsilon|{\log\varepsilon}|r+r^2). \end{equation} \tag{11.18} $$

Consider the map $\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l}):= \widetilde{H}(\varepsilon,\mu,\widetilde{l},\overline{l})$. We claim that $\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})$ maps the ball $B[0,\sqrt{\varepsilon|{\log\varepsilon}|+\mu}]$ (with centre at the origin and radius $\sqrt{\varepsilon|{\log\varepsilon}|+\mu}$) to a subset of the same ball. Indeed, putting $r(\varepsilon,\mu)=\sqrt{\varepsilon|{\log\varepsilon}|+\mu}$ in (11.7), by (11.18) we have

$$ \begin{equation*} \begin{aligned} \, \|\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})\| &\leqslant K_1(\mu+\varepsilon+\varepsilon|{\log\varepsilon}| \sqrt{\varepsilon|{\log\varepsilon}|+\mu}+\varepsilon|{\log\varepsilon}|+\mu) \\ &\leqslant\sqrt{\varepsilon|{\log\varepsilon}|+\mu}\cdot K_2(\sqrt{\varepsilon|{\log\varepsilon}|+\mu}+\varepsilon|{\log\varepsilon}|) \end{aligned} \end{equation*} \notag $$
for some positive constants $K_1$ and $K_2$. Now if $\varepsilon,\mu>0$ are sufficiently small so that $K_2(\sqrt{\varepsilon|{\log\varepsilon}|+\mu}+\varepsilon|{\log\varepsilon}|)\leqslant 1$, then $\|\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})\| \leqslant\sqrt{\varepsilon|{\log\varepsilon}|+\mu}$, which proves the claim.
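To see the threshold behaviour in this inequality, here is a toy numerical check, in which the single constant $K$ is hypothetical and merely stands in for $K_1$ and $K_2$; a ratio below $1$ means that the ball is mapped into itself:

```python
# Toy illustration of the ball-invariance estimate above; K is a
# hypothetical stand-in for the constants K_1, K_2, not a value
# taken from the paper.
import numpy as np

K = 5.0
for eps, mu in [(1e-2, 1e-2), (1e-4, 1e-4), (1e-6, 1e-3), (1e-8, 1e-8)]:
    r = np.sqrt(eps*abs(np.log(eps)) + mu)     # radius of the ball (11.7)
    bound = K*(mu + eps + eps*abs(np.log(eps))*r + eps*abs(np.log(eps)) + mu)
    print(f"eps={eps:.0e}, mu={mu:.0e}: bound/r = {bound/r:.3f}")
```

For $\varepsilon=\mu=10^{-2}$ the ratio still exceeds $1$, while for smaller parameters it drops well below $1$, in line with the requirement that $\varepsilon$ and $\mu$ be sufficiently small.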

Hence by the Schauder–Tikhonov theorem (see, for example, [23], Ch. 16, § 3, Theorem 1) the map $\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})$ has a fixed point in $B[0,\sqrt{\varepsilon|{\log\varepsilon}|+\mu}]$, which is a small solution of equation (11.17), and therefore of (11.5).

Lemma 1 is proved.

By Lemma 1 equation (11.5) has a solution with $\widetilde{l}=o(1)$ and $\overline{l}=o(1)$ as $\varepsilon+\mu\to0$. Since the small solution of (11.5) is unique, it coincides with the fixed point provided by the Schauder–Tikhonov theorem. Hence $\varepsilon\widetilde{l}=o(\varepsilon)$ and $\varepsilon\overline{l}=o(\varepsilon)$ as $\varepsilon+\mu\to0$, so the small vector $l_{\varepsilon,\mu,2}$ in (11.1) satisfies Assumption 5.

§ 12. Construction of the asymptotic expansion for the defining vector in the singular case

12.1.

Note that by Lemma 1 the asymptotic equality (10.16) holds a fortiori for the vector $v_\omega=(\widetilde{l}, \widehat{\lambda}, \widehat{\rho})^\top $, whose components are uniquely determined by

$$ \begin{equation} \varepsilon\widetilde{l}=l_{\varepsilon,\mu,1} - \mathbf{e} -2\varepsilon\mathbf{e}, \qquad l_{\varepsilon,\mu,2}=\varepsilon\widehat{\lambda}l_{\varepsilon,\mu,1}+ \varepsilon\|l_{\varepsilon,\mu,1}\|\widehat{\rho}, \qquad \langle l_{\varepsilon,\mu,1},\widehat{\rho}\rangle=0. \end{equation} \tag{12.1} $$
We also note that $\omega$ is ultimately a function of $\varepsilon$ and $\mu$, so that $v_\omega$ is in fact $v(\varepsilon,\mu)$.

12.2. Proof of Theorem 4

In what follows we frequently omit the arguments $\varepsilon$ and $\mu$ of the vector $v(\varepsilon,\mu)$ and its new approximations.

The series in (10.16) are convergent, and can be estimated using (4.4) and (4.5). For the vector $v$, by (10.16) and (4.4) we have

$$ \begin{equation*} \|v\|=O(\|\omega\|+\|\omega\|^2+\|\omega\|\,\|v\|+\|v\|^2), \end{equation*} \notag $$
that is, for some positive constant $K$
$$ \begin{equation*} \|v\|\leqslant K(\|\omega\|+\|\omega\|^2+\|\omega\|\,\|v\|+\|v\|^2), \end{equation*} \notag $$
so that
$$ \begin{equation*} \|v\|(1-K\|\omega\| - K\|v\|) \leqslant K(\|\omega\|+\|\omega\|^2) \end{equation*} \notag $$
Since $1-K\|\omega\| - K\|v\|\to 1$ as $\varepsilon+\mu\to 0$, we have $\|v\|=O(\|\omega\|)$.

Set

$$ \begin{equation*} v_1=v_1(\varepsilon,\mu):= P_1\biggl(\varepsilon, \frac{\varepsilon}{W(\varepsilon)},\mu, \frac{\mu}{W(\varepsilon)}\biggr). \end{equation*} \notag $$
Now for $\widetilde{v}_2(\varepsilon,\mu):= v(\varepsilon,\mu) - v_1(\varepsilon,\mu)$, from (10.16) and (4.3) we obtain the new equality
$$ \begin{equation*} \widetilde{v}_2 \stackrel{\mathrm{as}}{=} R(\omega;2)+R(\omega\,|\, P_1(\omega)+\widetilde{v}_2;2)= P_2(\omega)+R(\omega;3)+R(\omega\,|\, \widetilde{v}_2;2) \end{equation*} \notag $$
and (similarly to the above) the new estimate
$$ \begin{equation*} \|v-v_1\|=\|\widetilde{v}_2\|=O(\|\omega\|^2). \end{equation*} \notag $$

Using induction, for each $N\in\mathbb{N}$, $N>2$, we obtain the equality

$$ \begin{equation*} \widetilde{v}_N \stackrel{\mathrm{as}}{=} P_N(\omega)+R(\omega;N+1)+R(\omega\,|\, \widetilde{v}_N;2) \end{equation*} \notag $$
and the estimate
$$ \begin{equation*} \biggl\| v-\sum_{i=1}^{N-1}v_i\biggr\|=\|\widetilde{v}_N\|=O(\|\omega\|^N). \end{equation*} \notag $$
We now have
$$ \begin{equation*} \|\omega\|=\varepsilon+\varepsilon |{\log\varepsilon}|+\frac{\varepsilon}{|W(\varepsilon)|}+\mu+ \frac{\mu}{|W(\varepsilon)|}\leqslant K(\varepsilon |{\log\varepsilon}|+\mu)=O(\varepsilon |{\log\varepsilon}|+\mu). \end{equation*} \notag $$
Hence
$$ \begin{equation*} \|\omega\|^N \leqslant K(\varepsilon |{\log\varepsilon}|+ \mu)^N \leqslant K_1 (\varepsilon^N |{\log\varepsilon}|^N+\mu^N)= O(\varepsilon^N |{\log\varepsilon}|^N+\mu^N). \end{equation*} \notag $$
However,
$$ \begin{equation*} \frac{\varepsilon^N |{\log\varepsilon}|^N+\mu^N}{\varepsilon^{N-1}+\mu^{N-1}}= \varepsilon|{\log\varepsilon}|^N \frac{\varepsilon^{N-1}}{\varepsilon^{N-1}+\mu^{N-1}}+ \mu \frac{\mu^{N-1}}{\varepsilon^{N-1}+\mu^{N-1}} \to 0, \end{equation*} \notag $$
and so $O(\|\omega\|^N)=o(\varepsilon^{N-1}+\mu^{N-1})$ as $\varepsilon+\mu\to 0$.
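This convergence is uniform in the ratio of $\mu$ to $\varepsilon$, which a quick numerical sweep (an illustration only, with $N=3$) confirms:

```python
# Numerical illustration that
# (eps^N |log eps|^N + mu^N) / (eps^{N-1} + mu^{N-1}) -> 0 as eps + mu -> 0,
# uniformly over the ratio mu/(eps + mu); here N = 3.
import numpy as np

N = 3
for s in [1e-2, 1e-4, 1e-6, 1e-8]:            # overall smallness eps + mu = s
    worst = 0.0
    for theta in np.linspace(0.0, 1.0, 1001):  # sweep all ratios mu/(eps+mu)
        eps = max(s*(1.0 - theta), 1e-300)     # keep log(eps) finite
        mu = s*theta
        num = eps**N * abs(np.log(eps))**N + mu**N
        den = eps**(N - 1) + mu**(N - 1)
        worst = max(worst, num/den)
    print(f"eps+mu = {s:.0e}: sup over ratios = {worst:.3e}")
```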

Now Theorem 4 follows from (9.5), (9.6) and (12.1).

Using (10.7), (10.10)–(10.15) and (12.1), we find, in particular, an asymptotic representation for the vectors $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ up to order $o(\varepsilon^2+\mu^2)$ as $\varepsilon+\mu\to 0$.

§ 13. Conclusions

A study of an optimal control problem on a fixed time interval for an autonomous linear system with two independent small positive parameters $\varepsilon$ and $\mu$ has been performed, where $\varepsilon$ multiplies some derivatives in the equations of the system and $\mu$ is involved in the initial conditions. The control is subject to smooth geometric constraints in the form of a ball. The quality functional is convex and terminal, and depends only on the values of the slow variables at the terminal time.

An asymptotic formula for the vector defining the optimal control as the small parameters tend independently to zero has been found.

We have also obtained full asymptotic expansions of the solution in the regular case, where the optimal control in the limiting problem is continuous, and in the singular case, with a singularity in the optimal control. It has been shown that in the regular case the solution expands in a power series in $\varepsilon$ and $\mu$, whereas in the singular case the asymptotic behaviour of the solution shows a more involved dependence on $\varepsilon$ and $\mu$ (in both cases, with respect to the standard gauge sequence $\varepsilon^k+\mu^k$) as $\varepsilon+\mu\to0$.


Bibliography

1. L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mishchenko, The mathematical theory of optimal processes, Intersci. Publ. John Wiley & Sons, Inc., New York–London, 1962, viii+360 pp.
2. N. N. Krasovskii, Theory of control of motion: Linear systems, Nauka, Moscow, 1968, 475 pp. (Russian)
3. E. B. Lee and L. Markus, Foundations of optimal control theory, John Wiley & Sons, Inc., New York–London–Sydney, 1967, x+576 pp.
4. A. R. Danilin and A. M. Il'in, “The asymptotics of the solution of a time-optimal problem with perturbed initial conditions”, J. Comput. Syst. Sci. Int., 33:6 (1995), 67–74
5. A. R. Danilin and A. M. Il'in, “The structure of the solution of a perturbed time-optimal control problem”, Fundam. Prikl. Mat., 4:3 (1998), 905–926 (Russian)
6. M. G. Dmitriev and G. A. Kurina, “Singular perturbations in control problems”, Autom. Remote Control, 67:1 (2006), 1–43
7. G. A. Kurina and M. A. Kalashnikova, “Singularly perturbed problems with multi-tempo fast variables”, Autom. Remote Control, 83:11 (2022), 1679–1723
8. È. M. Galeev and V. M. Tikhomirov, A short course on the theory of extremal problems, Moscow University, Moscow, 1989, 204 pp. (Russian)
9. A. M. Il'in and O. O. Kovrizhnykh, “The asymptotic behavior of solutions to systems of linear equations with two small parameters”, Dokl. Math., 69:3 (2004), 336–337
10. A. R. Danilin and O. O. Kovrizhnykh, “Time-optimal control of a small mass point without environmental resistance”, Dokl. Math., 88:1 (2013), 465–467
11. P. V. Kokotovic and A. H. Haddad, “Controllability and time-optimal control of systems with slow and fast modes”, IEEE Trans. Automat. Control, 20:1 (1975), 111–113
12. A. R. Danilin and A. A. Shaburov, “Asymptotics of solutions of linear singularly perturbed optimal control problems with a convex integral performance index and a cheap control”, Differ. Equ., 59:1 (2023), 87–102
13. A. L. Dontchev, Perturbations, approximations and sensitivity analysis of optimal control systems, Lect. Notes Control Inf. Sci., 52, Springer-Verlag, Berlin, 1983, iv+158 pp.
14. A. R. Danilin and O. O. Kovrizhnykh, “On the dependence of the time-optimality problem for a linear system on two small parameters”, Vestn. Chelyab. Gos. Univ. Mat. Mekh. Inform., 2011, no. 27, 46–60 (Russian)
15. R. T. Rockafellar, Convex analysis, Princeton Math. Ser., 28, Princeton Univ. Press, Princeton, NJ, 1970, xviii+451 pp.
16. A. R. Danilin and O. O. Kovrizhnykh, “Asymptotic expansion of the solution to an optimal control problem for a linear autonomous system with a terminal convex quality index depending on slow and fast variables”, Izv. Inst. Mat. Inform. Udmurt. Gos. Univ., 61 (2023), 42–56 (Russian)
17. A. Erdélyi and M. Wyman, “The asymptotic evaluation of certain integrals”, Arch. Ration. Mech. Anal., 14:1 (1963), 217–260
18. H. Cartan, Calcul différentiel, Hermann, Paris, 1967, 178 pp.; Formes différentielles. Applications élémentaires au calcul des variations et à la théorie des courbes et des surfaces, Hermann, Paris, 1967, 186 pp.
19. A. R. Danilin, “Asymptotics of the optimal value of the performance functional for a rapidly stabilizing indirect control in the regular case”, Differ. Equ., 42:11 (2006), 1545–1552
20. A. R. Danilin, “Asymptotic behaviour of bounded controls for a singular elliptic problem in a domain with a small cavity”, Sb. Math., 189:11 (1998), 1611–1642
21. A. M. Il'in and A. R. Danilin, Asymptotic methods in analysis, Fizmatlit, Moscow, 2009, 248 pp. (Russian)
22. A. R. Danilin, “Asymptotic behavior of the optimal cost functional for a rapidly stabilizing indirect control in the singular case”, Comput. Math. Math. Phys., 46:12 (2006), 2068–2079
23. L. V. Kantorovich and G. P. Akilov, Functional analysis, 3rd ed., Nauka, Moscow, 1984, 752 pp.; English transl. of 2nd ed., Pergamon Press, Oxford–Elmsford, NY, 1982, xiv+589 pp.
