Abstract:
An optimal control problem is considered in the class of piecewise continuous controls with smooth geometric constraints on a fixed interval of time, for a linear autonomous system with two small positive independent parameters, one of which, $\varepsilon$, multiplies some derivatives in the equations of the system, while the other, $\mu$, is involved in the initial conditions. The quality functional is convex and terminal, and depends only on the values of the slow variables at the terminal time.
A limit relation as the small parameters tend independently to zero is verified for the vector describing the optimal control.
Two cases are considered: the regular case, when the optimal control in the limiting problem is continuous, and the singular case, when this control has a singularity.
In the regular case the solution is shown to expand in a power series in $\varepsilon$ and $\mu$, while in the singular case the solution is asymptotically represented by an Erdélyi series — in either case the asymptotics is with respect to the standard gauge sequence $\varepsilon^k+\mu^k$, as $\varepsilon +\mu \to0$.
Bibliography: 23 titles.
We study an optimal control problem (see [1]–[3]) for a linear autonomous system with slow and fast variables on a fixed interval of time, in the class of piecewise continuous controls with smooth geometric constraints (see [4]–[7]). The quality functional is convex and terminal (see [8]), and depends only on the values of the slow variables at the terminal time. A particular feature of the problem is that the system contains two small positive parameters, one of which multiplies some derivatives in the equations of the system, while the other is involved in the initial conditions.
Usually, problems with several small parameters are considered when one parameter is subordinate to the other, and the parameter domain is partitioned into parts on each of which a particular asymptotic expansion is written. Arlen Il'in's suggestion was to find asymptotic expansions of solutions of various problems that apply to the whole range of the parameters. In particular, he posed to one of the present authors the problem of finding, for a linear system of two ordinary differential equations with two small parameters multiplying the derivatives, an asymptotic formula that applies equally well to all possible relations between these parameters. The corresponding results were published in [9].
In the early 1990s Il’in initiated the study of the time-optimal problem with control in a ball in Euclidean space and a small perturbation $\varepsilon$ of the initial data such that the limiting problem has a solution with discontinuous control and the control in the perturbed problem is continuous. It was shown in [4] and [5] that the asymptotic behaviour of the solution is described by an Erdélyi series in the power sequence $\varepsilon^k$, whose terms are rational expressions in terms of a certain special function and its logarithm and are not rational functions of $\varepsilon$ and $\log\varepsilon$.
Subsequently, these authors joined forces to obtain full asymptotic expansions of solutions of problems in control theory, including, in particular, ones with two independent small parameters.
At a conference in Chelyabinsk, six months before Il'in's 80th birthday, Nazaikinskii (now a corresponding member of the Russian Academy of Sciences), who was the chairman of the section, asked us the following question concerning our talk: what can be said about the asymptotic behaviour of the solution of the time-optimal problem if the control system does not satisfy Vasil'eva's standard condition (that the matrix at the fast variables has eigenvalues with negative real parts)? Il'in also expressed interest in knowing the answer. Our advance in this direction was a paper treating the control of a point of small mass in a medium without resistance. There, generically, the solution was expanded into an asymptotic Erdélyi series in powers $\varepsilon^{k/2}$, whose terms have an involved form, similar to those in the first papers [4] and [5]. After we had reported on these results at the seminar of the Department of Equations of Mathematical Physics of the Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, on February 7, 2013 Il'in presented our note [10] for publication in the journal Doklady Akademii Nauk (translated into English as Doklady Mathematics); this was one of the last notes presented by Academician Il'in for publication in Doklady Akademii Nauk.
The present paper combines the two areas of research of the authors, namely, involved asymptotic expansions and asymptotics with respect to two independent small parameters.
§ 2. Statement of the problem and defining relations
Consider the indirect control problem for a linear autonomous system
where $x_{\varepsilon,\mu}\in\mathbb{R}^{\mathrm{k}}$, $y_{\varepsilon,\mu}, u_{\varepsilon,\mu}\in\mathbb{R}^\mathrm{m}$; $A$ and $B$ are constant real matrices of an appropriate size; $x_{\varepsilon,\mu}(\,{\cdot}\,;u)$ is the component of the solution of system (2.1) for fixed $u$.
Here and in what follows $\|\cdot\|$ denotes the Euclidean norm on the space under consideration, and $\langle\,\cdot\,{,}\,\cdot\,\rangle$ is the inner product. We use the superscript ‘$\top$’ to denote the transpose of a matrix; for example, $B^\top$ is the transpose of $B$.
Assumption 1. The system in (2.6) is completely controllable; in view of Kalman’s criterion (see [3], § 2.3, Theorem 5) this is equivalent to the condition
$$
\begin{equation}
\sigma\colon \mathbb{R}^{\mathrm{n}} \to \mathbb{R} \text{ is a convex continuously differentiable function on } \mathbb{R}^{\mathrm{n}} .
\end{equation}
\tag{2.8}
$$
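For reference, Kalman's criterion for complete controllability of a pair $(A,B)$ with $A$ a $\mathrm{k}\times\mathrm{k}$ matrix is the standard rank condition (our restatement of the classical criterion, not one of the paper's numbered formulae):

```latex
\operatorname{rank}\bigl(B,\ AB,\ A^{2}B,\ \dots,\ A^{\mathrm{k}-1}B\bigr)=\mathrm{k}.
```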
Let the following assumption hold.
Assumption 3. The system in (2.7) is completely controllable, that is,
Let $\Xi(z^0,T)\subseteq\mathbb{R}^{\mathrm{n}}$ be the attainable set at time $T$ of the system in (2.7). According to [12], § 3, under Assumption 3 problem (2.7) is normal, that is, any two controls taking the initial point $z^0$ to the same boundary point of $\Xi(z^0,T)$ coincide almost everywhere on $[0,T]$. Hence $\Xi(z^0,T)$ is a strictly convex compact set (see, for example, [3], § 2.2, Theorem 3). Therefore, by (2.8) there exists a point $z_{\min}\in \Xi(z^0,T)$ such that
The Pontryagin maximum principle for problem (2.7), (2.8) (see, for example, [8], § 6.1.3) gives a necessary condition for optimality: if $u^{\mathrm{opt}}$ is an optimal control in problem (2.7), then there exists a vector $L\in\mathbb{R}^{\mathrm{n}}$ (the value of the conjugate variable of the maximum principle at $T$) such that the solution $z(\,{\cdot}\,;u^{\mathrm{opt}})$ of the problem
where $C^\top (t) := \mathcal{B}^\top e^{t\mathcal{A}^\top }$.
Let Assumption 3 be met and $L\not=0$ be an arbitrary vector in $\mathbb{R}^\mathrm{n}$. Then ${C^\top (T-t)L}$ has only a finite number of zeros on $[0,T]$. Consider the control $u(\,{\cdot}\,;L)$ defined by
where the control $u(t;L)$ is defined by (2.12), and $z(\,{\cdot}\,;u)$ is the solution of problem (2.9) for the given control $u(\,{\cdot}\,)$. Then $L$ is a unique solution of equation (2.13), and $u(\,{\cdot}\,;L)$ is a unique optimal control $u^{\mathrm{opt}}$ in problem (2.7), (2.8).
Note that the unique $L$ and $u(\,{\cdot}\,;L)$ obtained from (2.13), (2.12) satisfy the Pontryagin maximum principle (2.9)–(2.11).
Remark 1. Equation (2.13) has a nonzero solution if and only if
then any control leading to the point $z_{\mathrm{glmin}}$ is optimal but not extremal, and (2.9)–(2.11) hold for $L=0$ (the degenerate case).
Definition 1. A nonzero vector $L$ satisfying (2.13) is called a defining vector, because by Theorem 1 it defines the unique optimal control in problem (2.7), (2.8) by formula (2.12).
Returning to the original problem (2.1), (2.2), we note that $\sigma(z)\equiv\varphi(x)$. Hence
where $\mathcal{C}_{1,\varepsilon}(t):= [e^{\mathcal{A}_\varepsilon t}\mathcal{B}_\varepsilon]_1$, and $[\,{\cdot}\,]_1$ denotes the first $\mathrm{k}$ rows of this matrix.
In this setting $\breve{l}_{\varepsilon,\mu}$ can naturally be called the defining vector in problem (2.1), (2.2), because it is the only vector which satisfies equation (2.17) and uniquely defines the optimal control by (2.18).
Similarly, under Assumption 1 the vector $l_0\in\mathbb{R}^\mathrm{k}$,
For the function $\varphi$ in (2.2) we have $\nabla \varphi(\zeta)=\zeta$, and so using Cauchy’s formula for a solution of the linear system of differential equations (2.1) (where we substitute $T-t$ for $t$ in the integral) we can write the main equation (2.17) for the vector $\breve{l}_{\varepsilon,\mu}$ as
Our aim here is to find the asymptotic behaviour of the defining vector $l_{\varepsilon,\mu}$ as $\varepsilon+\mu\to 0$.
§ 3. The limit theorem
Let $\Xi_0(x^0,T)$ be the attainable set of the control system (2.6) from a point $x^0$ at time $T$. Following § 3.4.1 in [13], for system (2.3) consider the set
because the set of admissible controls $\mathcal{V}$ in problem (2.1), (2.2) is the unit ball $B[0,1]$ in $\mathbb{R}^\mathrm{m}$.
Let $\mathcal{K}_{\varepsilon,\mu}:=\Xi_\varepsilon(z_\mu^0,T)$ be the attainable set of the control system (2.3) from the initial state $z_\mu^0$ (see (2.5)) at time $T$. According to [3], § 2.2, Theorem 1, and [13], § 3.4.1, $\Xi_0(x^0,T)$, $\mathcal{K}_0$ and $\mathcal{K}_{\varepsilon,\mu}$ are compact sets. In Lemma 3 of [14] it was shown that
Conditions (2.16) and (2.19) are important for the terminal functional $\varphi$ under consideration (see (2.2)). This function is continuously differentiable and strictly convex, and $0\in\mathbb{R}^{\mathrm{k}}$ is the (unique) point of its global minimum. Hence condition (2.14) is equivalent to the condition
The following result is analogous to Proposition 2 in [16] for a problem with two small parameters.
Proposition 1. If condition (3.2) is met, then there exists $\Delta>0$ such that if $\varepsilon$ and $\mu$ satisfy $\varepsilon+\mu<\Delta$ and $z_{\varepsilon,\mu}=(x_{\varepsilon,\mu},y_{\varepsilon,\mu})^\top \in \mathcal{K}_{\varepsilon,\mu}$, then $x_{\varepsilon,\mu}\neq0$, that is, system (2.1) satisfies condition (2.14).
Theorem 2. Let Assumption 1 be fulfilled and (2.19) hold. Then
In the case of two small parameters $\varepsilon$ and $\mu$ there are two standard gauge sequences, $\{\psi_{n,1}(\varepsilon,\mu)\}=\{(\varepsilon+\mu)^n\}$ and $\{\psi_{n,2}(\varepsilon,\mu)\}=\{\varepsilon^n+\mu^n\}$ (that is, $\psi_{n+1,i}(\varepsilon,\mu)/\psi_{n,i}(\varepsilon,\mu)\to 0$ as $\varepsilon+\mu\to 0$).
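These two gauge sequences are in fact equivalent up to constant factors; a standard estimate (our remark, not a numbered formula of the paper) is that for $\varepsilon,\mu\geqslant0$ and $n\geqslant1$

```latex
\varepsilon^n+\mu^n \;\leqslant\; (\varepsilon+\mu)^n \;\leqslant\; 2^{\,n-1}(\varepsilon^n+\mu^n),
```

the left inequality holding because all binomial terms are nonnegative, and the right one by convexity of $t\mapsto t^n$. Hence a remainder estimate with respect to one of the sequences implies one with respect to the other.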
Following Erdélyi’s definition (see [17], Definition 2.4), which applies also to multivariate asymptotic expressions, a series $\sum_{n=0}^{\infty}w_n(\varepsilon,\mu)$ is called an asymptotic expansion as $\varepsilon+\mu\to 0$ of a function $\mathcal{W}(\varepsilon,\mu)$ with respect to a gauge sequence $\psi_n(\varepsilon,\mu)$ if
$$
\begin{equation*}
\mathcal{W}(\varepsilon,\mu) \stackrel{\mathrm{as}}{=} \sum_{n=0}^{\infty}w_n(\varepsilon,\mu) \quad \text{with respect to the gauge sequence } \psi_n(\varepsilon,\mu).
\end{equation*}
\notag
$$
The indication of the gauge sequence $\psi_n(\varepsilon,\mu)$ will be dropped if its form is clear from the context.
We will work with power series of many scalar variables $v_1,\dots,v_n$ with scalar or vector coefficients, that is, each term has the form $\beta v_1^{k_1}\cdots v_n^{k_n}$, $k_j\in\mathbb{N}\cup\{0\}$.
By $R(v_1^{\alpha_1},\dots,v_n^{\alpha_n};i)$ (possibly with indices) we denote power series of $v_1^{\alpha_1},\dots,v_n^{\alpha_n}$ with terms of degree $\geqslant i$, that is, if $\beta (v_1^{\alpha_1})^{k_1}\cdots(v_n^{\alpha_n})^{k_n}$ is a term of the series under consideration, then $\alpha_1 k_1+\dots+\alpha_n k_n\geqslant i$ (we assume that we know the full series at the time of consideration).
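As an illustration of this notation (our example, not a series from the paper), a series denoted $R(\varepsilon,\mu^{1/2};2)$ may contain the terms

```latex
\beta_1\,\varepsilon^2,\qquad \beta_2\,\varepsilon\mu,\qquad \beta_3\,\mu^2 \qquad \Bigl(\text{degrees } 2,\ 1+2\cdot\tfrac12,\ 4\cdot\tfrac12\Bigr),
```

but not $\beta\,\varepsilon\mu^{1/2}$, whose degree $1+\tfrac12=\tfrac32$ is less than $2$.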
Remark 3. A representation of one of the arguments of a power series as a finite-dimensional vector is merely a shorthand for the set of its scalar coordinates in a fixed basis.
where $P_p(v):= P_p(v_1,\dots,v_n)$ are homogeneous polynomials of degree $p$ of $v=(v_1,\dots,v_n)$ with scalar or vector coefficients. In what follows the notation $P_{p,j}(v)$ is also used for various homogeneous polynomials of degree $p$ in the formula under consideration.
Remark 4. Using the concept of a ‘homogeneous polynomial of degree $n$’ of vector arguments (see, for example, [18], Ch. 1, § 6.1), we can define an analogue of a power series with arguments from an arbitrary vector space.
Remark 5. All power series considered in this paper are convergent if their arguments are sufficiently small, that is, they converge in a neighbourhood of the origin.
We note the following relations for power series, which hold for $i,j\geqslant 1$ and can be verified directly by definition:
Remark 6. In general, a power series of $\|\rho\|$ is not a power series of $\rho$.
When justifying asymptotic representations, it will be important to distinguish in a power series groups of independent small quantities and small quantities that eventually turn out to be functions of the former. In this connection we introduce the following notation: we denote by $ R(\omega\,|\,v;i)$ a power series with terms of the form $P_{p_1}(\omega)P_{p_2}(v)$, with the additional condition $p_1+p_2\geqslant i$ and $p_2\geqslant 1$, where $\omega=(\omega_1,\dots,\omega_{n_1})$ and $v=(v_1,\dots,v_{n_2})$.
Remark 7. In view of (4.4) and (4.1) a power series converging in a neighbourhood of zero is an asymptotic expansion of its sum with respect to the standard gauge sequence.
In what follows, by $f_{i,j}$ or $f_i$ we denote constant vectors independent of $\varepsilon$ and $\mu$ (and known at the time of consideration).
§ 5. Asymptotic expansion of the defining vector in the regular case
We first consider the asymptotic behaviour of the defining vector $l_{\varepsilon,\mu}$ under the following condition:
In [19] it was shown that in the case of one small parameter the defining vector has a power-law asymptotic behaviour. However, the approach of [19] must be substantially redesigned in the case of two small parameters. So, instead of searching for the coefficients of an a priori given power series, we will obtain an asymptotic equality with respect to a small vector quantity, from which we will both deduce the form of the asymptotic expansion and justify this expansion.
Theorem 3. Let Assumption 1 be fulfilled and condition (5.1) hold. Then $r_{\varepsilon,\mu}=l_{\varepsilon,\mu} - l_0$ satisfies the asymptotic equality
where $P_n(\varepsilon,\mu)$ is a homogeneous polynomial of degree $n$ in $\varepsilon$ and $\mu$ with vector coefficients from $\mathbb{R}^\mathrm{k}$. In particular, $r_1=\varepsilon f_{1,0}+\mu f_{0,1}$.
§ 6. An example of problem (2.1), (2.2) with a singularity in the optimal control (a singular case)
Consider problem (2.1), (2.2) for $\mathrm{k}=2\mathrm{n}$, $\mathrm{m}=\mathrm{n}$
(that is, $x_{\varepsilon,\mu}\in\mathbb{R}^{2\mathrm{n}}$ and $y_{\varepsilon,\mu},u_{\varepsilon,\mu}\in\mathbb{R}^\mathrm{n}$). Here $I$ is the matrix of the identity map $\mathbb{R}^\mathrm{n}\to\mathbb{R}^\mathrm{n}$.
Note that, in view of Kalman’s criterion, the pair of matrices $A,B$ in (6.1) satisfies Assumption 1, and the pair of matrices $\mathcal{A}_\varepsilon,\mathcal{B}_\varepsilon$ satisfies Assumption 2.
$$
\begin{equation*}
\begin{gathered} \, (I+\varepsilon A)(I+\varepsilon A^\top ) =\begin{pmatrix} I & \varepsilon I \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0\\ \varepsilon I & I \end{pmatrix} =\begin{pmatrix} (1+\varepsilon^2)I & \varepsilon I \\ \varepsilon I & I \end{pmatrix}, \\ e^{At}=\begin{pmatrix} I & t I\\ 0 & I \end{pmatrix}. \end{gathered}
\end{equation*}
\notag
$$
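Since $A$ here is nilpotent of index two (the block matrix $\bigl(\begin{smallmatrix}0&I\\0&0\end{smallmatrix}\bigr)$ squares to zero), the exponential series terminates, $e^{At}=I+At$, which is exactly the matrix displayed above. The following is a quick numerical sanity check for the scalar instance $\mathrm{n}=1$ (a sketch of ours; the helper functions are not from the paper):

```python
def mat_mul(X, Y):
    """Product of two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, t, terms=20):
    """Truncated exponential series exp(At) = sum_k (At)^k / k!."""
    n = len(A)
    At = [[t * a for a in row] for row in A]
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # (At)^0 = I
    total = [row[:] for row in term]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, At)]  # (At)^k / k!
        total = [[total[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return total

# n = 1 instance of the block matrix A = ((0, I), (0, 0)):
A = [[0.0, 1.0], [0.0, 0.0]]
t = 0.7
E = mat_exp(A, t)
# A is nilpotent, so exp(At) = I + At = ((1, t), (0, 1)):
assert all(abs(E[i][j] - [[1.0, t], [0.0, 1.0]][i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The same termination of the series is what makes $e^{At}$ polynomial in $t$ for any $\mathrm{n}$.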
We claim that we can choose an initial vector $x^0$ so that the denominator in the expression (2.21) for the optimal control in the limiting problem has a unique zero of the first order at the initial point $t=0$; this is equivalent to the relations
Assumption 4. Assume that $\|\xi\|=1$ and $\xi\nparallel \mathbf{e}$.
Theorem 4. Let Assumption 4 be fulfilled and (6.8) and (6.9) hold. Then, as $\varepsilon+\mu\to 0$, the vector $l_{\varepsilon,\mu}$ expands in an asymptotic series with respect to the standard gauge sequence $\{\varepsilon^k+\mu^k\}$. This is a power series in the small vector
where $W(\varepsilon)=K+3\log\varepsilon$ for some known constant $K$, and the $P_k(\omega_{\varepsilon,\mu})$ are homogeneous polynomials of degree $k$ in the components of the vector $\omega_{\varepsilon,\mu}$ with vector coefficients from $\mathbb{R}^{2\mathrm{n}}$.
In particular, the asymptotic representation of the vectors $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ up to $o(\varepsilon^2+\mu^2)$ has the form
The optimal control is unique, and therefore its asymptotic expansion can be obtained by substituting the above series into (2.18) and using (2.24) and (12.1).
1. According to [3], § 2.2, Corollary 1, the control (2.12) is extremal, that is, $z_L:= z(T;u(\,{\cdot}\,;L))\in \partial \Xi(z^0,T)$ and $L$ is the outward normal vector to the support hyperplane to the set $\Xi(z^0,T)$ at $z_L$.
Note that if $\sigma(\widetilde{z})=C$, where $C$ satisfies the condition from (7.1), then $\mathrm{int}\mathcal{M}_C$ is a nonempty convex set. Let $\nabla\sigma(\widetilde{z})$ be the outward normal vector (relative to $\mathcal{M}_C$) to the (unique) support hyperplane to the set $\mathcal{M}_C$ at the point $\widetilde{z}$.
Consider an arbitrary nonzero vector $L$.
If $z_L\not=z_{\min}$, then $\sigma(z_L) > \sigma_{\min}$ and $\mathrm{int}\mathcal{M}_{\sigma(z_L)}\cap \Xi(z^0,T) \neq\varnothing$. Hence the (unique) support hyperplane to $\mathcal{M}_{\sigma(z_L)}$ at $z_L$ does not separate $\mathcal{M}_{\sigma(z_L)}$ from $\Xi(z^0,T)$. Hence the vectors $\nabla \sigma(z_L)$ and $L$ are not oppositely directed, and, as a result, $L$ does not satisfy (2.13). So if $L\neq0$ satisfies (2.13), then $z_L=z_{\min}$ and $\nabla \sigma(z_{\min})\neq 0$.
2. Consider now the point $z_{\min}$.
Since $\nabla \sigma(z_{\min})\not=0$, we have $\mathrm{int}\mathcal{M}_{\sigma_{\min}}\not=\varnothing$ and $\mathrm{int}\mathcal{M}_{\sigma_{\min}}\cap \Xi(z^0,T)=\varnothing$. Hence the separation theorem for convex sets applies. Here the unique hyperplane separating $\mathcal{M}_{\sigma_{\min}}$ from $\Xi(z^0,T)$ is orthogonal to $\nabla \sigma(z_{\min})$. Now, if a nonzero vector $L$ is directed oppositely to $\nabla\sigma(z_{\min})$, then $z(T;u(\,{\cdot}\,;L))=z_{\min}$.
If two nonzero vectors $L_1$ and $L_2$ are directed oppositely to $\nabla \sigma(z_{\min})$, then $L_1$ and $L_2$ have the same direction, that is, there exists $\lambda>0$ such that $L_1=\lambda L_2$. Hence $u(\,{\cdot}\,;L_1)=u(\,{\cdot}\,;L_2)$ by (2.12).
To complete the proof we note that, of the nonzero vectors directed oppositely to $\nabla\sigma(z_{\min})$ only one ($L=-\nabla \sigma(z_{\min})$) satisfies equation (2.13).
Assume for a contradiction that for each $\Delta > 0$ there exist $\varepsilon_{\Delta}$ and $\mu_{\Delta}$ such that $\varepsilon_{\Delta}+\mu_{\Delta}\,{<}\,\Delta$ and $z_{\varepsilon_{\Delta},\mu_{\Delta}}\,{=}\, (0,y_{\varepsilon_{\Delta},\mu_{\Delta}})^\top \in \mathcal{K}_{\varepsilon_{\Delta},\mu_{\Delta}}$.
Taking $\Delta_n=1/n$, we find that $\varepsilon_n+\mu_n<1/n\to0$ as $n\to+\infty$ and $z_n:=(0,y_{\varepsilon_n,\mu_n})^\top\!\! \in\! \mathcal{K}_{\varepsilon_n,\mu_n}$. Note that the first $\mathrm{k}$ coordinates of any partial limit ${\overline{z}\!=\!(\overline{x},\overline{y})^\top}$ of $\{z_n\}$ are zero, that is, $\overline{x}=0$, and $\overline{z}\in\mathcal{K}_0$ (see [12], Assertion 3, for $\varepsilon$ replaced by ${\varepsilon+\mu}$). However, in view of (3.1) this contradicts condition (3.2).
First we note that by (2.16) and (2.24) the vector $l_{\varepsilon,\mu}$ is nonzero and bounded as $\varepsilon+\mu\to 0$. Let $\widetilde{l}_0$ be an arbitrary limit point of $ l_{\varepsilon,\mu}$, that is, there exist $\{\varepsilon_n\}$ and $\{\mu_n\}$ such that $\varepsilon_n+\mu_n \to 0$ and $\widetilde{l}_n:= l_{\varepsilon_n,\mu_n}\to \widetilde{l}_0$.
Set $\widehat{l}_n:= \widetilde{l}_n/\|\widetilde{l}_n\|$. Let $\widehat{l}_0$ be an arbitrary limit point of $\widehat{l}_n$. We can assume without loss of generality that $\widehat{l}_n\to \widehat{l}_0$.
After replacing $l_{\varepsilon,\mu}$ by $\widetilde{l}_n$ the terms outside the integral in (2.25) have limits as ${n\to+\infty}$, and so the integral has a finite limit. Therefore,
Let us find this limit. We split $J_n$ into the integrals over $[0,\sqrt{\varepsilon_n}]$ and $[\sqrt{\varepsilon_n}, T]$. The integrand is uniformly bounded, so
Next, the integrand in (7.3) is positively homogeneous in the vector $\widetilde{l}_n$, and so $ J_{n,0}(\widetilde{l}_n)=J_{n,0}(\widehat{l}_n)$. But according to Lemma 4 in [14],
Hence $\widetilde{l}_0\not=0$ in view of condition (2.19) and since $J_{0,0}(\widehat{l}_0)=J_{0,0}(\widetilde{l}_0) $. Thus, $\widetilde{l}_0$ satisfies equation (2.23), which is uniquely solvable by Theorem 1. Hence $\widetilde{l}_0=l_0$. The defining vector $l_{\varepsilon,\mu}$ has a unique limit point $l_0$ as $\varepsilon+\mu\to 0$, and so we eventually have $l_{\varepsilon,\mu}\longrightarrow l_0$ as $\varepsilon+\mu\to 0$.
which is small by Theorem 2. We find an asymptotic equality (with respect to $\varepsilon$, $\mu$ and $r$) generated by the main equation (2.25).
Note that the integrand in (2.25) has substantially different asymptotic expansions in $\varepsilon$ for large $t$ ($t\geqslant\varepsilon^q$, $q\in(0,1)$) and small $t\in(0,\varepsilon^q)$. So to find the asymptotic behaviour of the integral in (2.25) it is natural to use the auxiliary parameter method (see, for example, [20], [21], § 30.II, and [22]).
and $\mathcal{F}(\nu,r)$ is a series which will not be involved in the resulting asymptotic expression for $J$ by virtue of the auxiliary parameter method.
Since $\varepsilon\tau\leqslant \nu$ is small, we can expand $e^{A\varepsilon\tau}$ as a Taylor series in powers of $\varepsilon\tau$. Now from (8.3) we obtain
Consider the second term in (8.5). The integration of the series under the integral sign involves the integration of the coefficients of its terms, which have the form
Since $I+\mathcal{A} > 0$, this operator is invertible, and so, applying the operator $(I+\mathcal{A})^{-1}$ to the previous asymptotic equality, we obtain the main asymptotic equality
where $\mathbb{O}$ is the asymptotic zero with respect to the power-law asymptotic sequence, that is, for each $\gamma>0$ we have $\mathbb{O}=o(\varepsilon^{\gamma})$ as $\varepsilon\to 0$, and where
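A typical instance of such an asymptotic zero (our illustration) is an exponentially small quantity:

```latex
\mathbb{O}=e^{-1/\varepsilon},\qquad e^{-1/\varepsilon}=o(\varepsilon^{\gamma})\quad(\varepsilon\to0)\ \ \text{for every fixed }\gamma>0,
```

so such a term is negligible against every power of $\varepsilon$.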
Let us find asymptotic equalities with respect to these quantities, which are generated by the main equation (6.2) and the additional constraints (9.1), following from (9.6) and (9.7).
Now, recalling (9.5), (9.7), (9.16) and (9.17), and expressing, in particular, $\widehat{r}$ in terms of $\widehat{\rho}$, we write the main system of equations (9.3) in the asymptotic form:
Next, eliminating the auxiliary unknown vector $\widetilde{\chi}$ from (10.6) by substituting the expression for $\widetilde{\chi}$ from the third equation into the first and second, and using equalities (4.2) and (4.3), we have
Remark 8. If Assumption 5 holds, then the resulting asymptotic equality (10.16) holds for $\widetilde{l}, \widehat{\lambda}$ and $ \widehat{\rho}$, in terms of which $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ (the solutions of the original equation (6.2)) can be expressed via (9.1), (9.5) and (10.5). In this case an asymptotic formula for $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ can be obtained from the asymptotic equality (10.16). However, if Assumption 5 is not fulfilled, we cannot assert that the resulting $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ give an asymptotic expansion for the solution of the original equation. So in the next section we verify Assumption 5.
Recalling (9.1), (9.2), (9.5) and (10.5), consider the new unknown vectors ${\widetilde{l}=\widetilde{l}(\varepsilon,\mu)}$ and $\overline{l}=\overline{l}(\varepsilon,\mu)$ defined by
Note that $l_{\varepsilon,\mu,1}$, $l_{\varepsilon,\mu,2}$ is a solution of the original system (6.2) if and only if $\widetilde{l}$ and $\overline{l}$ in (11.1) satisfy equation (11.2).
Note that (11.5) has a unique solution $\varepsilon\widetilde{l}=o(1)$, $\varepsilon\overline{l}= o(1)$ as $\varepsilon+\mu\to0$, and the functions $\mathcal{G}_1$ and $\mathcal{G}_2$ are continuous with respect to $\widetilde{l}$ and $\overline{l}$ for all fixed positive $\varepsilon$ and $\mu$.
Our aim is to show that $\varepsilon\overline{l}=o(1)$ satisfies Assumption 5, that is, $\varepsilon\overline{l}=o(\varepsilon)$ as $\varepsilon+\mu\to0$.
Lemma 1. Equation (11.5) has a solution $\widetilde{l}=o(1)$, $\overline{l}=o(1)$ as $\varepsilon+\mu\to0$.
as $\varepsilon+\mu\to0$. Here $\overline{D}_\varepsilon:= D_{2,1}-\log\varepsilon+\log 2$. Note that $\overline{D}_\varepsilon=D_\varepsilon-1$ (see (10.3)).
where $\widetilde{H}$ is continuous in $\widetilde{l}$ and $\overline{l}$ for all fixed positive $\varepsilon$ and $\mu$, and so by (11.13) and (11.16) we have the estimate
Consider the map $\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l}):= \widetilde{H}(\varepsilon,\mu,\widetilde{l},\overline{l})$. We claim that $\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})$ maps the ball $B[0,\sqrt{\varepsilon|{\log\varepsilon}|+\mu}]$ (with centre at the origin and radius $\sqrt{\varepsilon|{\log\varepsilon}|+\mu}$) to a subset of the same ball. Indeed, putting $r(\varepsilon,\mu)=\sqrt{\varepsilon|{\log\varepsilon}|+\mu}$ in (11.7), by (11.18) we have
for some positive constants $K_1$ and $K_2$. Now if $\varepsilon,\mu>0$ are sufficiently small so that $K_2(\sqrt{\varepsilon|{\log\varepsilon}|+\mu}+\varepsilon|{\log\varepsilon}|)\leqslant 1$, then $\|\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})\| \leqslant\sqrt{\varepsilon|{\log\varepsilon}|+\mu}$, which proves the claim.
Hence by the Schauder–Tikhonov theorem (see, for example, [23], Ch. 16, § 3, Theorem 1) the map $\mathcal{H}_{\varepsilon,\mu}(\widetilde{l},\overline{l})$ has a fixed point in $B[0,\sqrt{\varepsilon|{\log\varepsilon}|+\mu}]$, which is a small solution of equation (11.17), and therefore of (11.5).
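The ball-invariance argument above is the standard way to apply a fixed-point theorem: a continuous map sending a closed ball into itself has a fixed point there. A one-dimensional toy illustration (our example; the map $H$ below is hypothetical and is not the paper's operator $\mathcal{H}_{\varepsilon,\mu}$):

```python
import math

def H(w):
    # Hypothetical continuous map; |H(w)| <= 0.5 + 0.1 = 0.6 for all w.
    return 0.5 * math.sin(w) + 0.1

r = 0.7  # radius of the ball B[0, r]

# H maps B[0, r] into itself (checked on a grid), so a fixed point exists.
assert all(abs(H(-r + 2 * r * i / 100)) <= r for i in range(101))

# This H happens to be a contraction as well, so simple iteration
# locates the fixed point.
w = 0.0
for _ in range(100):
    w = H(w)
assert abs(H(w) - w) < 1e-12
```

In the paper the ball radius shrinks with $\varepsilon$ and $\mu$, and uniqueness of the small solution is argued separately; the Schauder–Tikhonov theorem itself provides only existence.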
Since the small solution of (11.5) is unique, it coincides with the fixed point provided by the Schauder–Tikhonov theorem, that is, with the solution from Lemma 1. Hence $\widetilde{l}=o(1)$ and $\overline{l}=o(1)$, and therefore $\varepsilon\overline{l}=o(\varepsilon)$ as $\varepsilon+\mu\to0$, so the small vector $l_{\varepsilon,\mu,2}$ in (11.1) satisfies Assumption 5.
§ 12. Construction of the asymptotic expansion for the defining vector in the singular case
12.1.
Note that by Lemma 1 the asymptotic equality (10.16) holds a fortiori for the vector $v_\omega=(\widetilde{l}, \widehat{\lambda}, \widehat{\rho})^\top $, whose components are uniquely defined by
Using (10.7), (10.10)–(10.15) and (12.1), we find, in particular, an asymptotic representation for the vectors $l_{\varepsilon,\mu,1}$ and $l_{\varepsilon,\mu,2}$ up to order $o(\varepsilon^2+\mu^2)$ as $\varepsilon+\mu\to 0$.
§ 13. Conclusions
A study of an optimal control problem on a fixed time interval for an autonomous linear system with two independent small positive parameters $\varepsilon$ and $\mu$ has been performed, where $\varepsilon$ multiplies some derivatives in the equations of the system and $\mu$ is involved in the initial conditions. The control is subject to smooth geometric constraints in the form of a ball. The quality functional is convex and terminal, and depends only on the values of the slow variables at the terminal time.
An asymptotic formula for the vector defining the optimal control as the small parameters tend independently to zero has been found.
We have also obtained full asymptotic expansions of the solution in the regular case, where the optimal control in the limiting problem is continuous, and in the singular case, with a singularity in the optimal control. It has been shown that in the regular case the solution expands in a power series in $\varepsilon$ and $\mu$, whereas in the singular case the asymptotic behaviour of the solution shows a more involved dependence on $\varepsilon$ and $\mu$ (in both cases with respect to the standard gauge sequence $\varepsilon^k+\mu^k$) as $\varepsilon+\mu\to0$.
Bibliography
1. L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mishchenko, The mathematical theory of optimal processes, Intersci. Publ. John Wiley & Sons, Inc., New York–London, 1962, viii+360 pp.
2. N. N. Krasovskii, Theory of control of motion: Linear systems, Nauka, Moscow, 1968, 475 pp. (Russian)
3. E. B. Lee and L. Markus, Foundations of optimal control theory, John Wiley & Sons, Inc., New York–London–Sydney, 1967, x+576 pp.
4. A. R. Danilin and A. M. Il'in, “The asymptotics of the solution of a time-optimal problem with perturbed initial conditions”, J. Comput. Syst. Sci. Int., 33:6 (1995), 67–74
5. A. R. Danilin and A. M. Il'in, “The structure of the solution of a perturbed time-optimal control problem”, Fundam. Prikl. Mat., 4:3 (1998), 905–926 (Russian)
6. M. G. Dmitriev and G. A. Kurina, “Singular perturbations in control problems”, Autom. Remote Control, 67:1 (2006), 1–43
7. G. A. Kurina and M. A. Kalashnikova, “Singularly perturbed problems with multi-tempo fast variables”, Autom. Remote Control, 83:11 (2022), 1679–1723
8. È. M. Galeev and V. M. Tikhomirov, A short course on the theory of extremal problems, Moscow University, Moscow, 1989, 204 pp. (Russian)
9. A. M. Il'in and O. O. Kovrizhnykh, “The asymptotic behavior of solutions to systems of linear equations with two small parameters”, Dokl. Math., 69:3 (2004), 336–337
10. A. R. Danilin and O. O. Kovrizhnykh, “Time-optimal control of a small mass point without environmental resistance”, Dokl. Math., 88:1 (2013), 465–467
11. P. V. Kokotovic and A. H. Haddad, “Controllability and time-optimal control of systems with slow and fast modes”, IEEE Trans. Automat. Control, 20:1 (1975), 111–113
12. A. R. Danilin and A. A. Shaburov, “Asymptotics of solutions of linear singularly perturbed optimal control problems with a convex integral performance index and a cheap control”, Differ. Equ., 59:1 (2023), 87–102
13. A. L. Dontchev, Perturbations, approximations and sensitivity analysis of optimal control systems, Lect. Notes Control Inf. Sci., 52, Springer-Verlag, Berlin, 1983, iv+158 pp.
14. A. R. Danilin and O. O. Kovrizhnykh, “On the dependence of the time-optimality problem for a linear system on two small parameters”, Vestn. Chelyab. Gos. Univ. Mat. Mekh. Inform., 2011, no. 27, 46–60 (Russian)
15. R. T. Rockafellar, Convex analysis, Princeton Math. Ser., 28, Princeton Univ. Press, Princeton, NJ, 1970, xviii+451 pp.
16. A. R. Danilin and O. O. Kovrizhnykh, “Asymptotic expansion of the solution to an optimal control problem for a linear autonomous system with a terminal convex quality index depending on slow and fast variables”, Izv. Inst. Mat. Inform., Udmurt. Gos. Univ., 61 (2023), 42–56 (Russian)
17. A. Erdélyi and M. Wyman, “The asymptotic evaluation of certain integrals”, Arch. Ration. Mech. Anal., 14:1 (1963), 217–260
18. H. Cartan, Calcul différentiel, Hermann, Paris, 1967, 178 pp.; Formes différentielles. Applications élémentaires au calcul des variations et à la théorie des courbes et des surfaces, Hermann, Paris, 1967, 186 pp.
19. A. R. Danilin, “Asymptotics of the optimal value of the performance functional for a rapidly stabilizing indirect control in the regular case”, Differ. Equ., 42:11 (2006), 1545–1552
20. A. R. Danilin, “Asymptotic behaviour of bounded controls for a singular elliptic problem in a domain with a small cavity”, Sb. Math., 189:11 (1998), 1611–1642
21. A. M. Il'in and A. R. Danilin, Asymptotic methods in analysis, Fizmatlit, Moscow, 2009, 248 pp. (Russian)
22. A. R. Danilin, “Asymptotic behavior of the optimal cost functional for a rapidly stabilizing indirect control in the singular case”, Comput. Math. Math. Phys., 46:12 (2006), 2068–2079
23. L. V. Kantorovich and G. P. Akilov, Functional analysis, 3rd ed., Nauka, Moscow, 1984, 752 pp.; English transl. of 2nd ed., Pergamon Press, Oxford–Elmsford, NY, 1982, xiv+589 pp.
Citation:
A. R. Danilin, O. O. Kovrizhnykh, “Asymptotics of a solution to a terminal control problem with two small parameters”, Sb. Math., 216:8 (2025), 1092–1120