Sbornik: Mathematics, 2025, Volume 216, Issue 4, Pages 515–537
DOI: https://doi.org/10.4213/sm10109e
 

Optimal recovery of a solution of a system of linear differential equations from initial information given with a random error

I. S. Maksimova^a, K. Yu. Osipenko^{b,c}

a Faculty of Physical, Mathematical and Natural Sciences, RUDN University, Moscow, Russia
b Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia
c Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russia
Abstract: The problem of the optimal recovery of a solution of a system of linear differential equations from initial information containing a random error is considered. Optimal methods are sought among all possible (not necessarily linear) recovery methods. Depending on the given variance of the random errors, the optimal recovery methods constructed in the paper, which turn out to be linear, use only part of the available information.
Bibliography: 17 titles.
Keywords: optimal recovery, systems of linear differential equations, information with random errors.
Received: 19.04.2024 and 06.09.2024
Published: 17.06.2025
Document Type: Article
MSC: Primary 34A45, 41A65, 42A05; Secondary 41A46
Language: English
Original paper language: Russian

§ 1. Introduction

In the general statement of the recovery problem one must find the values of a fixed functional or operator on functions from certain classes using incomplete information about these functions. The classes are usually defined in terms of the smoothness or analyticity properties of the functions forming them. Usually, local or individual information consists of certain characteristics of a function which are available to us (the values of the function at some points, its moments, Fourier or Taylor coefficients, its Fourier transform and so on). This information can be prescribed with an error, deterministic or random.

To date, optimal methods have been found for various recovery problems in a considerable number of papers. Problems with deterministic errors were considered, for instance, in [1]–[9].

Problems with random errors were investigated in [10]–[16]. In [11] recovery methods were estimated on the basis of linear functionals, and in [12] an estimate for a nonlinear recovery method was found in terms of estimates for linear methods.

The problem of estimating the error of a recovery method when the data are perturbed by a normally distributed random variable was considered in [13], where inequalities for the minimax nonlinear risk were obtained.

For a solution of a system of linear homogeneous equations the recovery problem was considered in [17], but there the error in the data was deterministic.

This paper is concerned with the construction of optimal recovery methods for a solution of a system of linear homogeneous equations from initial information known with a random error. We reproduce the statement of the problem from [14] and use a number of ideas from there to prove our general result in the finite-dimensional case. We consider various ways of prescribing the information: the problem is solved under the assumptions that the initial point lies in an ellipsoid and its coordinates at the initial moment of time are known with a random error. We must recover the solution at time $\tau > 0$. We also consider the problem when a solution is known with a random error at some moment of time $t=T_1$, and we must recover it at another moment of time $\tau$, $0<\tau< T_1$.

The general result is also applied to the problem of the recovery of the $k$th derivative of a trigonometric polynomial from its coefficients, which are known with a random error.

In these problems we do not limit ourselves to random variables with normal distribution. We consider arbitrary distributions of a random vector with fixed expectation and a fixed estimate for the variance. As in problems with deterministic errors, here we discover such phenomena as the linearity of the optimal method and the possibility of using only part of the available measurement information.

§ 2. Statement of the optimal recovery problem for a solution of a system of linear differential equations

Consider the Cauchy problem for a system of linear differential equations

$$ \begin{equation} \begin{cases} \dfrac{dx}{dt}=Ax,\\ x(0)=x_0, \end{cases} \end{equation} \tag{2.1} $$
where $x(t)\in\mathbb R^n$, $t\geqslant0$, and
$$ \begin{equation*} A=\begin{pmatrix} a_{11} & a_{12} &\dots & a_{1n}\\ \vdots & \vdots &\dots & \vdots\\ a_{n1} & a_{n2} &\dots & a_{nn} \end{pmatrix}, \qquad a_{ij}\in\mathbb R. \end{equation*} \notag $$

Assume that the matrix $A$ is selfadjoint, and let

$$ \begin{equation*} \mu_1,\ \mu_2,\ \dots,\ \mu_n \end{equation*} \notag $$
be its eigenvalues. Let $\{e_j\}_{j=1}^n$ denote the orthonormal basis of eigenvectors corresponding to the eigenvalues $\mu_j$, $j=1,\dots,n$. Let
$$ \begin{equation*} x_0=\sum_{j=1}^nx_je_j. \end{equation*} \notag $$
Then the solution of (2.1) can be expressed by
$$ \begin{equation*} x(t)=\sum_{j=1}^ne^{\mu_jt}x_je_j. \end{equation*} \notag $$

Assume that we know the coordinates of the initial point $x_0$ with a random error. Moreover, we know some ellipsoid containing the point $x_0$. We must recover the solution at time $\tau>0$.

We turn to the precise statement of the problem. For $x=(x_1,\dots,x_n)\in\mathbb R^n$ set

$$ \begin{equation*} W=\biggl\{x\in\mathbb R^n\colon \sum_{j=1}^n\nu_jx_j^2\leqslant 1\biggr\}, \qquad Tx=(e^{\mu_1\tau}x_1,\dots,e^{\mu_n\tau}x_n)\quad\text{and} \quad Ix=(x_1,\dots,x_n). \end{equation*} \notag $$
Fix $\delta>0$, and for each $x\in W$ consider the set of random vectors
$$ \begin{equation*} Y_\delta(x)=\bigl\{y=(y_1,\dots,y_n)\colon \mathbf E(y)=Ix,\ \mathbf D(y_j)\leqslant\delta^2,\ j=1,\dots,n\bigr\}. \end{equation*} \notag $$
Let $l_2^n$ denote the space of vectors $x=(x_1,\dots,x_n)$ with the norm
$$ \begin{equation*} \|x\|_{l_2^n}=\biggl(\sum_{j=1}^n|x_j|^2\biggr)^{1/2}. \end{equation*} \notag $$
A recovery method assigns to a random vector $y\in Y_\delta(x)$ an element of $l_2^n$, which is regarded as an approximation of $Tx$. The error of a recovery method $ \varphi\colon\mathbb R^n\to l_2^n$ is the quantity
$$ \begin{equation*} e(T,W,I,\delta,\varphi)=\Bigl(\sup_{x\in W,\,y\in Y_\delta(x)}\mathbf E\bigl(\|Tx-\varphi(y)\|^2_{l_2^n}\bigr)\Bigr)^{1/2} \end{equation*} \notag $$
(we consider only measurable maps $\varphi$). The problem consists in finding the optimal recovery error
$$ \begin{equation*} E(T,W,I,\delta)=\inf_{\varphi\colon\mathbb R^n\to l_2^n}e(T,W,I,\delta,\varphi) \end{equation*} \notag $$
and a method delivering the infimum, which is called an optimal method.

To solve this problem we consider a more general problem, solve it and apply the result obtained to the original problem.

§ 3. General result

Let $X$ be a linear space, $Z$ be a normed linear space and $T\colon X\to Z$ be a linear operator. We must recover the values of $T$ on a certain set (class) $W\subset X$ from values of a linear operator $I\colon X\to\mathbb R^n$, which are given with a random error. For each $x\in W$ and $\delta>0$ we consider the set of random vectors

$$ \begin{equation*} Y_\delta(x)=\bigl\{y=(y_1,\dots,y_n)\colon \mathbf E(y)=Ix,\ \mathbf D(y_j)\leqslant\delta^2,\ j=1,\dots,n\bigr\} \end{equation*} \notag $$
and, similarly to § 2, define the error of a recovery method $\varphi\colon\mathbb R^n\to Z$ by
$$ \begin{equation*} e(T,W,I,\delta,\varphi)=\Bigl(\sup_{x\in W,\, y\in Y_\delta(x)}\mathbf E\bigl(\|Tx-\varphi(y)\|^2_Z\bigr)\Bigr)^{1/2} \end{equation*} \notag $$
(only measurable methods $\varphi$ are considered). The problem consists in finding an optimal recovery method (if it exists) and the optimal recovery error
$$ \begin{equation} E(T,W,I,\delta)=\inf_{\varphi\colon\mathbb R^n\to Z}e(T,W,I,\delta,\varphi). \end{equation} \tag{3.1} $$

Set

$$ \begin{equation*} W=\biggl\{x\in\mathbb R^n\colon \sum_{j=1}^n\nu_j|x_j|^2\leqslant1\biggr\}, \end{equation*} \notag $$
where $\nu_j>0$, $j=1,\dots,n$. Consider the linear operators $T\colon\mathbb R^n\to l_2^n$ and $I\colon\mathbb R^n\to \mathbb R^n$ defined by
$$ \begin{equation*} Tx=(\mu_1x_1,\dots,\mu_nx_n)\quad\text{and} \quad Ix=(x_1,\dots,x_n), \end{equation*} \notag $$
where $|\mu_j|>0$, $j=1,\dots,n$.

We introduce the notation

$$ \begin{equation*} \gamma_j=\frac{\sqrt{\nu_j}}{|\mu_j|}, \quad j=1,\dots,n,\quad\text{and} \quad \xi_j=\biggl(\sum_{k=1}^j\nu_k\biggl(\frac{\gamma_j}{\gamma_k}-1\biggr)\biggr)^{1/2}, \quad j=1,\dots, n. \end{equation*} \notag $$
Let $\gamma_1\leqslant\dots\leqslant\gamma_n$. Then it is easy to see that $0=\xi_1\leqslant\dots\leqslant\xi_n$.

Theorem 1. Let $1/\delta\in(\xi_s,\xi_{s+1}]$ for some $1\leqslant s\leqslant n-1$ or $1/\delta\in(\xi_n,+\infty)$ (and then set $s=n$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(\sum_{k=1}^s |\mu_k|^2\biggl(1-\frac{\gamma_k(1-c_1)}{\gamma_1}\biggr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_1=1-\frac{\delta^2\gamma_1\sum_{k=1}^s({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=1}^s\nu_k}, \end{equation} \tag{3.2} $$
and the method
$$ \begin{equation*} \varphi(y)=\sum_{k=1}^s\biggl(1-\frac{\gamma_k(1-c_1)}{\gamma_1}\biggr) \mu_ky_ke_k, \end{equation*} \notag $$
where $\{e_k\}$ is the standard basis in $l_2^n$, is optimal.
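
The statement of Theorem 1 translates directly into a computation. The following is a minimal numerical sketch (a hypothetical helper with toy data, not taken from the paper) that, for given $\nu_j$, $\mu_j$ and $\delta$ with $\gamma_1\leqslant\dots\leqslant\gamma_n$, computes $s$, $c_1$, the coefficients of the optimal linear method and the optimal error.

import numpy as np

def theorem1(nu, mu, delta):
    # nu_j > 0 and mu_j != 0, listed so that gamma_1 <= ... <= gamma_n
    nu, mu = np.asarray(nu, float), np.asarray(mu, float)
    gamma = np.sqrt(nu) / np.abs(mu)                     # gamma_j = sqrt(nu_j)/|mu_j|
    assert np.all(np.diff(gamma) >= 0), "order the coordinates so that gamma is nondecreasing"
    n = len(nu)
    # xi_j = (sum_{k<=j} nu_k (gamma_j/gamma_k - 1))^{1/2}, j = 1, ..., n
    xi = np.array([np.sqrt(np.sum(nu[:j + 1] * (gamma[j] / gamma[:j + 1] - 1.0)))
                   for j in range(n)])
    s = int(np.sum(xi < 1.0 / delta))                    # 1/delta lies in (xi_s, xi_{s+1}]
    k = np.arange(s)
    c1 = 1.0 - delta**2 * gamma[0] * np.sum(nu[k] / gamma[k]) / (1.0 + delta**2 * np.sum(nu[k]))
    damp = 1.0 - gamma[k] * (1.0 - c1) / gamma[0]        # factors of the optimal method
    coef = np.zeros(n)
    coef[k] = damp * mu[k]                               # phi(y) = sum_k coef_k y_k e_k
    error = delta * np.sqrt(np.sum(mu[k]**2 * damp))     # E(T, W, I, delta)
    return s, c1, coef, error

# toy data chosen only for illustration
print(theorem1(nu=[1.0, 2.0, 4.0], mu=[3.0, 2.0, 1.0], delta=0.1))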

Proof. 1. A lower bound. Fix some $\tau = (\tau_1,\dots,\tau_n) \in W$ such that
$$ \begin{equation*} \tau_1\geqslant\dots\geqslant\tau_n>0. \end{equation*} \notag $$
Consider the set
$$ \begin{equation*} B=\{x\in l_2^n\colon \ x_j=\pm\tau_j,\ j=1,\dots,n\}. \end{equation*} \notag $$
It is obvious that $B\subset W$. Set
$$ \begin{equation*} p_j=\frac{\delta^2}{\delta^2+\tau_j^2}, \qquad j=1,\dots,n. \end{equation*} \notag $$
By the monotonicity condition on the $\tau_j$ we have
$$ \begin{equation*} 0<p_1\leqslant\dots\leqslant p_n<1. \end{equation*} \notag $$

An arbitrary $x\in B$ is expressed in the form

$$ \begin{equation*} x=\sum_{j=1}^ns_j(x)\tau_je_j, \end{equation*} \notag $$
where $s_j(x)\in\{-1,1\}$. We define the distribution $\eta(x)$ for $x\in B$ by
$$ \begin{equation*} \eta(x)= \begin{cases} 0 &\text{with probability }p_1, \\ \dfrac{s_1(x)\tau_1}{1-p_1}\, e_1&\text{with probability }p_2-p_1, \\ \displaystyle\sum_{j=1}^2\dfrac{s_j(x)\tau_j}{1-p_j}\, e_j&\text{with probability }p_3-p_2, \\ \dots\dots\dots&\dots\dots\dots \\ \displaystyle\sum_{j=1}^{n-1}\dfrac{s_j(x)\tau_j}{1-p_j} \, e_j&\text{with probability }p_n-p_{n-1}, \\ \displaystyle\sum_{j=1}^n\dfrac{s_j(x)\tau_j}{1-p_j} \, e_j&\text{with probability }1-p_n. \end{cases} \end{equation*} \notag $$
Thus, the components of $\eta(x)=(\eta_1(x),\dots,\eta_n(x))$ have the following distributions:
$$ \begin{equation*} \eta_j(x)=\begin{cases} 0&\text{with probability }p_j, \\ \dfrac{s_j(x)\tau_j}{1-p_j}&\text{with probability }1-p_j, \end{cases} \qquad j=1,\dots,n. \end{equation*} \notag $$
It is easy to see that $\mathbf E(\eta_j(x))=x_j$, $j=1,\dots,n$. In addition,
$$ \begin{equation*} \mathbf D(\eta_j(x))=(1-p_j)\dfrac{\tau^2_j}{(1-p_j)^2}-\tau_j^2=\delta^2, \qquad j=1,\dots,n. \end{equation*} \notag $$
Therefore, $\eta(x)\in Y_\delta(x)$ for $x\in B$.
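
This construction admits a quick Monte Carlo sanity check (an illustration with arbitrary test values, not part of the proof): the two-point distribution of $\eta_j(x)$ has mean $x_j=s_j(x)\tau_j$ and variance $\delta^2$.

import numpy as np

rng = np.random.default_rng(0)
delta, tau_j, s_j = 0.3, 1.5, -1.0                      # arbitrary test values
p_j = delta**2 / (delta**2 + tau_j**2)
# eta_j equals 0 with probability p_j and s_j*tau_j/(1-p_j) with probability 1-p_j
samples = np.where(rng.random(1_000_000) < p_j, 0.0, s_j * tau_j / (1.0 - p_j))
print(samples.mean(), s_j * tau_j)                      # both close to -1.5
print(samples.var(), delta**2)                          # both close to 0.09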

Let $\varphi$ be a recovery method. Since the cardinality of $B$ is $2^n$, we obtain

$$ \begin{equation} \begin{aligned} \, \notag e^2(T,W,I,\delta,\varphi) &\geqslant\sup_{x\in B}\mathbf E\|Tx-\varphi(\eta(x))\|_{l_2^n}^2 \\ \notag &=\sup_{x\in B}\biggl(\sum_{j=1}^{n+1}(p_j-p_{j-1})\biggl\|Tx-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k(x)\tau_k}{1-p_k}e_k\biggr)\biggr\|_{l_2^n}^2\biggr) \\ \notag &\geqslant\frac1{2^n}\sum_{x\in B}\biggl(\sum_{j=1}^{n+1}(p_j-p_{j-1})\biggl\|Tx-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k(x)\tau_k}{1-p_k}e_k\biggr)\biggr\|_{l_2^n}^2\biggr) \\ &=\frac1{2^n}\sum_{j=1}^{n+1}(p_j-p_{j-1})\sum_{x\in B}\biggl\|Tx-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k(x)\tau_k}{1-p_k}e_k\biggr)\biggr\|_{l_2^n}^2; \end{aligned} \end{equation} \tag{3.3} $$
here $p_0=0$ and $p_{n+1}=1$. Set
$$ \begin{equation*} B_{s_1,\dots,s_{j-1}}=\{x\in B\colon s_1(x)=s_1,\dots,s_{j-1}(x)=s_{j-1}\}, \end{equation*} \notag $$
$j=1,\dots,n+1$ (for $j=1$ this set coincides with $B$). Then
$$ \begin{equation*} \begin{aligned} \, &\frac{p_j-p_{j-1}}{2^n}\sum_{x\in B}\biggl\|Tx-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k(x)\tau_k}{1-p_k}e_k\biggr)\biggr\|_{l_2^n}^2 \\ &\qquad =\frac{p_j-p_{j-1}}{2^n}\sum_{s_1,\dots,s_{j-1}} \,\sum_{x\in B_{s_1,\dots,s_{j-1}}}\biggl\|Tx-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k\tau_k}{1-p_k}e_k\biggr)\biggr\|_{l_2^n}^2. \end{aligned} \end{equation*} \notag $$
If $x\in B_{s_1,\dots,s_{j-1}}$, then
$$ \begin{equation*} x=\sum_{k=1}^{j-1}s_k\tau_ke_k+z(x), \quad\text{where } z(x)=\sum_{k=j}^ns_k(x)\tau_ke_k. \end{equation*} \notag $$
Moreover, together with each element
$$ \begin{equation*} \sum_{k=1}^{j-1}s_k\tau_ke_k+z(x)\in B_{s_1,\dots,s_{j-1}} \end{equation*} \notag $$
the set $B_{s_1,\dots,s_{j-1}}$ contains also the element
$$ \begin{equation*} \sum_{k=1}^{j-1}s_k\tau_ke_k-z(x). \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \begin{aligned} \, &\frac{p_j-p_{j-1}}{2^n}\sum_{s_1,\dots,s_{j-1}}\,\sum_{x\in B_{s_1,\dots,s_{j-1}}}\biggl\|Tx-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k\tau_k}{1-p_k}\, e_k\biggr)\biggr\|_{l_2^n}^2 \\ &=\frac{p_j-p_{j-1}}{2^n}\sum_{s_1,\dots,s_{j-1}}\,\sum_{x\in B_{s_1,\dots,s_{j-1}}}\biggl\|T\biggl(\sum_{k=1}^{j-1}s_k\tau_ke_k+z(x)\biggr) -\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k\tau_k}{1-p_k}\, e_k\biggr)\biggr\|_{l_2^n}^2 \\ &=\frac{p_j-p_{j-1}}{2^n}\sum_{s_1,\dots,s_{j-1}}\,\sum_{x\in B_{s_1,\dots,s_{j-1}}}\biggl\|T\biggl(\sum_{k=1}^{j-1}s_k\tau_ke_k\biggr)\,{+}\,Tz(x) \,{-}\,\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k\tau_k}{1-p_k}\, e_k\!\biggr)\biggr\|_{l_2^n}^2 \\ &=\frac{p_j-p_{j-1}}{2^{n+1}}\sum_{s_1,\dots,s_{j-1}}\,\sum_{x\in B_{s_1,\dots,s_{j-1}}}\biggl(\biggl\|T\biggl(\sum_{k=1}^{j-1}s_k\tau_ke_k\biggr)+Tz(x) \\ &\quad -\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k\tau_k}{1-p_k}\, e_k\biggr)\biggr\|_{l_2^n}^2+ \biggl\|T\biggl(\sum_{k=1}^{j-1}s_k\tau_ke_k\biggr)-Tz(x)-\varphi\biggl(\sum_{k=1}^{j-1} \dfrac{s_k\tau_k}{1-p_k}\, e_k\biggr)\biggr\|_{l_2^n}^2\biggr) \\ &\geqslant\frac{p_j-p_{j-1}}{2^n}\sum_{s_1,\dots,s_{j-1}}\,\sum_{x\in B_{s_1,\dots,s_{j-1}}}\|Tz(x)\|_{l_2^n}^2 =\frac{p_j-p_{j-1}}{2^n}\sum_{x\in B}\|Tz(x)\|_{l_2^n}^2 \\ &=(p_j-p_{j-1})\sum_{k=j}^n|\mu_k|^2\tau_k^2. \end{aligned} \end{equation*} \notag $$
Substituting this into (3.3) we obtain
$$ \begin{equation*} \begin{aligned} \, &e^2(T,W,I,\delta,\varphi)\geqslant\sum_{j=1}^{n+1}(p_j-p_{j-1})\sum_{k=j}^n|\mu_k|^2\tau_k^2 \\ &\quad =\sum_{j=1}^n\biggl(p_j\sum_{k=j}^n|\mu_k|^2\tau_k^2- p_j\sum_{k=j+1}^n|\mu_k|^2\tau_k^2\biggr) =\sum_{j=1}^np_j|\mu_j|^2\tau_j^2=\sum_{j=1}^n\frac{\delta^2}{\delta^2+\tau_j^2} |\mu_j|^2\tau_j^2. \end{aligned} \end{equation*} \notag $$
Since the method $\varphi$ can be arbitrary, we have the estimate
$$ \begin{equation} E^2(T,W,I,\delta)\geqslant\sup_{\substack{\tau\in W\\ \tau_1\geqslant\dots\geqslant\tau_n>0}} \sum_{j=1}^n\frac{\delta^2}{\delta^2+\tau_j^2}|\mu_j|^2\tau_j^2. \end{equation} \tag{3.4} $$

Consider a vector $\tau=(\tau_1,\dots,\tau_k,0,\dots,0)\in W$ such that $\tau_1\geqslant\dots\geqslant\tau_k>0$, $1\leqslant k<n$. For sufficiently small $\varepsilon>0$ set $\tau_\varepsilon=(\tau_1(\varepsilon),\dots,\tau_n(\varepsilon)) $, where

$$ \begin{equation*} \begin{gathered} \, \tau_j(\varepsilon)= \begin{cases} \sqrt{\tau_j^2-\varepsilon},&1\leqslant j\leqslant k, \\ C\sqrt{\varepsilon},&k+1\leqslant j\leqslant n, \end{cases} \\ C=\biggl(\frac{\sum_{j=1}^k\nu_j}{\sum_{j=k+1}^n\nu_j}\biggr)^{1/2}. \end{gathered} \end{equation*} \notag $$
Then
$$ \begin{equation*} \sum_{j=1}^n\nu_j\tau_j^2(\varepsilon) =\sum_{j=1}^k\nu_j\tau_j^2-\varepsilon\sum_{j=1}^k\nu_j+ C^2\varepsilon\sum_{j=k+1}^n\nu_j =\sum_{j=1}^k\nu_j\tau_j^2\leqslant1. \end{equation*} \notag $$
Thus, $\tau_\varepsilon\in W$. For $\varepsilon<\tau_k^2/(1+C^2)$ we have
$$ \begin{equation*} \sqrt{\tau_k^2-\varepsilon}>C\sqrt\varepsilon. \end{equation*} \notag $$
Hence for such $\varepsilon$
$$ \begin{equation*} \tau_1(\varepsilon)\geqslant\dots\geqslant\tau_n(\varepsilon)>0. \end{equation*} \notag $$
It follows from (3.4) that
$$ \begin{equation*} E^2(T,W,I,\delta)\geqslant\sum_{j=1}^n\frac{\delta^2}{\delta^2+\tau_j^2(\varepsilon)} |\mu_j|^2\tau_j^2(\varepsilon). \end{equation*} \notag $$
Taking the limit as $\varepsilon\to0$ we obtain
$$ \begin{equation*} E^2(T,W,I,\delta)\geqslant\sum_{j=1}^k\frac{\delta^2}{\delta^2+\tau_j^2} |\mu_j|^2\tau_j^2. \end{equation*} \notag $$
Thus,
$$ \begin{equation} E^2(T,W,I,\delta)\geqslant\sup_{\substack{\tau\in W\\\tau_1\geqslant\dots\geqslant\tau_n\geqslant0}} \sum_{j=1}^n\frac{\delta^2}{\delta^2+\tau_j^2}|\mu_j|^2\tau_j^2. \end{equation} \tag{3.5} $$

2. An upper bound. We find the error of methods of the form

$$ \begin{equation*} \varphi(y)=\sum_{j=1}^n\alpha_jy_je_j. \end{equation*} \notag $$
Set $z(x)=y(x)-Ix$. Then $\mathbf E(z(x))=0$ and $\mathbf D(z_j(x))\leqslant\delta^2$, $j=1,\dots,n$. We have
$$ \begin{equation*} \begin{aligned} \, &e^2(T,W,I,\delta,\varphi)=\sup_{\substack{x\in W\\y(x)\in Y_\delta(x)}}\mathbf E\bigl(\|Tx-\varphi(y(x))\|^2_{l_2^n}\bigr) \\ &\qquad =\sup_{\substack{x\in W\\y(x)\in Y_\delta(x)}}\mathbf E\bigl(\|Tx-\varphi(Ix)-\varphi(z(x))\|^2_{l_2^n}\bigr) \\ &\qquad =\sup_{\substack{x\in W\\y(x)\in Y_\delta(x)}}\bigl(\|Tx-\varphi(Ix)\|^2_{l_2^n} +\mathbf E(\|\varphi(z(x))\|^2_{l_2^n})-2\mathbf E(\varphi(z(x)),Tx-\varphi(Ix))\bigr); \end{aligned} \end{equation*} \notag $$
here $(\,\cdot\,{,}\,\cdot\,)$ is the inner product in $l_2^n$. It follows from the form of $\varphi$ that
$$ \begin{equation*} \begin{aligned} \, \mathbf E(\varphi(z(x)),Tx-\varphi(Ix)) &=\mathbf E\biggl( \sum_{j=1}^n\alpha_jz_j(x)e_j,Tx-\varphi(Ix)\biggr) \\ &=\sum_{j=1}^n(e_j,Tx-\varphi(Ix))\alpha_j\mathbf E(z_j(x))=0. \end{aligned} \end{equation*} \notag $$
Since
$$ \begin{equation*} \mathbf E(\|\varphi(z(x))\|^2_{l_2^n})=\mathbf E\biggl(\sum_{j=1}^n|\alpha_j|^2|z_j(x)|^2\biggr)=\sum_{j=1}^n|\alpha_j|^2\mathbf D(z_j(x)), \end{equation*} \notag $$
we have
$$ \begin{equation*} \begin{aligned} \, e^2(T,W,I,\delta,\varphi) &=\sup_{\substack{x\in W\\y(x)\in Y_\delta(x)}}\biggl(\|Tx-\varphi(Ix)\|^2_{l_2^n}+\sum_{j=1}^n|\alpha_j|^2\mathbf D(z_j(x))\biggr) \\ &=\sup_{x\in W}\|Tx-\varphi(Ix)\|^2_{l_2^n}+\delta^2\sum_{j=1}^n|\alpha_j|^2. \end{aligned} \end{equation*} \notag $$

Consider the extremal problem

$$ \begin{equation*} \|Tx-\varphi(Ix)\|^2_{l_2^n}\to\max, \qquad x\in W. \end{equation*} \notag $$
We can write it as
$$ \begin{equation*} \sum_{j=1}^n|\mu_j-\alpha_j|^2|x_j|^2\to\max, \qquad\sum_{j=1}^n\nu_j|x_j|^2\leqslant1. \end{equation*} \notag $$
From the inequality
$$ \begin{equation*} \begin{aligned} \, \sum_{j=1}^n|\mu_j-\alpha_j|^2|x_j|^2 &=\sum_{j=1}^n\frac{|\mu_j-\alpha_j|^2}{\nu_j}\nu_j|x_j|^2 \\ &\leqslant\max\biggl\{\frac{|\mu_1-\alpha_1|^2}{\nu_1},\dots, \frac{|\mu_n-\alpha_n|^2}{\nu_n}\biggr\}\sum_{j=1}^n\nu_j|x_j|^2 \end{aligned} \end{equation*} \notag $$
we deduce that
$$ \begin{equation*} \sup_{x\in W}\|Tx-\varphi(Ix)\|^2_{l_2^n}\leqslant\max\biggl\{\frac{|\mu_1-\alpha_1|^2}{\nu_1},\dots, \frac{|\mu_n-\alpha_n|^2}{\nu_n}\biggr\}. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\max\biggl\{\frac{|\mu_1-\alpha_1|^2}{\nu_1},\dots, \frac{|\mu_n-\alpha_n|^2}{\nu_n}\biggr\}+\delta^2\sum_{j=1}^n|\alpha_j|^2. \end{equation*} \notag $$
Set
$$ \begin{equation*} c_j=\frac{\alpha_j}{\mu_j}, \qquad j=1,\dots,n. \end{equation*} \notag $$
Then the error of the method $\varphi$ satisfies the inequality
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\max\biggl\{\frac{|1-c_1|^2}{\gamma_1^2},\dots, \frac{|1-c_n|^2}{\gamma_n^2}\biggr\}+\delta^2\sum_{j=1}^n|\mu_j|^2|c_j|^2. \end{equation*} \notag $$
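
As a numerical cross-check (an illustration with toy data, not part of the proof), minimizing the right-hand side of this bound over $c$ with a general-purpose optimizer reproduces, up to solver tolerance, the value $E^2(T,W,I,\delta)$ given by Theorem 1.

import numpy as np
from scipy.optimize import minimize

nu, mu, delta = np.array([1.0, 2.0, 4.0]), np.array([3.0, 2.0, 1.0]), 0.1
gamma = np.sqrt(nu) / np.abs(mu)                 # gamma_1 <= gamma_2 <= gamma_3 for these data

def bound(c):
    # max_j (1-c_j)^2/gamma_j^2 + delta^2 sum_j |mu_j|^2 c_j^2
    return np.max((1.0 - c)**2 / gamma**2) + delta**2 * np.sum(mu**2 * c**2)

res = minimize(bound, x0=np.full(3, 0.5), method="Nelder-Mead")

# closed-form value E^2(T, W, I, delta) of Theorem 1 (here 1/delta exceeds xi_n, so s = n)
c1 = 1.0 - delta**2 * gamma[0] * np.sum(nu / gamma) / (1.0 + delta**2 * np.sum(nu))
E2 = delta**2 * np.sum(mu**2 * (1.0 - gamma * (1.0 - c1) / gamma[0]))
print(res.fun, E2)                               # the two values should agree up to solver tolerance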

2.1. Let $1/\delta\in(\xi_s,\xi_{s+1}]$ for some $1\leqslant s\leqslant n-1$. Then it is easy to show that

$$ \begin{equation*} \frac1{\gamma_{s+1}}\leqslant\frac{\delta^2\sum_{k=1}^s({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=1}^s\nu_k}<\frac1{\gamma_s}. \end{equation*} \notag $$
Defining $c_1$ by (3.2) we obtain
$$ \begin{equation*} \frac1{\gamma_{s+1}}\leqslant\frac{1-c_1}{\gamma_1}<\frac1{\gamma_s}. \end{equation*} \notag $$
Let
$$ \begin{equation*} c_k=1-\gamma_k\frac{1-c_1}{\gamma_1}, \quad k=2,\dots,s, \qquad c_k=0, \quad k=s+1,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \frac{(1-c_k)^2}{\gamma_k^2}=\frac{(1-c_1)^2}{\gamma_1^2}, \qquad k=2,\dots,s. \end{equation*} \notag $$
For $k\geqslant s+1$
$$ \begin{equation*} \frac{(1-c_k)^2}{\gamma_k^2}=\frac1{\gamma_k^2}\leqslant\frac1{\gamma_{s+1}^2}\leqslant \frac{(1-c_1)^2}{\gamma_1^2}. \end{equation*} \notag $$
Therefore,
$$ \begin{equation} \max\biggl\{\frac{|1-c_1|^2}{\gamma_1^2},\dots, \frac{|1-c_n|^2}{\gamma_n^2}\biggr\}=\frac{(1-c_1)^2}{\gamma_1^2}. \end{equation} \tag{3.6} $$
Hence
$$ \begin{equation} \begin{aligned} \, \notag &e^2(T,W,I,\delta,\varphi)\leqslant\frac{(1-c_1)^2}{\gamma_1^2}+ \delta^2\sum_{k=1}^s|\mu_k|^2c_k^2 \\ \notag &\qquad =\frac{(1-c_1)^2}{\gamma_1^2} +\delta^2\sum_{k=1}^s|\mu_k|^2((1-c_k)^2-(1-c_k)+c_k) \\ \notag &\qquad=\frac{(1-c_1)^2}{\gamma_1^2}+\delta^2 \sum_{k=1}^s|\mu_k|^2\frac{\gamma_k^2}{\gamma_1^2}(1-c_1)^2 \\ \notag &\qquad\qquad -\delta^2\sum_{k=1}^s|\mu_k|^2\frac{\gamma_k}{\gamma_1}(1-c_1)+\delta^2\sum_{k=1}^s|\mu_k|^2c_k \\ \notag &\qquad =\delta^2\sum_{k=1}^s|\mu_k|^2c_k +\frac{1-c_1}{\gamma_1^2}\biggl(1-c_1+\delta^2\sum_{k=1}^s\nu_k(1-c_1)- \delta^2\gamma_1\sum_{k=1}^s\frac{\nu_k}{\gamma_k}\biggr) \\ \notag &\qquad =\delta^2\sum_{k=1}^s|\mu_k|^2c_k +\frac{1-c_1}{\gamma_1^2}\biggl((1-c_1)\biggl(1+\delta^2\sum_{k=1}^s\nu_k\biggr)- \delta^2\gamma_1\sum_{k=1}^s\frac{\nu_k}{\gamma_k}\biggr) \\ &\qquad =\delta^2\sum_{k=1}^s|\mu_k|^2c_k. \end{aligned} \end{equation} \tag{3.7} $$

Consider a vector $\widehat\tau\in l_2^n$ such that

$$ \begin{equation*} \widehat\tau_k^2=\delta^2\biggl(\frac{\gamma_1}{(1-c_1)\gamma_k}-1\biggr), \quad k=1,\dots,s, \qquad\widehat\tau_k=0, \quad k=s+1,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \begin{aligned} \, \sum_{k=1}^n\nu_k\widehat\tau_k^2 &=\delta^2\sum_{k=1}^s\nu_k \biggl(\frac{\gamma_1}{(1-c_1)\gamma_k}-1\biggr)=\frac{\delta^2\gamma_1}{1-c_1} \sum_{k=1}^s\frac{\nu_k}{\gamma_k}-\delta^2\sum_{k=1}^s\nu_k \\ &=\biggl(1+\delta^2\sum_{k=1}^s\nu_k\biggr)-\delta^2\sum_{k=1}^s\nu_k=1. \end{aligned} \end{equation*} \notag $$
Thus, $\widehat\tau\in W$. Substituting $\widehat\tau$ into (3.5) we obtain
$$ \begin{equation*} \begin{aligned} \, E^2(T,W,I,\delta) &\geqslant\sum_{k=1}^s\frac{\delta^2|\mu_k|^2\widehat\tau_k^2} {\delta^2+\widehat\tau_k^2}= \sum_{k=1}^s\frac{\delta^4|\mu_k|^2({\gamma_1}/((1-c_1)\gamma_k)-1)} {\delta^2\gamma_1/((1-c_1)\gamma_k)} \\ &=\delta^2\sum_{k=1}^s|\mu_k|^2\biggl(1-\frac{(1-c_1)\gamma_k}{\gamma_1}\biggr) =\delta^2\sum_{k=1}^s| \mu_k|^2c_k\geqslant e^2(T,W,I,\delta,\varphi). \end{aligned} \end{equation*} \notag $$
Hence $\varphi$ is an optimal method.

2.2. Now let $1/\delta>\xi_n$. Then

$$ \begin{equation*} \frac{\delta^2\sum_{k=1}^n({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=1}^n\nu_k}<\frac1{\gamma_n}. \end{equation*} \notag $$
Set
$$ \begin{equation*} c_1=1-\gamma_1\frac{\delta^2\sum_{k=1}^n({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=1}^n\nu_k}. \end{equation*} \notag $$
Then
$$ \begin{equation*} \frac{1-c_1}{\gamma_1}<\frac1{\gamma_n}. \end{equation*} \notag $$
Let
$$ \begin{equation*} c_k=1-\gamma_k\frac{1-c_1}{\gamma_1}, \qquad k=2,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \frac{(1-c_k)^2}{\gamma_k^2}=\frac{(1-c_1)^2}{\gamma_1^2}, \qquad k=2,\dots,n. \end{equation*} \notag $$
Thus, as in the previous case, we have equality (3.6). Repeating the calculations in (3.7) for $s=n$, we obtain
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\delta^2\sum_{k=1}^n|\mu_k|^2c_k. \end{equation*} \notag $$

Consider the vector $\widehat\tau\in l_2^n$ of the form

$$ \begin{equation*} \widehat\tau_k^2=\delta^2\biggl(\frac{\gamma_1}{(1-c_1)\gamma_k}-1\biggr), \qquad k=1,\dots,n. \end{equation*} \notag $$
Then
$$ \begin{equation*} \begin{aligned} \, \sum_{k=1}^n\nu_k\widehat\tau_k^2 &=\delta^2\sum_{k=1}^n\nu_k \biggl(\frac{\gamma_1}{(1-c_1)\gamma_k}-1\biggr)=\frac{\delta^2\gamma_1}{1-c_1} \sum_{k=1}^n\frac{\nu_k}{\gamma_k}-\delta^2\sum_{k=1}^n\nu_k \\ &=\biggl(1+\delta^2\sum_{k=1}^n\nu_k\biggr)-\delta^2\sum_{k=1}^n\nu_k=1. \end{aligned} \end{equation*} \notag $$
Thus, $\widehat\tau\in W$. Plugging $\widehat\tau$ into (3.5) we have
$$ \begin{equation*} \begin{aligned} \, E^2(T,W,I,\delta) &\geqslant\sum_{k=1}^n\frac{\delta^2|\mu_k|^2\widehat\tau_k^2} {\delta^2+\widehat\tau_k^2}= \sum_{k=1}^n\frac{\delta^4|\mu_k|^2 ({\gamma_1}/((1-c_1)\gamma_k)-1)}{\delta^2\gamma_1/((1-c_1)\gamma_k)} \\ & =\delta^2\sum_{k=1}^n|\mu_k|^2\biggl(1-\frac{(1-c_1)\gamma_k}{\gamma_1}\biggr)= \delta^2\sum_{k=1}^n|\mu_k|^2c_k\geqslant e^2(T,W,I,\delta,\varphi). \end{aligned} \end{equation*} \notag $$
Hence $\varphi$ is an optimal method.

Theorem 1 is proved.

§ 4. Optimal recovery of a solution of a system of linear differential equations

Here we first present the solution of the recovery problem when the initial point lies in an ellipsoid and then consider the special case of a ball. After that we consider the problem of recovery from the coordinates of the solution at time $T_1$, known with a random error, first when the initial point lies in an ellipsoid and then in the special case of a ball.

4.1. Recovering solutions of a system of linear differential equations from information with a random error at the initial moment of time

Consider the Cauchy problem for a system of homogeneous linear differential equations

$$ \begin{equation} \begin{cases} \dfrac{dx}{dt}=Ax, \\ x(0)=x_0, \end{cases} \end{equation} \tag{4.1} $$
where $x(t)\in\mathbb R^n$, $t\geqslant0$, and $A=(a_{ij})$, $a_{ij}\in\mathbb R$.

Let $A$ be a selfadjoint matrix and

$$ \begin{equation*} \mu_1> \mu_2> \dots >\mu_n \end{equation*} \notag $$
be its eigenvalues. Let $\{e_{j}\}_{j=1}^{n}$ denote the orthonormal basis of eigenvectors corresponding to the eigenvalues $\mu_j$, $j=1,\dots,n$.

Let

$$ \begin{equation*} x_0=\sum_{j=1}^n x_{j}e_{j}. \end{equation*} \notag $$
Then the solution of (4.1) can be expressed in the form
$$ \begin{equation*} x(t)=\sum_{j=1}^ne^{\mu_jt}x_{j}e_{j}. \end{equation*} \notag $$

Assume that we know the coordinates of the initial point $x_0$ with a random error. We also know an ellipsoid containing $x_0$. We must recover the solution at time $\tau>0$.

For $x=(x_1,\dots,x_n)\in\mathbb R^n$ set

$$ \begin{equation*} W=\biggl\{x\in\mathbb R^n\colon \sum_{j=1}^n\nu_jx_j^2\leqslant1\biggr\}, \qquad Tx=(e^{\mu_1\tau}x_1,\dots,e^{\mu_n\tau}x_n)\quad\text{and} \quad Ix=(x_1,\dots,x_n). \end{equation*} \notag $$

As in the general statement, a recovery method assigns to a random vector $y\in Y_\delta(x)$ an element of $\mathbb R^n$ viewed as an approximation of $Tx$. Thus, the present recovery problem reduces to the one considered above. We use Theorem 1.

Set

$$ \begin{equation*} \gamma_{j}=\frac{\sqrt{\nu_{j}}}{e^{\mu_j\tau}}, \quad j=1,\dots,n,\quad\text{and} \quad\xi_{j}=\biggl(\sum_{k=1}^j \nu_{k}\biggl(\frac{\gamma_{j}}{\gamma_{k}}-1\biggr)\biggr)^{1/2}, \quad j=1,\dots,n. \end{equation*} \notag $$

Assume that $\gamma_{1}\leqslant\dots\leqslant\gamma_{n}$.

Theorem 2. Let $1/\delta\in(\xi_{s},\xi_{s+1}]$ for some $1\leqslant s\leqslant n-1$, or let $1/\delta\in(\xi_{n},+\infty)$ (and then set $s=n$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(\sum_{k=1}^s e^{2\mu_k\tau} \biggl(1-\frac{\gamma_{k}(1-c_{1})}{\gamma_{1}}\biggr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_{1}=1-\frac{\delta^2\gamma_{1}\sum_{k=1}^s({\nu_{k}}/{\gamma_{k}})}{1+\delta^2\sum_{k=1}^s\nu_{k}}, \end{equation} \tag{4.2} $$
and the method
$$ \begin{equation*} \varphi(y)=\sum_{k=1}^s \biggl(1-\frac{\gamma_{k}(1-c_{1})}{\gamma_{1}}\biggr)e^{\mu_{k}\tau} y_{k}e_{k}, \end{equation*} \notag $$
is optimal.
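
A minimal numerical sketch of how Theorem 2 could be used (the matrix, the weights and all numbers below are illustrative assumptions): diagonalize $A$, pass to the eigenbasis, damp the noisy coordinates by the factors $1-\gamma_k(1-c_1)/\gamma_1$, multiply by $e^{\mu_k\tau}$ and return to the original coordinates.

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, -2.0]])                 # selfadjoint test matrix
eigval, V = np.linalg.eigh(A)                    # eigenvalues mu_j and orthonormal eigenvectors
tau, delta = 1.0, 0.2
nu = np.ones(3)                                  # ellipsoid weights in the eigenbasis (unit ball)

gamma = np.sqrt(nu) * np.exp(-eigval * tau)      # gamma_j = sqrt(nu_j) e^{-mu_j tau}
order = np.argsort(gamma)                        # enforce gamma_1 <= ... <= gamma_n
eigval, V, nu, gamma = eigval[order], V[:, order], nu[order], gamma[order]

xi = np.array([np.sqrt(np.sum(nu[:j + 1] * (gamma[j] / gamma[:j + 1] - 1.0)))
               for j in range(len(nu))])
s = int(np.sum(xi < 1.0 / delta))                # 1/delta lies in (xi_s, xi_{s+1}]
k = np.arange(s)
c1 = 1.0 - delta**2 * gamma[0] * np.sum(nu[k] / gamma[k]) / (1.0 + delta**2 * np.sum(nu[k]))
coef = np.zeros(len(nu))
coef[k] = (1.0 - gamma[k] * (1.0 - c1) / gamma[0]) * np.exp(eigval[k] * tau)   # method of Theorem 2

rng = np.random.default_rng(1)
x0 = np.array([0.3, -0.2, 0.1])                  # unknown initial point, ||x0|| <= 1
y = V.T @ x0 + delta * rng.standard_normal(3)    # noisy eigen-coordinates of x0
x_tau_estimate = V @ (coef * y)                  # recovered x(tau)
x_tau_true = V @ (np.exp(eigval * tau) * (V.T @ x0))
print(np.linalg.norm(x_tau_estimate - x_tau_true))   # realized error of one draw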

Let $A$ be a selfadjoint matrix,

$$ \begin{equation*} \lambda_1>\lambda_2>\dots>\lambda_m \end{equation*} \notag $$
be its eigenvalues and $r_k$ be the multiplicity of the eigenvalue $\lambda_k$, $k=1,\dots,m$. Let $\{e_{kj}\}_{j=1}^{r_k}$ denote an orthonormal system of vectors corresponding to the eigenvalue $\lambda_k$. Then
$$ \begin{equation*} e_{11},\ \dots,\ e_{1r_1},\ \dots,\ e_{m1},\ \dots,\ e_{mr_m} \end{equation*} \notag $$
is an orthonormal basis of $\mathbb R^n$.

Let

$$ \begin{equation*} x_0=\sum_{k=1}^m \sum_{j=1}^{r_k}c_{kj}e_{kj}. \end{equation*} \notag $$
Then the solution of (4.1) has the expression
$$ \begin{equation*} x(t)=\sum_{k=1}^me^{\lambda_kt}\sum_{j=1}^{r_k}c_{kj}e_{kj}. \end{equation*} \notag $$

Now assume that at the initial moment of time the point $x_0$ lies in a ball of radius $R$:

$$ \begin{equation*} \sum_{k=1}^m\sum_{j=1}^{r_k} x_{kj}^2 \leqslant R^2. \end{equation*} \notag $$
Then the problem of the recovery of the solution at time $\tau>0$ reduces to the previous problem for
$$ \begin{equation*} \begin{gathered} \, W=\biggl\{x\in l_2^n\colon \sum_{k=1}^m \sum_{j=1}^{r_k} R^{-2}x_{kj}^2\leqslant1\biggr\}, \\ Tx=(e^{\lambda_1\tau}x_{11},\dots,e^{\lambda_1\tau}x_{1r_1}, \dots,e^{\lambda_m\tau}x_{m1},\dots,e^{\lambda_m\tau}x_{mr_m}) \end{gathered} \end{equation*} \notag $$
and
$$ \begin{equation*} Ix=(x_{11},\dots,x_{1r_1},\dots, x_{m1},\dots,x_{mr_m}). \end{equation*} \notag $$

Set

$$ \begin{equation*} \xi_{k}=R^{-1}\biggl(\sum_{j=1}^k r_j\bigl(e^{(\lambda_j-\lambda_k)\tau}-1\bigr)\biggr)^{1/2}, \qquad k=1,\dots,m. \end{equation*} \notag $$

Theorem 3. Let $1/\delta\in(\xi_{s},\xi_{s+1}]$ for some $1\leqslant s\leqslant m-1$, or let $1/\delta\in(\xi_{m},+\infty)$ (and then set $ s=m$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(\sum_{k=1}^s e^{2\lambda_k\tau} r_k \bigl(1-e^{(\lambda_1-\lambda_k)\tau}(1-c_{1})\bigr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_{1}=1-\frac{\delta^2R^{-2}e^{-\lambda_1\tau}\sum_{k=1}^s r_ke^{\lambda_k\tau}} {1+\delta^2R^{-2}\sum_{k=1}^sr_k}, \end{equation} \tag{4.3} $$
and the method
$$ \begin{equation*} \varphi(y)=\sum_{k=1}^s \bigl(e^{\lambda_k\tau}-e^{\lambda_1\tau}(1-c_{1})\bigr)\sum_{j=1}^{r_k} y_{kj}e_{kj}, \end{equation*} \notag $$
is optimal.
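
For concreteness, the quantities in Theorem 3 are straightforward to evaluate numerically; the sketch below uses arbitrary illustrative eigenvalues, multiplicities and radius.

import numpy as np

lam = np.array([1.0, 0.0, -1.0])                 # lambda_1 > lambda_2 > lambda_3
r = np.array([1, 2, 1])                          # multiplicities r_k
R, tau, delta = 1.0, 1.0, 0.2

xi = np.array([np.sqrt(np.sum(r[:i + 1] * (np.exp((lam[:i + 1] - lam[i]) * tau) - 1.0))) / R
               for i in range(len(lam))])        # xi_k of Theorem 3
s = int(np.sum(xi < 1.0 / delta))                # 1/delta lies in (xi_s, xi_{s+1}]
k = np.arange(s)
c1 = 1.0 - (delta**2 / R**2 * np.exp(-lam[0] * tau) * np.sum(r[k] * np.exp(lam[k] * tau))) \
         / (1.0 + delta**2 / R**2 * np.sum(r[k]))                    # formula (4.3)
E = delta * np.sqrt(np.sum(np.exp(2 * lam[k] * tau) * r[k]
                           * (1.0 - np.exp((lam[0] - lam[k]) * tau) * (1.0 - c1))))
print(s, c1, E)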

4.2. Recovering solutions of linear differential equations from information with a random error at time $T_1$

Consider the Cauchy problem for a system of homogeneous linear differential equations

$$ \begin{equation} \begin{cases} \dfrac{dx}{dt}=Ax, \\ x(0)=x_0, \end{cases} \end{equation} \tag{4.4} $$
where $x(t)\in\mathbb R^n$, $t\geqslant0$, and $A=(a_{ij})$, $a_{ij} \in \mathbb R$.

Let $A$ be a selfadjoint matrix and

$$ \begin{equation*} \lambda_1 > \lambda_2 > \dots > \lambda_n \end{equation*} \notag $$
be its eigenvalues. Let $\{e_{j}\}_{j=1}^{n}$ denote the orthonormal basis of eigenvectors corresponding to the eigenvalues $\lambda_j$, $j=1,\dots,n$.

Let

$$ \begin{equation*} x_0=\sum_{j=1}^n x_{j}e_{j}. \end{equation*} \notag $$
Then the solution of (4.4) has the expression
$$ \begin{equation*} x(t)=\sum_{j=1}^ne^{\lambda_jt}x_{j}e_{j}. \end{equation*} \notag $$
Moreover, assume that at the initial moment of time $x_0$ lies in an ellipsoid
$$ \begin{equation*} B=\biggl\{x\in\mathbb R^n\colon \sum_{j=1}^n b_jx_j^2\leqslant1\biggr\}. \end{equation*} \notag $$
We must recover the values of the solution at time $\tau$, $0<\tau<T_1$. Let $x_{j}$ denote the coordinates of the solution at time $T_1$. Then the condition that $x_0$ lies in an ellipsoid means that
$$ \begin{equation*} \sum_{j=1}^{n}b_je^{-2\lambda_jT_1}x_{j}^2\leqslant 1. \end{equation*} \notag $$

Thus, our recovery problem reduces to the one considered above, for

$$ \begin{equation*} W=\biggl\{x\in l_2^n\colon \sum_{j=1}^n\nu_{j}x_{j}^2\leqslant1\biggr\}, \end{equation*} \notag $$
where $\nu_{j}=b_je^{-2\lambda_jT_1}$, $j=1,\dots,n$.

For $x=(x_1,\dots,x_n)\in\mathbb R^n$ set

$$ \begin{equation*} Tx=(e^{-\lambda_1(T_1-\tau)}x_1,\dots,e^{-\lambda_n(T_1-\tau)}x_n), \qquad Ix=(x_1,\dots,x_n). \end{equation*} \notag $$

As in the general setting, each recovery method assigns to a random vector $y\in Y_\delta(x)$ an element of $\mathbb R^n$ viewed as an approximation of $Tx$. To solve the problem stated we use Theorem 1.

Set

$$ \begin{equation*} \gamma_{j}=\frac{\sqrt{\nu_{j}}}{e^{-\lambda_j(T_1-\tau)}}\quad\text{and} \quad \xi_{j}=\biggl(\sum_{k=1}^j \nu_k \biggl(\frac{\gamma_{j}}{\gamma_{k}}-1\biggr)\biggr)^{1/2}, \qquad j=1,\dots,n. \end{equation*} \notag $$

Assume that $\gamma_1\leqslant\dots\leqslant \gamma_n$.

Theorem 4. Let $1/\delta\in(\xi_{s},\xi_{s+1}]$ for some $1\leqslant s\leqslant n-1$, or let $1/\delta\in(\xi_{n},+\infty)$ (and then set $ s=n$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(\sum_{k=1}^s e^{-2\lambda_k(T_1-\tau)} \biggl(1-\frac{\gamma_k} {\gamma_1}(1-c_{1})\biggr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_{1}=1-\frac{\delta^2 \gamma_1 \sum_{k=1}^s({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=1}^s \nu_k}, \end{equation} \tag{4.5} $$
and the method
$$ \begin{equation*} \varphi(y)=\sum_{k=1}^s \biggl(1-\frac{\gamma_k}{\gamma_1}(1-c_{1})\biggr)e^{-\lambda_k(T_1-\tau)} y_{k}e_{k}, \end{equation*} \notag $$
is optimal.
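
The reduction behind Theorem 4 is also easy to evaluate numerically. The sketch below (toy data and names are illustrative assumptions) forms $\nu_j=b_je^{-2\lambda_jT_1}$ and the multipliers $e^{-\lambda_j(T_1-\tau)}$, sorts the coordinates so that $\gamma_1\leqslant\dots\leqslant\gamma_n$ and computes $s$, $c_1$ and the optimal error by the formulas of the theorem.

import numpy as np

lam = np.array([1.0, 0.0, -1.0])                 # eigenvalues of A
b = np.array([1.0, 1.0, 1.0])                    # ellipsoid weights for the initial point x_0
T1, tau, delta = 2.0, 0.5, 0.1

nu = b * np.exp(-2.0 * lam * T1)                 # nu_j = b_j e^{-2 lambda_j T_1}
mu = np.exp(-lam * (T1 - tau))                   # multipliers taking x(T_1) into x(tau)
gamma = np.sqrt(nu) / mu
order = np.argsort(gamma)                        # enforce gamma_1 <= ... <= gamma_n
nu, mu, gamma = nu[order], mu[order], gamma[order]

xi = np.array([np.sqrt(np.sum(nu[:j + 1] * (gamma[j] / gamma[:j + 1] - 1.0)))
               for j in range(len(nu))])
s = int(np.sum(xi < 1.0 / delta))
k = np.arange(s)
c1 = 1.0 - delta**2 * gamma[0] * np.sum(nu[k] / gamma[k]) / (1.0 + delta**2 * np.sum(nu[k]))   # (4.5)
E = delta * np.sqrt(np.sum(mu[k]**2 * (1.0 - gamma[k] * (1.0 - c1) / gamma[0])))
print(s, c1, E)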

Consider the Cauchy problem for the system of linear differential equations

$$ \begin{equation} \begin{cases} \dfrac{dx}{dt}=Ax, \\ x(0)=x_0, \end{cases} \end{equation} \tag{4.6} $$
where $x(t)\in\mathbb R^n$, $t\geqslant0$, and $A=(a_{ij})$, $a_{ij}\in\mathbb R$.

Let the matrix $ A$ be selfadjoint,

$$ \begin{equation*} \lambda_1>\lambda_2>\dots>\lambda_m \end{equation*} \notag $$
be its eigenvalues, and let $r_k$ be the multiplicity of the eigenvalue $\lambda_k$, $k=1,\dots,m$. Let $\{e_{kj}\}_{j=1}^{r_k}$ denote an orthonormal system of vectors corresponding to the eigenvalue $\lambda_k$. Then the vectors
$$ \begin{equation*} e_{11},\ \dots,\ e_{1r_1},\ \dots,\ e_{m1},\ \dots,\ e_{mr_m} \end{equation*} \notag $$
form an orthonormal basis of $\mathbb R^n$.

Let

$$ \begin{equation*} x_0=\sum_{k=1}^m \sum_{j=1}^{r_k}c_{kj}e_{kj}. \end{equation*} \notag $$
Then the solution (4.6) can be expressed in the form
$$ \begin{equation*} x(t)=\sum_{k=1}^me^{\lambda_kt}\sum_{j=1}^{r_k}c_{kj}e_{kj}. \end{equation*} \notag $$

Assume that we know, with some random error, the value of the solution of (4.6) at time $t=T_1$. As in the general case, a recovery method assigns to a random vector $y\in Y_\delta(x)$ an element of $\mathbb R^n$, which is regarded as an approximation of $Tx$. Also assume that at the initial moment of time $\|x_0\|\leqslant R$ ($\|\cdot\|$ is the Euclidean norm in $\mathbb R^n$). We must recover the value of the solution at time $\tau$, $0<\tau<T_1$. If $x_{kj}$ denote the coordinates of the solution at time $T_1$, then the condition $\|x_0\|\leqslant R$ means that

$$ \begin{equation*} \sum_{k=1}^me^{-2\lambda_kT_1}\sum_{j=1}^{r_k}x_{kj}^2\leqslant R^2. \end{equation*} \notag $$

Thus, our recovery problem reduces to the one considered above, for

$$ \begin{equation*} W=\biggl\{x\in l_2^n\colon \sum_{k=1}^m\sum_{j=1}^{r_k}\nu_{kj}x_{kj}^2\leqslant1\biggr\}, \end{equation*} \notag $$
where $\nu_{k}=\nu_{kj}=R^{-2}e^{-2\lambda_kT_1}$, $k=1,\dots,m$,
$$ \begin{equation*} Tx=(e^{-\lambda_1(T_1-\tau)}x_{11},\dots,e^{-\lambda_1(T_1-\tau)}x_{1r_1}, \dots,e^{-\lambda_m(T_1-\tau)}x_{m1},\dots,e^{-\lambda_m(T_1-\tau)}x_{mr_m}) \end{equation*} \notag $$
and
$$ \begin{equation*} Ix=(x_{11},\dots,x_{1r_1},\dots, x_{m1},\dots,x_{mr_m}). \end{equation*} \notag $$

Set

$$ \begin{equation*} \xi_{k}=R^{-1}\biggl(\sum_{j=1}^k r_je^{-2\lambda_jT_1}\bigl(e^{(\lambda_j-\lambda_k)\tau}-1\bigr)\biggr)^{1/2}, \qquad k=1,\dots,m. \end{equation*} \notag $$

Theorem 5. Let $1/\delta\in(\xi_{s},\xi_{s+1}]$ for some $1\leqslant s\leqslant m-1$, or let $1/\delta\in(\xi_{m},+\infty)$ (and then set $ s=m$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(\sum_{k=1}^s r_ke^{-2\lambda_k(T_1-\tau)} \bigl(1-e^{(\lambda_1-\lambda_k)\tau}(1-c_{1})\bigr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_{1}=1-\frac{\delta^2R^{-2}e^{-\lambda_1\tau}\sum_{k=1}^s r_ke^{(-2T_1+\tau)\lambda_k}} {1+\delta^2R^{-2}\sum_{k=1}^sr_ke^{-2\lambda_kT_1}}, \end{equation} \tag{4.7} $$
and the method
$$ \begin{equation*} \varphi(y)=\sum_{k=1}^s \bigl(e^{-\lambda_k(T_1-\tau)}-e^{\lambda_1\tau-\lambda_kT_1}(1-c_{1})\bigr)\sum_{j=1}^{r_k} y_{kj}e_{kj}, \end{equation*} \notag $$
is optimal.

§ 5. Recovering trigonometric polynomials

Let $\mathcal T_n$ denote the set of trigonometric polynomials

$$ \begin{equation} p_n(t)=\frac{a_0}2+\sum_{j=1}^n(a_j\cos jt+b_j\sin jt). \end{equation} \tag{5.1} $$
Set
$$ \begin{equation*} \mathcal T_n^r=\bigl\{p_n(\cdot)\in\mathcal T_n\colon \|p_n^{(r)}(\cdot)\|_{L_2(\mathbb T)}\leqslant1\bigr\}, \qquad r\geqslant1, \end{equation*} \notag $$
where $\mathbb T=[-\pi,\pi]$ (with identified endpoints) and
$$ \begin{equation*} \|x(\cdot)\|_{L_2(\mathbb T)}=\biggl(\frac1\pi\int_{\mathbb T}|x(t)|^2\,dt\biggr)^{1/2}. \end{equation*} \notag $$

We consider the problem of the recovery of the $k$th derivative of a polynomial in $\mathcal T_n^r$, $0 \leqslant k <r$, from the coefficients of this polynomial, which are known with a random error. We reduce this to the general problem (3.1). The set $\mathcal T_n^r$ is the set of polynomials (5.1) such that

$$ \begin{equation*} \sum_{j=1}^nj^{2r}(a_j^2+b_j^2)\leqslant1. \end{equation*} \notag $$
Hence setting
$$ \begin{equation*} \begin{gathered} \, x=(a_0,a_1,b_1,\dots,a_n,b_n), \qquad W=\biggl\{x\in\mathbb R^{2n+1}\colon \sum_{j=1}^nj^{2r}(a_j^2+b_j^2)\leqslant1\biggr\}, \\ Ix=x\quad\text{and} \quad Tx=\biggl(\frac{a_0}{\sqrt2}\,\chi_k,a_1,b_1,\dots,n^ka_n,n^kb_n\biggr), \qquad \chi_k= \begin{cases} 1,&k=0, \\ 0,&k\geqslant1,\end{cases} \end{gathered} \end{equation*} \notag $$
we arrive at (3.1). Note that the values of the operator $T$ are coefficients of the expansion with respect to the orthonormal basis
$$ \begin{equation*} \biggl(\frac1{\sqrt2},\cos\biggl(t+\frac{\pi k}2\biggr),\sin\biggl(t+\frac{\pi k}2\biggr),\dots,\cos\biggl(nt+\frac{\pi k}2\biggr), \sin\biggl(nt+\frac{\pi k}2\biggr)\biggr). \end{equation*} \notag $$

We cannot apply Theorem 1 to this problem because here we have $\nu_1 = 0$. Thus we use a modification of this theorem. Consider a set $W$ and an operator $T$ in the case when $\nu_1=0$ and $\mu_1\geqslant0$. We introduce the notation

$$ \begin{equation*} \gamma_j=\frac{\sqrt{\nu_j}}{|\mu_j|}, \quad j=2,\dots,n,\quad\text{and} \quad \xi_j=\biggl(\sum_{k=2}^j\nu_k\biggl(\frac{\gamma_j}{\gamma_k}-1\biggr)\biggr)^{1/2}, \quad j=2,\dots, n, \end{equation*} \notag $$
and assume that $\gamma_2\leqslant\dots\leqslant\gamma_n$.

Theorem 6. Let $1/\delta\in(\xi_s,\xi_{s+1}]$ for some $2\leqslant s\leqslant n-1$, or let $1/\delta\in(\xi_n,+\infty)$ (and then set $s=n$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(|\mu_1|^2+\sum_{k=2}^s |\mu_k|^2\biggl(1-\frac{\gamma_k(1-c_2)}{\gamma_2}\biggr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_2=1-\frac{\delta^2\gamma_2\sum_{k=2}^s (\nu_k/\gamma_k)} {1+\delta^2\sum_{k=2}^s\nu_k}, \end{equation} \tag{5.2} $$
and the method
$$ \begin{equation*} \varphi(y)=\mu_1y_1e_1+\sum_{k=2}^s \biggl(1-\frac{\gamma_k(1-c_2)}{\gamma_2}\biggr) \mu_ky_ke_k \end{equation*} \notag $$
is optimal.

Proof. Using the same arguments as in the lower estimate in the proof of Theorem 1 we arrive at inequality (3.5).

In estimating the error of methods of the form

$$ \begin{equation*} \varphi(y)=\sum_{j=1}^n\alpha_jy_je_j \end{equation*} \notag $$
from above, we obtain again the equality
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)=\sup_{x\in W}\|Tx-\varphi(Ix)\|^2_{l_2^n}+\delta^2\sum_{j=1}^n|\alpha_j|^2. \end{equation*} \notag $$
However, the extremal problem
$$ \begin{equation*} \|Tx-\varphi(Ix)\|^2_{l_2^n}\to\max,\qquad x\in W, \end{equation*} \notag $$
since we now have $\nu_1=0$, can be written this time in the form
$$ \begin{equation*} \sum_{j=1}^n|\mu_j-\alpha_j|^2|x_j|^2\to\max, \qquad\sum_{j=2}^n\nu_j|x_j|^2\leqslant1. \end{equation*} \notag $$
For $\alpha_1\ne\mu_1$ the value of this problem is $+\infty$, so we assume below that $\alpha_1=\mu_1$. From the inequality
$$ \begin{equation*} \begin{aligned} \, \sum_{j=2}^n|\mu_j-\alpha_j|^2|x_j|^2 &=\sum_{j=2}^n\frac{|\mu_j-\alpha_j|^2}{\nu_j}\nu_j|x_j|^2 \\ &\leqslant\max\biggl\{\frac{|\mu_2-\alpha_2|^2}{\nu_2},\dots, \frac{|\mu_n-\alpha_n|^2}{\nu_n}\biggr\}\sum_{j=2}^n\nu_j|x_j|^2 \end{aligned} \end{equation*} \notag $$
we obtain
$$ \begin{equation*} \sup_{x\in W}\|Tx-\varphi(Ix)\|^2_{l_2^n}\leqslant\max\biggl\{\frac{|\mu_2-\alpha_2|^2}{\nu_2},\dots, \frac{|\mu_n-\alpha_n|^2}{\nu_n}\biggr\}. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\max \biggl\{\frac{|\mu_2-\alpha_2|^2}{\nu_2},\dots, \frac{|\mu_n-\alpha_n|^2}{\nu_n}\biggr\}+\delta^2\sum_{j=1}^n|\alpha_j|^2. \end{equation*} \notag $$
Set
$$ \begin{equation*} c_j=\frac{\alpha_j}{\mu_j}, \qquad j=2,\dots,n. \end{equation*} \notag $$
Then for the error of the method $\varphi$ we have
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\max\biggl\{\frac{|1-c_2|^2}{\gamma_2^2},\dots, \frac{|1-c_n|^2}{\gamma_n^2}\biggr\}+\delta^2|\mu_1|^2+\delta^2\sum_{j=2}^n|\mu_j|^2|c_j|^2. \end{equation*} \notag $$

Let $1/\delta\in(\xi_s,\xi_{s+1}]$ for some $2\leqslant s\leqslant n-1$. Then it is easy to show that

$$ \begin{equation*} \frac1{\gamma_{s+1}}\leqslant\frac{\delta^2\sum_{k=2}^s({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=2}^s\nu_k}<\frac1{\gamma_s}. \end{equation*} \notag $$
If we define $c_2$ by (5.2), then
$$ \begin{equation*} \frac1{\gamma_{s+1}}\leqslant\frac{1-c_2}{\gamma_2}<\frac1{\gamma_s}. \end{equation*} \notag $$
Let
$$ \begin{equation*} c_k=1-\gamma_k\frac{1-c_2}{\gamma_2}, \quad\text{for } k=3,\dots,s,\quad\text{and}\quad c_k=0, \quad\text{for } k=s+1,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \frac{(1-c_k)^2}{\gamma_k^2}=\frac{(1-c_2)^2}{\gamma_2^2}, \qquad k=3,\dots,s. \end{equation*} \notag $$
For $k\geqslant s+1$
$$ \begin{equation*} \frac{(1-c_k)^2}{\gamma_k^2}=\frac1{\gamma_k^2}\leqslant\frac1{\gamma_{s+1}^2}\leqslant \frac{(1-c_2)^2}{\gamma_2^2}, \end{equation*} \notag $$
and therefore
$$ \begin{equation} \max\biggl\{\frac{|1-c_2|^2}{\gamma_2^2},\dots, \frac{|1-c_n|^2}{\gamma_n^2}\biggr\}=\frac{(1-c_2)^2}{\gamma_2^2}. \end{equation} \tag{5.3} $$
Consequently,
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\frac{(1-c_2)^2}{\gamma_2^2}+\delta^2|\mu_1|^2+ \delta^2\sum_{k=2}^s|\mu_k|^2c_k^2. \end{equation*} \notag $$
Using transformations similar to (3.7) we obtain
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\delta^2|\mu_1|^2+\delta^2\sum_{k=2}^s|\mu_k|^2c_k. \end{equation*} \notag $$

Consider a vector $\widehat\tau\in l_2^n$ of the form

$$ \begin{equation*} \widehat\tau_1=\tau_1, \quad\widehat\tau_k^2=\delta^2\biggl(\frac{\gamma_2}{(1-c_2)\gamma_k}-1\biggr) \text{ for } k=2,\dots,s, \quad\widehat\tau_k=0 \text{ for }k=s+1,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \begin{aligned} \, \sum_{k=2}^n\nu_k\widehat\tau_k^2 &=\delta^2\sum_{k=2}^s\nu_k \biggl(\frac{\gamma_2}{(1-c_2)\gamma_k}-1\biggr)=\frac{\delta^2\gamma_2}{1-c_2} \sum_{k=2}^s\frac{\nu_k}{\gamma_k}-\delta^2\sum_{k=2}^s\nu_k \\ &=\biggl(1+\delta^2\sum_{k=2}^s\nu_k\biggr)-\delta^2\sum_{k=2}^s\nu_k=1. \end{aligned} \end{equation*} \notag $$
Thus, $\widehat\tau\in W$. Substituting $\widehat\tau$ (for $\tau_1\geqslant\widehat\tau_2$) into (3.5) we obtain
$$ \begin{equation*} \begin{aligned} \, &E^2(T,W,I,\delta) \geqslant\sum_{k=1}^s\frac{\delta^2|\mu_k|^2\widehat\tau_k^2} {\delta^2+\widehat\tau_k^2}=\frac{\delta^2|\mu_1|^2\tau_1^2} {\delta^2+\tau_1^2}+ \sum_{k=2}^s\frac{\delta^4|\mu_k|^2 ({\gamma_2}/((1-c_2)\gamma_k)-1)}{\delta^2{\gamma_2}/((1-c_2)\gamma_k)} \\ &\qquad=\frac{\delta^2|\mu_1|^2\tau_1^2} {\delta^2+\tau_1^2}+\delta^2\sum_{k=2}^s|\mu_k|^2\biggl(1-\frac{(1-c_2)\gamma_k}{\gamma_2}\biggr)=\frac{\delta^2|\mu_1|^2\tau_1^2}{\delta^2+\tau_1^2}+\delta^2\sum_{k=2}^s|\mu_k|^2c_k. \end{aligned} \end{equation*} \notag $$
Letting $\tau_1$ tend to infinity yields
$$ \begin{equation*} E^2(T,W,I,\delta)\geqslant\delta^2|\mu_1|^2+\delta^2\sum_{k=2}^s|\mu_k|^2c_k \geqslant e^2(T,W,I,\delta,\varphi). \end{equation*} \notag $$
Hence $\varphi$ is an optimal method.

Now let $1/\delta>\xi_n$. Then

$$ \begin{equation*} \frac{\delta^2\sum_{k=2}^n({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=2}^n\nu_k}<\frac1{\gamma_n}. \end{equation*} \notag $$
Set
$$ \begin{equation*} c_2=1-\gamma_2\frac{\delta^2\sum_{k=2}^n({\nu_k}/{\gamma_k})} {1+\delta^2\sum_{k=2}^n\nu_k}. \end{equation*} \notag $$
Then
$$ \begin{equation*} \frac{1-c_2}{\gamma_2}<\frac1{\gamma_n}. \end{equation*} \notag $$
Let
$$ \begin{equation*} c_k=1-\gamma_k\frac{1-c_2}{\gamma_2}, \qquad k=3,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \frac{(1-c_k)^2}{\gamma_k^2}=\frac{(1-c_2)^2}{\gamma_2^2}, \qquad k=3,\dots,n. \end{equation*} \notag $$
Thus, as in the previous case, we obtain (5.3). Using transformations similar to (3.7) for $s=n$, we obtain
$$ \begin{equation*} e^2(T,W,I,\delta,\varphi)\leqslant\delta^2|\mu_1|^2+\delta^2\sum_{k=2}^n|\mu_k|^2c_k. \end{equation*} \notag $$

Consider the vector $\widehat\tau\in l_2^n$ of the form

$$ \begin{equation*} \widehat\tau_1=\tau_1, \qquad \widehat\tau_k^2=\delta^2\biggl(\frac{\gamma_2}{(1-c_2)\gamma_k}-1\biggr), \quad k=2,\dots,n. \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \begin{aligned} \, \sum_{k=2}^n\nu_k\widehat\tau_k^2 &=\delta^2\sum_{k=2}^n\nu_k \biggl(\frac{\gamma_2}{(1-c_2)\gamma_k}-1\biggr)=\frac{\delta^2\gamma_2}{1-c_2} \sum_{k=2}^n\frac{\nu_k}{\gamma_k}-\delta^2\sum_{k=2}^n\nu_k \\ &=\biggl(1+\delta^2\sum_{k=2}^n\nu_k\biggr)-\delta^2\sum_{k=2}^n\nu_k=1. \end{aligned} \end{equation*} \notag $$
Thus, $\widehat\tau\in W$. Plugging $\widehat\tau$ (for $\tau_1\geqslant\widehat\tau_2$) into (3.5) we obtain
$$ \begin{equation*} \begin{aligned} \, E^2(T,W,I,\delta) &\geqslant\sum_{k=1}^n\frac{\delta^2|\mu_k|^2\widehat\tau_k^2} {\delta^2+\widehat\tau_k^2}=\frac{\delta^2|\mu_1|^2\tau_1^2} {\delta^2+\tau_1^2}+ \sum_{k=2}^n\frac{\delta^4|\mu_k|^2 ({\gamma_2}/((1-c_2)\gamma_k)-1)}{\delta^2{\gamma_2}/((1-c_2)\gamma_k)} \\ &=\frac{\delta^2|\mu_1|^2\tau_1^2} {\delta^2+\tau_1^2}+\delta^2\sum_{k=2}^n|\mu_k|^2 \biggl(1-\frac{(1-c_2)\gamma_k}{\gamma_2}\biggr) \\ &=\frac{\delta^2|\mu_1|^2\tau_1^2}{\delta^2+\tau_1^2}+\delta^2\sum_{k=2}^n|\mu_k|^2c_k. \end{aligned} \end{equation*} \notag $$
Letting $\tau_1$ tend to infinity yields
$$ \begin{equation*} E^2(T,W,I,\delta)\geqslant\delta^2|\mu_1|^2+\delta^2\sum_{k=2}^n|\mu_k|^2c_k \geqslant e^2(T,W,I,\delta,\varphi). \end{equation*} \notag $$
Hence $\varphi$ is an optimal method.

Theorem 6 is proved.

Now we apply this theorem to the solution of our problem. Set

$$ \begin{equation*} \xi_j=\biggl(2\sum_{l=2}^j(l-1)^{r+k}((j-1)^{r-k}-(l-1)^{r-k}) \biggr)^{1/2}, \qquad j=2,\dots, n+1. \end{equation*} \notag $$

Theorem 7. Let $1/\delta\in(\xi_s,\xi_{s+1}]$ for some $2\leqslant s\leqslant n$, or let $1/\delta\in(\xi_{n+1},+\infty)$ (and then set $s=n+1$). Then

$$ \begin{equation*} E(T,W,I,\delta)=\delta\biggl(\frac{\chi_k^2}{2}+2\sum_{l=2}^{s} \bigl((l-1)^{2k}-(l-1)^{r+k}(1-c_2)\bigr)\biggr)^{1/2}, \end{equation*} \notag $$
where
$$ \begin{equation} c_2=1-\frac{2\delta^2 \sum_{l=2}^{s} (l-1)^{r+k}} {1+2\delta^2 \sum_{l=2}^s(l-1)^{2r}}, \end{equation} \tag{5.4} $$
and the method
$$ \begin{equation*} \begin{aligned} \, \varphi(y)&=\frac{\widetilde{a_0}}{\sqrt{2}}\chi_k+\sum_{l=2}^{s} \bigl(1-(l-1)^{r-k}(1-c_2)\bigr) \\ &\qquad \times\biggl((l-1)^k\widetilde{a}_{l-1}\cos\biggl((l-1)t+\frac{\pi k}{2}\biggr)+(l-1)^k\widetilde{b}_{l-1}\sin\biggl((l-1)t+\frac{\pi k}{2}\biggr)\biggr) \end{aligned} \end{equation*} \notag $$
where $y=(\widetilde{a}_0, \widetilde{a}_1, \widetilde{b}_1, \dots, \widetilde{a}_n, \widetilde{b}_n)$, is optimal.
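
A minimal computational sketch of the method of Theorem 7 (the helper name and all parameters are illustrative assumptions): it computes $\xi_j$, the index $s$ and the constant $c_2$ from (5.4), and applies the resulting damping to the noisy coefficients, returning the coefficients of the recovered $k$th derivative with respect to $\cos(jt+\pi k/2)$ and $\sin(jt+\pi k/2)$, together with the $a_0$-coefficient in the form (5.1), which survives only for $k=0$.

import numpy as np

def theorem7_filter(y, n, r, k, delta):
    y = np.asarray(y, float)                              # y = (a0~, a1~, b1~, ..., an~, bn~)
    freq_all = np.arange(1, n + 1)                        # frequencies 1, ..., n
    # xi_j, j = 2, ..., n+1, as defined above
    xi = np.array([np.sqrt(2.0 * np.sum(freq_all[:j]**(r + k)
                                        * (float(j)**(r - k) - freq_all[:j]**(r - k))))
                   for j in range(1, n + 1)])
    s = 1 + int(np.sum(xi < 1.0 / delta))                 # 1/delta lies in (xi_s, xi_{s+1}]
    kept = np.arange(1, s)                                # frequencies 1, ..., s-1 are kept
    c2 = 1.0 - 2.0 * delta**2 * np.sum(kept**(r + k)) / (1.0 + 2.0 * delta**2 * np.sum(kept**(2 * r)))
    damp = 1.0 - kept**(r - k) * (1.0 - c2)               # damping factors from Theorem 7
    a, b = y[1::2].copy(), y[2::2].copy()                 # noisy a_j, b_j
    a[:s - 1] *= damp * kept**k                           # coefficients of the k-th derivative
    b[:s - 1] *= damp * kept**k
    a[s - 1:] = 0.0
    b[s - 1:] = 0.0
    a0 = y[0] if k == 0 else 0.0                          # a_0-coefficient in the form (5.1)
    return a0, a, b

print(theorem7_filter(np.random.default_rng(2).normal(size=11), n=5, r=2, k=1, delta=0.05))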


Bibliography

1. S. M. Nikol'skii, “On estimates of approximations by quadrature formulae”, Uspekhi Mat. Nauk, 5:2(36) (1950), 165–177 (Russian)
2. S. A. Smolyak, Optimal recovery of functions and functionals of functions, Candidate dissertation, Moscow State University, Moscow, 1965, 152 pp. (Russian)
3. A. Sard, “Best approximate integration formulas; best approximation formulas”, Amer. J. Math., 71:1 (1949), 80–91
4. J. F. Traub and H. Woźniakowski, A general theory of optimal algorithms, ACM Monogr. Ser., Academic Press, Inc., New York–London, 1980, xiv+341 pp.
5. K. Yu. Osipenko, Introduction to the theory of optimal recovery, Lan', St Petersburg, 2022, 388 pp. (Russian)
6. K. Yu. Osipenko, Optimal recovery of analytic functions, Nova Science Publ., Huntington, NY, 2000, 220 pp.
7. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Exactness and optimality of methods for recovering functions from their spectrum”, Proc. Steklov Inst. Math., 293 (2016), 194–208
8. K. Yu. Osipenko, “On the reconstruction of the solution of the Dirichlet problem from inexact initial data”, Vladikavkaz Mat. Zh., 6:4 (2004), 55–62 (Russian)
9. G. G. Magaril-Il'yaev and V. M. Tikhomirov, Convex analysis: theory and applications, 3rd revised ed., Librokom, Moscow, 2011, 176 pp.; English transl. of 2nd ed., Transl. Math. Monogr., 222, Amer. Math. Soc., Providence, RI, 2003, viii+183 pp.
10. L. Plaskota, Noisy information and computational complexity, Cambridge Univ. Press, Cambridge, 1996, xii+308 pp.
11. D. L. Donoho, R. C. Liu and B. MacGibbon, “Minimax risk over hyperrectangles, and implications”, Ann. Statist., 18:3 (1990), 1416–1437
12. D. L. Donoho, “Statistical estimation and optimal recovery”, Ann. Statist., 22:1 (1994), 238–270
13. S. Reshetov, “Minimax risk over quadratically convex sets”, J. Math. Sci. (N.Y.), 167:4 (2010), 537–542
14. K. Yu. Krivosheev, “On optimal recovery of values of linear operators from information known with a stochastic error”, Sb. Math., 212:11 (2021), 1588–1607
15. M. Wilczyński, “Minimax estimation in linear regression with ellipsoidal constraints”, J. Statist. Plann. Inference, 137:1 (2007), 79–86
16. M. Wilczyński, “Minimax estimation over ellipsoids in $\ell_2$”, Statistics, 42:2 (2008), 95–100
17. E. V. Vvedenskaya, “On the optimal recovery of a solution of a system of linear homogeneous differential equations”, Differ. Equ., 45:2 (2009), 262–266
