Sbornik: Mathematics, 2025, Volume 216, Issue 2, Pages 140–167
DOI: https://doi.org/10.4213/sm10084e
(Mi sm10084)
 


Lyapunov stability of an equilibrium of the nonlocal continuity equation

Yu. V. Averboukh, A. M. Volkov

N. N. Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, Ekaterinburg, Russia
Abstract: The paper is devoted to developing Lyapunov's methods for analyzing the stability of an equilibrium of a dynamical system in the space of probability measures that is defined by a nonlocal continuity equation. Sufficient stability conditions are obtained on the basis of an analysis of the behaviour of a nonsmooth Lyapunov function in a neighbourhood of the equilibrium and the investigation of a certain quadratic form defined on the tangent space of the space of probability measures. The general results are illustrated by the study of the stability of an equilibrium for a gradient flow in the space of probability measures and of the Gibbs measure for a system of connected simple pendulums.
Bibliography: 28 titles.
Keywords: nonlocal continuity equation, Lyapunov's second method, nonsmooth Lyapunov function, stability, derivatives in the space of measures.
Received: 15.02.2024 and 09.10.2024
Published: 16.04.2025
Document Type: Article
MSC: Primary 34D20; Secondary 35B35, 35F20, 35Q70, 35R06, 82C22
Language: English
Original paper language: Russian

§ 1. Introduction

This paper is devoted to the study of the qualitative properties of the dynamical system in the space of probability measures defined by the nonlocal continuity equation

$$ \begin{equation*} \partial_t m_t+\operatorname{div} (f(x,m_t) m_t)=0. \end{equation*} \notag $$
This equation describes the behaviour of a system of infinitely many identical particles in the case when the right-hand side depends not only on the positions of the particles but also on their current distribution. In this case the function $f$ plays the role of a vector field governing the motion of each particle.
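For instance, if finitely many particles $x_1(t),\dots,x_N(t)$ evolve according to the coupled system $\dot{x}_i=f(x_i,m_t)$, where $m_t \triangleq N^{-1}\sum_{i=1}^N \delta_{x_i(t)}$ is their empirical measure, then for every smooth compactly supported test function $\varphi$
$$ \begin{equation*} \frac{d}{dt}\int_{\mathbb R^d} \varphi(x)\,m_t(dx) =\frac1N\sum_{i=1}^N \nabla \varphi(x_i(t)) \cdot f(x_i(t),m_t) =\int_{\mathbb R^d} \nabla \varphi(x) \cdot f(x,m_t)\,m_t(dx), \end{equation*} \notag $$
which is the continuity equation written in the weak (distributional) form.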

Note that the continuity equation arises in models of systems of charged particles (see [1]), of the behaviour of supermassive black holes (see [2]), of the behaviour of large groups of animals (see [3]), of the dynamics of biological processes (see [4]), of the dynamics of public opinion (see [5]) and so on.

Previously, some properties of the nonlocal continuity equation such as stability with respect to the parameters (see [6]) and exponential stability (see [7]) were considered in the literature. In addition, the paper [8] dealt with problems of the stability of the support of a measure and integral stability in the case when the equilibrium in question is a Dirac measure.

In our paper we study Lyapunov stability. This concept was first proposed by Lyapunov in his famous works [9] and [10] for systems of ordinary differential equations, and thereafter gained numerous applications. Since that time, the concept of Lyapunov stability has been attracting much attention from researchers (see, in particular, [11]–[15]).

The study of stability is based on Lyapunov’s first and second methods (see [9] and [10]). In the first method the stability of a system is characterized in terms of its linear approximation. In the second method we assume the existence of a differentiable function with certain properties, which is called a Lyapunov function. We also note that stability problems for dynamical systems on Banach and metric spaces were considered in [16]–[18]. The proof of Lyapunov’s second method in these cases is based on directional derivatives of a nonsmooth Lyapunov function.

Lyapunov’s methods have received significant development in the area of controlled systems, especially in stabilization problems (see [19] and [20]). Note that nonsmooth Lyapunov functions are often used in control theory; in this case sub- and supergradients are used instead of derivatives.

One feature of the dynamical system under consideration, defined by the nonlocal continuity equation, lies in the fact that its phase space, which is the space of probability measures, is not linear. At the same time, we can introduce the notion of intrinsic derivative (see [21]) on it, as well as a number of analogues of concepts from nonsmooth analysis (see [22]) including, in particular, strong and weak sub- and superdifferentials. Note that in the space of probability measures the squared distance to a given measure is generally a nondifferentiable function. However, some superdifferential elements can be described explicitly (see [22]).

To construct an analogue of Lyapunov’s second method, a nonsmooth Lyapunov function is used in this paper. The proposed construction of Lyapunov’s second method is based on the definition of a barycentric subdifferential (superdifferential) presented in § 2.3. Note that the barycentric subdifferential (superdifferential) can be constructed on the basis of the strong Fréchet subdifferential (superdifferential) proposed in [22]. This method makes it possible, in particular, to find sufficient stability conditions for systems of particles with dynamics specified by a gradient flow in the space of probability measures.

Based on Lyapunov’s second method, we study the stability of a stationary solution of the continuity equation in the case when the vector field is linear with respect to the phase variables. For this result we use the second method with the Lyapunov function equal to one half of the squared distance to the equilibrium. As an illustration the stability of the Gibbs measure in a system of connected simple pendulums is considered.

The rest of the paper is structured as follows.

In § 2 we present the general definitions and notation used in the paper. Equivalent definitions of a solution of the continuity equation are introduced; some properties of these solutions are described. These properties are proved in § 5.1.

Subsection 2.3 is devoted to generalizations of concepts in nonsmooth analysis to functions on the space of probability measures. In that subsection we introduce the notions of intrinsic derivative borrowed from [21] and of barycentric subdifferentials (superdifferentials) of functionals of measures. Some properties of these objects are described; their proofs are presented in § 5.2.

Section 3 presents a generalization of Lyapunov’s second method for an analysis of the stability of a dynamical system. In that section the Lyapunov function is not assumed to be smooth but only barycentrically superdifferentiable in a neighbourhood of the equilibrium. In § 3.1 we consider an example of dynamics defined by a gradient flow.

In § 4 stability conditions are deduced for systems with linear vector fields. We also consider there an example of dynamics specified by a system of connected simple pendulums.

Subsection 5.1 contains proofs of the properties of trajectories generated by the nonlocal continuity equation.

In § 5.2 we give the proofs of the properties of functions on the space of probability measures described previously and relating their derivatives of different types.

Finally, § 5.3 contains the proof of the boundedness of barycentric subdifferentials (superdifferentials) of locally Lipschitz functions.

§ 2. Terminology and notation

2.1. General definitions and notation

Recall that a metric space $(X,\rho)$ is called Polish if it is complete and separable.

We assume in this subsection that $(X,\rho)$ is a Polish space, $( Y, \|\,{\cdot}\,\| )$ is a separable Banach space and $p > 1$.

In what follows we use the following notation:

$\bullet$ $\mathbb{R}_+$ is the ray $[0,+\infty)$ on the number axis $\mathbb{R}$;

$\bullet$ $\mathbb R^d$ is the space of $d$-dimensional column vectors;

$\bullet$ $\mathbb{R}^{d*}$ is the space of $d$-dimensional row vectors;

$\bullet$ ${}^\top$ is the operation of transposition;

$\bullet$ $\mathsf{B}_R(x)$ is the closed ball of radius $R$ with centre $x$, that is,

$$ \begin{equation*} \mathsf{B}_R(x) \triangleq \{ y \in X\colon \rho(x,y) \leqslant R \}; \end{equation*} \notag $$

$\bullet$ $\mathsf{S}_R(x)$ is the sphere of radius $R$ with centre $x$, that is,

$$ \begin{equation*} \mathsf{S}_R(x) \triangleq \{ y \in X\colon \rho(x,y)=R \}; \end{equation*} \notag $$

$\bullet$ $\Gamma_T(X)$ is the space of curves in $X$ in the sense of continuous functions on the interval $[0,T]$, that is,

$$ \begin{equation*} \Gamma_T(X) \triangleq C([0,T];X); \end{equation*} \notag $$

$\bullet$ $e_t\colon \Gamma_T(X) \to X$ is the evaluation operator defined by

$$ \begin{equation*} e_t(x(\,{\cdot}\,)) \triangleq x(t) \quad \text{for } t \in [0,T]; \end{equation*} \notag $$

$\bullet$ $C_{\mathrm b}(X)$ is the space of bounded continuous functionals on $X$;

$\bullet$ $C^\infty_{\mathrm c}(X)$ is the space of infinitely differentiable functionals with compact support on a finite-dimensional space $X$;

$\bullet$ $\mathrm{Id}\colon X \to X$ is the identity function, that is,

$$ \begin{equation*} \operatorname{Id}(x)=x \end{equation*} \notag $$
for an arbitrary element $x \in X$;

$\bullet$ if $(\Omega,\mathcal{F})$ and $(\Omega',\mathcal{F}')$ are measurable spaces, $\mu$ is a measure on $\mathcal{F}$ and a map $h\colon \Omega \to \Omega'$ is $\mathcal{F}/\mathcal{F}'$-measurable, then $h\sharp \mu$ is the push-forward measure of $\mu$ through the function $h$ defined by

$$ \begin{equation*} (h\sharp \mu)(\Upsilon)=\mu(h^{-1}(\Upsilon)) \end{equation*} \notag $$
for an arbitrary $\Upsilon \in \mathcal{F}'$;

$\bullet$ $\mathcal{P}(X)$ is the space of Borel probability measures over $X$;

$\bullet$ $\operatorname{supp} \mu$ is the support of a measure $\mu$;

$\bullet$ $L_p(X,\mu;Y)$ is the space of functions from $X$ to $Y$ such that the $p$th powers of their norms are integrable with respect to $\mu$;

$\bullet$ $\varsigma_p(\mu)$ is the $p$th root of the $p$th moment of $\mu$, that is,

$$ \begin{equation*} \varsigma_p(\mu) \triangleq \biggl(\int_Y \|x\|^p\,\mu(dx) \biggr)^{1/p}= \|\operatorname{Id}\|_{L_p(Y,\mu;Y)}; \end{equation*} \notag $$

$\bullet$ $\mathcal{P}_p(X)$ is the subspace of elements of $\mathcal{P}(X)$ with finite $p$th moment;

$\bullet$ if $\{X_i\}_{i=1}^n$ is a finite set of arbitrary Polish spaces, $x= (x_i)_{i=1}^n \in X_1 \times \dots \times X_n$ and $I \subseteq \{1,\dots,n\}$, then $\mathfrak{p}^I$ is the projection operator defined by

$$ \begin{equation*} \mathfrak{p}^I(x)=(x_i)_{i \in I}. \end{equation*} \notag $$

Below we need the following obvious property of the images of measures under maps of a certain form.

Proposition 2.1. Let $m \in \mathcal{P}_p(Y)$, $\Phi \in L_p(Y,m;Y)$ and $\mu=(\operatorname{Id}+ \Phi)\sharp m$. Then

$$ \begin{equation*} \varsigma_p(\mu) \leqslant \varsigma_p(m)+\|\Phi\|_{L_p(Y,m;Y)}. \end{equation*} \notag $$
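Indeed, by the definition of the push-forward measure $\varsigma_p(\mu)=\|\operatorname{Id}+\Phi\|_{L_p(Y,m;Y)}$, and the claim follows from Minkowski’s inequality.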

Definition 2.2 (see [22], Definition 8.4.1). Let $\mu \in \mathcal{P}_p(\mathbb R^d)$. Then the tangent space of the space $\mathcal{P}_p(\mathbb R^d)$ at $\mu$ is the closure of the set of gradients of test functions in the space $L_p(\mathbb R^d,\mu;\mathbb{R}^{d*})$, that is,

$$ \begin{equation*} \operatorname{Tan}(\mu) \triangleq \overline{\bigl\{ \nabla \varphi\colon \varphi \in C^\infty_{\mathrm c}(\mathbb R^d) \bigr\}}^{L_p(\mathbb R^d,\mu;\mathbb{R}^{d*})}. \end{equation*} \notag $$

Definition 2.3 (see [23], Problem 1.2 and § 5.1). A plan between measures $\mu \in \mathcal{P}(X)$ and $\nu \in \mathcal{P}(Y)$ is a measure $\pi \in \mathcal{P}(X \times Y)$ such that

$$ \begin{equation*} \pi(A \times Y)=\mu(A), \qquad \pi(X \times B)=\nu(B) \end{equation*} \notag $$
for arbitrary Borel sets $A\subseteq X$ and $B \subseteq Y$.

The set of all plans is denoted by $\Pi(\mu,\nu)$.

Definition 2.4 (see [23], Problem 1.2 and § 5.1). The Kantorovich metric between two measures $\mu \in \mathcal{P}_p(X)$ and $\nu \in \mathcal{P}_p(X)$ is the function

$$ \begin{equation*} W_p(\mu,\nu) \triangleq \biggl( \inf_{\pi \in \Pi(\mu,\nu)} \int_{X \times X} \rho(x,y)^p\, \pi(dx\,dy) \biggr)^{1/p}. \end{equation*} \notag $$

A plan $\pi$ on which the infimum is attained is said to be optimal, and the set of all plans of this type is denoted by $\Pi_{\mathrm o}(\mu,\nu)$.

If an optimal plan can be expressed as $(\operatorname{Id},P)\sharp \mu$, then $P$ is called an optimal transport.
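For example, if $\mu=\delta_a$ and $\nu=\delta_b$ are Dirac measures concentrated at points $a,b \in X$, then $\Pi(\delta_a,\delta_b)$ consists of the single plan $\delta_a \otimes \delta_b$, which is therefore optimal, and $W_p(\delta_a,\delta_b)=\rho(a,b)$.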

Remark 2.5. In [23] some important properties of the Kantorovich metric were proved.

1. The set $\Pi_{\mathrm o}(\mu,\nu)$ is always nonempty (see [23], Problem 1.2 and Theorem 1.7).

2. The Kantorovich metric is indeed a metric in $\mathcal{P}_p(X)$ (see [23], Proposition 5.1).

3. The space $(\mathcal{P}_p(X), W_p)$ is Polish (see [23], Theorem 5.11).

Remark 2.6. Since

$$ \begin{equation*} W_p(\mu,\delta_0)=\varsigma_p(\mu), \end{equation*} \notag $$
the boundedness of a set in the space of measures in the sense of its inclusion in some ball is equivalent to the uniform boundedness of $\varsigma_p(\mu)$ for all measures $\mu$ in this set.

Definition 2.7. A function $\phi\colon \mathcal{P}_p(X) \to \mathbb{R}$ is said to be locally Lipschitz if for each $\alpha > 0$ there exists $K_\alpha > 0$ such that

$$ \begin{equation*} |\phi(m_1) - \phi(m_2)| \leqslant K_\alpha W_p(m_1,m_2) \end{equation*} \notag $$
for any measures $m_1,m_2 \in \mathcal{P}_p(X)$ such that $\varsigma_p(m_i) \leqslant \alpha$.

We assume that $K_\alpha$ is a nondecreasing function of $\alpha$ that is right continuous and has a limit from the left (a càdlàg function).

Note that $K_\alpha$ is then bounded on any compact set of values of $\alpha$.

Definition 2.8 (see [24], Definition 10.4.2). Let $\pi \in \mathcal{P}(X_1 \times \dots \times X_n)$ and let $\mathfrak{p}^k\sharp \pi \in \mathcal{P}(X_k)$ be its marginal with respect to the $k$th variable for some $k$. We introduce the notation

$$ \begin{equation*} \widehat{X}_k \triangleq X_1 \times \dots \times X_{k-1} \times X_{k+1} \times \dots \times X_n. \end{equation*} \notag $$
Then a family of measures $\{ \widehat{\pi}^k(\,\cdot\mid x_k)\colon x_k \in X_k \} \subseteq \mathcal{P}(\widehat{X}_k)$ is said to be a disintegration of $\pi$ with respect to the $k$th variable if
$$ \begin{equation*} \begin{aligned} \, &\int_{X_1 \times \dots \times X_n} \phi(x_1,\dots,x_n)\,\pi(dx_1 \dotsb dx_n) \\ &\qquad =\int_{X_k} \biggl( \int_{\widehat{X}_k} \phi(x_1,\dots,x_n)\,\widehat{\pi}^k(dx_1 \dotsb dx_{k-1} dx_{k+1} \dotsb dx_n \mid x_k) \biggr)\,(\mathfrak{p}^k\sharp\pi)(dx_k) \end{aligned} \end{equation*} \notag $$
for each test function $\phi \in C_{\mathrm b}(X_1 \times \dots \times X_n)$.

Remark 2.9. If $X_i=\mathbb R^d$ for all $i$, then it follows from [24], Corollary 10.4.13, that disintegrations exist for any measures in $\mathcal{P}(X_1 \times \dots \times X_n)$ with respect to any variables.
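For example, if $n=2$, $\mu \in \mathcal{P}(X_1)$, $\nu \in \mathcal{P}(X_2)$ and $P\colon X_1 \to X_2$ is a Borel map, then the disintegration of the plan $\pi=(\operatorname{Id},P)\sharp \mu$ with respect to the first variable is $\widehat{\pi}^1(\,\cdot\mid x_1)=\delta_{P(x_1)}$, while the disintegration of the product plan $\pi=\mu \otimes \nu$ is $\widehat{\pi}^1(\,\cdot\mid x_1)=\nu$ for $\mu$-almost all $x_1 \in X_1$.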

2.2. The nonlocal continuity equation

The main object in this paper is the initial value problem for the nonlocal continuity equation:

$$ \begin{equation} \partial_t m_t+\operatorname{div} (f(x,m_t) m_t)=0, \end{equation} \tag{2.1} $$
$$ \begin{equation} m_0=m_*, \end{equation} \tag{2.2} $$
where $f\colon \mathbb R^d \times \mathcal{P}_p(\mathbb R^d) \to \mathbb R^d$ is a vector field. Throughout what follows $p > 1$ is a fixed parameter. As already noted above, this equation describes a system of particles with dynamics specified by the equation
$$ \begin{equation} \dot{x}=f(x,m_t). \end{equation} \tag{2.3} $$

From now on, we assume that the following conditions hold.

Assumption 2.10. The function $f$ is Lipschitz continuous in both variables, that is, there is a constant $C_0$ such that

$$ \begin{equation*} \|f(x,\mu)-f(y,\nu)\| \leqslant C_0 \bigl(\|x-y\|+W_p(\mu,\nu)\bigr) \end{equation*} \notag $$
for arbitrary $x,y \in \mathbb R^d$ and $\mu,\nu \in \mathcal{P}_p(\mathbb R^d)$.

Remark 2.11. Setting $\nu=\delta_0$ and $y=0$ in Assumption 2.10 we see that $f$ has sublinear growth, that is, there is a constant $C_1$ such that

$$ \begin{equation*} \|f(x,\mu)\| \leqslant C_1\bigl(1+\|x\|+\varsigma_p(\mu)\bigr) \end{equation*} \notag $$
for arbitrary $x \in \mathbb R^d$ and $\mu \in \mathcal{P}_p(\mathbb R^d)$.
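Indeed, since $W_p(\mu,\delta_0)=\varsigma_p(\mu)$ (see Remark 2.6), Assumption 2.10 yields
$$ \begin{equation*} \|f(x,\mu)\| \leqslant \|f(0,\delta_0)\|+C_0\bigl(\|x\|+\varsigma_p(\mu)\bigr), \end{equation*} \notag $$
so one can take, for instance, $C_1=\max\{\|f(0,\delta_0)\|,C_0\}$.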

In what follows we introduce two definitions of a solution of the nonlocal continuity equation (2.1) on the interval $[0,T]$. Of course, a solution of the initial value problem (2.1), (2.2) is a solution of the continuity equation (2.1) satisfying condition (2.2).

Definition 2.12. A measure-valued function $m_{\cdot}$ is a solution of the continuity equation (2.1) on the interval $[0,T]$ in the sense of distributions if it satisfies the relation

$$ \begin{equation} \int_0^T \int_{\mathbb R^d} \bigl( \partial_t \varphi(x,t)+\nabla_x \varphi(x,t) \cdot f(x,m_t) \bigr)\, m_t(dx)\,dt=0 \end{equation} \tag{2.4} $$
for each function $\varphi \in C^\infty_{\mathrm c}(\mathbb R^d \times (0,T);\mathbb{R})$.

Definition 2.13. A measure-valued function $m_{\cdot}$ is a solution of the continuity equation (2.1) on the interval $[0,T]$ in the sense of Kantorovich if there exists a measure $\eta \in \mathcal{P}_p(\Gamma_T(\mathbb R^d))$ such that

$\bullet$ $m_t=e_t\sharp \eta$ for each $t \in [0,T]$;

$\bullet$ $\eta$-almost every curve $x(\,{\cdot}\,) \in \Gamma_T(\mathbb R^d)$ satisfies the equation
$$ \begin{equation*} x(t)=x(0)+\int_0^t f(x(\tau),m_\tau)\,d\tau, \qquad t \in [0,T]. \end{equation*} \notag $$

It follows from Proposition 8.2.1 of [22] that Definitions 2.12 and 2.13 of a solution of the continuity equation are equivalent. It was proved in [25] that a solution of this type exists and is unique under even more general conditions than those considered in our paper.

Remark 2.14. Since the solution exists for any $T > 0$ and the solution on a larger interval is an extension of the solution on a smaller one, we can assume that the solution is defined on the whole half-axis $\mathbb{R}_+$.

In what follows we let $\mathsf{X}^{s,z}_{m_{\cdot}}(r)$ denote the solution of the initial value problem

$$ \begin{equation*} \frac{d}{dt}x(t)=f(x(t),m_t), \qquad x(s)=z \end{equation*} \notag $$
for a fixed trajectory $m_{\cdot}$, calculated at time $r$.

At the end of this subsection we give some properties of the solution of the nonlocal continuity equation.

Proposition 2.15. Assume that $T > 0$, $m_*$ and $\alpha$ are such that $\varsigma_p(m_*) \leqslant \alpha$, and $m_{\cdot} \in \Gamma_T(\mathcal{P}_p(\mathbb R^d))$ is a solution of the initial value problem (2.1), (2.2). Then there exists a function $G_1(T,\alpha)$ such that the trajectory $\{ m_t\colon t \in [0,T] \}$ of the solution satisfies

$$ \begin{equation*} \varsigma_p(m_t) \leqslant G_1(T,\alpha) \end{equation*} \notag $$
for all $t \in [0,T]$.

Proposition 2.16. Assume that $T > 0$, $m_*$ and $\alpha$ are such that $\varsigma_p(m_*) \leqslant \alpha$, and $m_{\cdot} \in \Gamma_T(\mathcal{P}_p(\mathbb R^d))$ is a solution of the initial value problem (2.1), (2.2). Then there exists a function $G_2(T,\alpha)$ such that

$$ \begin{equation*} \biggl( \int_{\mathbb R^d} \|f(\mathsf{X}^{s,x}_{m_{\cdot}}(r),m_r) - f(x,m_s)\|^p\,m_s(dx) \biggr)^{1/p} \leqslant G_2(T,\alpha)(r-s) \end{equation*} \notag $$
for all $0 \leqslant s \leqslant r \leqslant T$.

Proofs of these statements are given in § 5.1.

2.3. Differentiability in the space of probability measures

In spaces of measures there are many different generalizations of the concept of differentiability. Some of them can be found in [22], [26] and [21]. We will use the notion of intrinsic derivative proposed by P.-L. Lions. In addition, we introduce the notions of barycentric sub- and superdifferentials. The relationships between some of these generalizations of the notion of differential are discussed in more detail in § 5.2.

Following [21], in order to introduce the notion of the intrinsic derivative of a functional on the space of probability measures, we first define the flat derivative.

Definition 2.17 (see [21], Definition 2.2.1). Let $\Phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}$. Then the flat derivative of a functional $\Phi$ at a point $m_* \in \mathcal{P}_p(\mathbb R^d)$ is a function ${\delta {\Phi}}/{\delta m}\colon \mathcal{P}_p(\mathbb R^d) \times \mathbb R^d \to \mathbb{R}$ such that

$$ \begin{equation*} \lim_{s \downarrow 0} \frac{\Phi((1-s)m_*+sm) - \Phi(m_*)}s =\int_{\mathbb R^d} \frac{\delta {\Phi}}{\delta m}(m_*,y)\,\bigl[m(dy) - m_*(dy) \bigr] \end{equation*} \notag $$
for any measure $m \in \mathcal{P}_p(\mathbb R^d)$.

Remark 2.18. If $\Phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}$ has a flat derivative, then

$$ \begin{equation*} \Phi(m) - \Phi(m_*)=\int_0^1 \int_{\mathbb R^d} \frac{\delta {\Phi}}{\delta m}((1-s)m_*+sm,y)\,\bigl[m(dy) - m_*(dy) \bigr]ds \end{equation*} \notag $$
for any $m_*,m \in \mathcal{P}_p(\mathbb R^d)$.

Remark 2.19. Since ${\delta {\Phi}}/{\delta m}$ is defined up to an additive constant, we fix it by taking the normalization

$$ \begin{equation*} \int_{\mathbb R^d} \frac{\delta {\Phi}}{\delta m}(m,y)\,m(dy)=0. \end{equation*} \notag $$

Definition 2.20 (see [21], Definition 2.2.2). Let $\Phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}$ have a flat derivative ${\delta {\Phi}}/{\delta m}$ that is differentiable with respect to the second argument. Then the intrinsic derivative of $\Phi$ at a point $m \in \mathcal{P}_p(\mathbb R^d)$ is the function $\nabla_m \Phi\colon \mathcal{P}_p(\mathbb R^d) \times \mathbb R^d \to \mathbb{R}^{d*}$ defined by

$$ \begin{equation*} \nabla_m \Phi(m,y) \triangleq \nabla_y \frac{\delta {\Phi}}{\delta m}(m,y). \end{equation*} \notag $$
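For example, if $\Phi(m)=\displaystyle\int_{\mathbb R^d} v(y)\,m(dy)$ for a continuously differentiable function $v$ for which this integral is finite on $\mathcal{P}_p(\mathbb R^d)$, then, with the normalization of Remark 2.19,
$$ \begin{equation*} \frac{\delta {\Phi}}{\delta m}(m,y)=v(y)-\int_{\mathbb R^d} v(z)\,m(dz) \quad\text{and}\quad \nabla_m \Phi(m,y)=\nabla v(y). \end{equation*} \notag $$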

Proposition 2.21. Assume that $m_*,m \in \mathcal{P}_p(\mathbb R^d)$, $\pi \in \Pi(m_*,m)$, $\Phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}$ and the flat derivative ${\delta {\Phi}}/{\delta m}$ is differentiable with respect to the second argument. Then the flat and intrinsic derivatives satisfy the relation

$$ \begin{equation} \begin{aligned} \, \notag &\int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y)\,\bigl[m(dy) - m_*(dy)\bigr] \\ &\qquad =\int_{\mathbb R^d\times\mathbb R^d} \int_0^1 \nabla_m \Phi((1-s)m_*+sm,y_*+q(y-y_*))\,dq\, (y-y_*)\,\pi(dy_*\,dy). \end{aligned} \end{equation} \tag{2.5} $$

This is proved in § 5.2.

Finally, we introduce the notions of barycentric sub- and superdifferentials.

Definition 2.22. Assume that $q=p'=p/(p-1)$ and a functional $\phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}$ is upper semicontinuous. Then the barycentric superdifferential $\partial^+_b \phi(m)$ of $\phi$ at a point $m$ is the set of all functions $\gamma \in L_q(\mathbb R^d,m;\mathbb{R}^{d*})$ such that for any function $b \in L_p(\mathbb R^d,m;\mathbb R^d)$ there exists a function $\xi\colon \mathbb{R}_+\to \mathbb{R}$ with the following properties:

$\bullet$ $\xi(\tau) \to 0$ as $\tau \downarrow 0$;

$\bullet$ for each $\tau > 0$
$$ \begin{equation*} \phi\bigl((\operatorname{Id}+\tau b)\sharp m\bigr) - \phi(m) \leqslant \tau \int_{\mathbb R^d} \gamma(x) \cdot b(x)\,m(dx)+\tau \xi(\tau). \end{equation*} \notag $$

The barycentric subdifferential $\partial^-_b\phi(m)$ of $\phi$ at the point $m$ is the set of functions $\gamma \in L_q(\mathbb R^d,m;\mathbb{R}^{d*})$ such that

$$ \begin{equation*} (-\gamma) \in \partial^+_b(-\phi)(m). \end{equation*} \notag $$

The barycentric differential $\partial_b\phi(m)$ of $\phi$ at the point $m$ is the intersection $\partial^+_b\phi(m) \cap \partial^-_b\phi(m)$.
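For example, let $p=2$ and $\phi(m)=\varsigma_2^2(m)$. Then for any $m \in \mathcal{P}_2(\mathbb R^d)$, $b \in L_2(\mathbb R^d,m;\mathbb R^d)$ and $\tau>0$
$$ \begin{equation*} \phi\bigl((\operatorname{Id}+\tau b)\sharp m\bigr)-\phi(m) =2\tau \int_{\mathbb R^d} x^\top b(x)\,m(dx)+\tau^2 \|b\|_{L_2(\mathbb R^d,m;\mathbb R^d)}^2, \end{equation*} \notag $$
so the function $\gamma(x)=2x^\top$ belongs to the barycentric differential $\partial_b\phi(m)$ (one can take $\xi(\tau)=\tau \|b\|_{L_2(\mathbb R^d,m;\mathbb R^d)}^2$).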

From now on, except in § 5, sub- and superdifferentiability are understood precisely as barycentric sub- and superdifferentiability.

Remark 2.23. If a function has a nonempty barycentric differential, then the latter contains precisely one element.

Proposition 2.24. Assume that $m \in \mathcal{P}_p(\mathbb R^d)$ and a function $\phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}$ has an intrinsic derivative $\nabla_m \phi$ at $m$. Then $\nabla_m \phi(m,\cdot) \in \partial_b\phi(m)$.

This is proved in § 5.2.

The boundedness property of the barycentric superdifferential of a locally Lipschitz function, which is also of independent interest, is described in § 5.3.

§ 3. Lyapunov’s second method

We introduce the main notions of the theory of Lyapunov stability.

Definition 3.1. We say that $\widehat{m}$ is an equilibrium for equation (2.1) if the measure-valued function $m_{\cdot}$ defined as $m_t \equiv \widehat{m}$ is a solution of the continuity equation (2.1). Using Definition 2.12, we can formulate this condition as the equality

$$ \begin{equation*} \operatorname{div} (f(x,\widehat{m}) \widehat{m})=0, \end{equation*} \notag $$
which is understood in the sense of distributions.

Definition 3.2. We say that an equilibrium $\widehat{m}$ is stable if for each $\varepsilon>0$ there exists $\delta>0$ such that

$$ \begin{equation*} W_p(\widehat{m},m_t) < \varepsilon \end{equation*} \notag $$
for all $m_*$ satisfying $W_p(\widehat{m},m_*) < \delta$ and all $t>0$, where $m_{\cdot}$ is a solution of the continuity equation with the initial condition $m_0=m_*$.

We now state the main result in this section, namely, an analogue of Lyapunov’s second method for establishing the stability of an equilibrium. Recall that $\mathsf{S}_\varepsilon(\widehat{m})$ and $\mathsf{B}_R(\widehat{m})$ are the sphere of radius $\varepsilon$ and the ball of radius $R$ with centre $\widehat{m}$, respectively.

Definition 3.3. Assume that $\widehat{m}$ is an equilibrium of the continuity equation (2.1) and $\phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}_+$ is a locally Lipschitz function such that $\phi(\widehat{m})=0$ and

$$ \begin{equation*} \inf_{\mu \in \mathsf{S}_\varepsilon(\widehat{m})} \phi(\mu) > 0 \end{equation*} \notag $$
for some $R>0$ and each $\varepsilon \leqslant R$. The function $\phi$ is called a superdifferentiable Lyapunov function for the equilibrium $\widehat{m}$ if it additionally satisfies the following conditions:

The function $\phi$ is called a subdifferentiable Lyapunov function for the equilibrium $\widehat{m}$ if it additionally satisfies the following conditions:

The function $\phi$ is called a differentiable Lyapunov function for the equilibrium $\widehat{m}$ if it additionally satisfies the following conditions:

Theorem 3.4 (Lyapunov’s second method for the nonlocal continuity equation). Assume that a measure $\widehat{m} \in \mathcal{P}_p(\mathbb R^d)$ is an equilibrium for equation (2.1), $f\colon \mathbb R^d \times \mathcal{P}_p(\mathbb R^d) \!\to\! \mathbb R^d$ satisfies Assumption 2.10, and there exists a superdifferentiable Lyapunov function $\phi$ for the equilibrium $\widehat{m}$. Then the equilibrium $\widehat{m}$ is stable.

To prove Theorem 3.4 we need an auxiliary lemma stating that the map $t \mapsto \phi(m_t)$, where $\phi$ is a Lyapunov function, has a weak maximum at the point $t=0$.

Lemma 3.5. Assume that $T>0$, $m_* \in \mathcal{P}_p(\mathbb R^d)$, $m_{\cdot} \in \Gamma_T(\mathcal{P}_p(\mathbb R^d))$ is a solution of the initial value problem (2.1), (2.2), and ${\phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}_+}$ is a locally Lipschitz function satisfying the following conditions:

Then
$$ \begin{equation*} \phi(m_T) - \phi(m_0) \leqslant 0. \end{equation*} \notag $$

Proof. First of all, we need the following fact: there exists a constant $C(T,m_*)$ such that for each $\varepsilon > 0$ and all $s \in [0,T)$ there exists $r \in (s,T]$ such that
$$ \begin{equation} \phi(m_r) - \phi(m_s) \leqslant C(T,m_*) \varepsilon (r-s). \end{equation} \tag{3.1} $$

We take an arbitrary $\varepsilon > 0$ and fix a moment of time $s \in [0,T)$. Based on condition (2) in the lemma, we choose an element $\gamma_s \in \partial^+_b\phi(m_s)$ of the superdifferential so that

$$ \begin{equation*} \int_{\mathbb R^d} \gamma_s(x) \cdot f(x,m_s)\,m_s(dx) \leqslant \varepsilon. \end{equation*} \notag $$
After that we choose $r \in (s,T]$ and $\tau > 0$ so that
$$ \begin{equation*} \xi_s(\tau) \leqslant \varepsilon,\qquad \tau=(r-s) \leqslant \varepsilon. \end{equation*} \notag $$
Here $\xi_s$ is a function from Definition 2.22 satisfying the inequality
$$ \begin{equation} \phi\bigl((\operatorname{Id}+\tau b_s)\sharp m_s\bigr) - \phi(m_s) \leqslant \tau \int_{\mathbb R^d} \gamma_s(x) \cdot b_s(x)\,m_s(dx)+\tau \xi_s(\tau). \end{equation} \tag{3.2} $$

Set

$$ \begin{equation*} b_s(x) \triangleq f(x,m_s) \end{equation*} \notag $$
and consider the difference
$$ \begin{equation*} \phi(m_r) - \phi(m_s) =\bigl[ \phi(m_r) - \phi((\operatorname{Id}+\tau b_s)\sharp m_s) \bigr] +\bigl[ \phi((\operatorname{Id}+\tau b_s)\sharp m_s) - \phi(m_s) \bigr]. \end{equation*} \notag $$

By the choice of $\tau$ we have

$$ \begin{equation*} \phi\bigl((\operatorname{Id}+\tau b_s)\sharp m_s\bigr) - \phi(m_s) \leqslant 2\varepsilon(r-s). \end{equation*} \notag $$

Now we estimate the difference

$$ \begin{equation*} \phi(m_r) - \phi\bigl((\operatorname{Id}+\tau b_s)\sharp m_s\bigr). \end{equation*} \notag $$
With this aim in view, we introduce the notation
$$ \begin{equation*} \beta_s(T,m_*) \triangleq G_1(T,\varsigma_p(m_*))+T \|f(\cdot,m_s)\|_{L_p(\mathbb R^d,m_s;\mathbb R^d)}. \end{equation*} \notag $$
Note that
$$ \begin{equation*} \varsigma_p((\operatorname{Id}+\tau b_s)\sharp m_s) \leqslant G_1(T,\varsigma_p(m_*))+\tau \|f(\cdot,m_s)\|_{L_p(\mathbb R^d,m_s;\mathbb R^d)} \leqslant \beta_s(T,m_*) \end{equation*} \notag $$
by virtue of Proposition 2.1 and
$$ \begin{equation*} \varsigma_p(m_r) \leqslant G_1(T,\varsigma_p(m_*)) \leqslant \beta_s(T,m_*) \end{equation*} \notag $$
due to Proposition 2.15.

We introduce the map

$$ \begin{equation*} \chi_1\colon z \mapsto \biggl( z+(r-s) b_s(z),\ z+\int_{s}^{r} f(\mathsf{X}^{s,z}_{m_{\cdot}}(t),m_t)\,dt \biggr) \end{equation*} \notag $$
and construct a plan by the rule
$$ \begin{equation*} \pi \triangleq \chi_1\sharp m_s. \end{equation*} \notag $$
Then it follows from the local Lipschitz property of $\phi$ that
$$ \begin{equation*} \phi(m_r) - \phi((\operatorname{Id}+\tau b_s)\sharp m_s) \leqslant K_{\beta_s(T,m_*)} W_p\bigl((\operatorname{Id}+\tau b_s)\sharp m_s,m_r\bigr), \end{equation*} \notag $$
while it follows from Definition 2.4 and Proposition 2.16 that
$$ \begin{equation*} \begin{aligned} \, &W_p\bigl((\operatorname{Id}+\tau b_s)\sharp m_s,m_r\bigr) \\ &\qquad\leqslant \biggl( \int_{\mathbb R^d} \biggl\| x+(r-s) b_s(x) - x - \int_{s}^{r} f(\mathsf{X}^{s,x}_{m_{\cdot}}(t),m_t)\,dt \biggr\|^p\,m_s(dx) \biggr)^{1/p}\\ &\qquad\leqslant G_2(T,\varsigma_p(m_*)) (r-s)^2 \leqslant G_2(T,\varsigma_p(m_*)) \varepsilon (r-s). \end{aligned} \end{equation*} \notag $$
Introducing the notation $\check{K} \triangleq \sup_{s \in [0,T]} K_{\beta_s(T,m_*)}$ we obtain the inequality
$$ \begin{equation} \phi(m_r) - \phi((\operatorname{Id}+\tau b_s)\sharp m_s) \leqslant \check{K} G_2(T,\varsigma_p(m_*)) \varepsilon (r-s). \end{equation} \tag{3.3} $$

Setting $C(T,m_*) \triangleq 2+\check{K} G_2(T,\varsigma_p(m_*))$ and taking account of inequalities (3.2) and (3.3) we deduce the required estimate (3.1).

Next, we consider the set

$$ \begin{equation*} \Theta \triangleq \{ \theta \in [0,T]\colon \phi(m_\theta) - \phi(m_0) \leqslant C(T,m_*) \varepsilon \theta \}. \end{equation*} \notag $$
It is nonempty since $0 \in \Theta$. Set $\check{\theta} \triangleq \sup \Theta$. The continuity of $m_{\cdot}$ and $\phi(\,{\cdot}\,)$ implies that $\check{\theta} \in \Theta$. We will show that $\check{\theta}=T$.

Reasoning by contradiction, we assume that $\check{\theta} < T$. Owing to (3.1), there is $\theta \in (\check{\theta},T]$ such that

$$ \begin{equation*} \phi(m_{\theta}) - \phi(m_{\check{\theta}}) \leqslant C(T,m_*) \varepsilon (\theta - \check{\theta}). \end{equation*} \notag $$
By the construction of $\check{\theta}$ we have
$$ \begin{equation*} \phi(m_{\theta}) - \phi(m_0) =(\phi(m_{\theta}) - \phi(m_{\check{\theta}})) + (\phi(m_{\check{\theta}}) - \phi(m_0)) \leqslant C(T,m_*) \varepsilon \theta. \end{equation*} \notag $$
This means that $\theta \in \Theta$ and $\theta > \check{\theta}=\sup \Theta$, which is impossible. Hence the assumption that $\check{\theta} < T$ is false.

Thus, $\sup \Theta=T \in \Theta$; therefore,

$$ \begin{equation*} \phi(m_T) - \phi(m_0) \leqslant C(T,m_*)\varepsilon T. \end{equation*} \notag $$
Since $\varepsilon > 0$ is arbitrary, the previous inequality yields the required estimate
$$ \begin{equation*} \phi(m_T) - \phi(m_0) \leqslant 0. \end{equation*} \notag $$
Lemma 3.5 is proved.

Let us turn to Theorem 3.4.

Proof of Theorem 3.4. First we note that, by the continuity of $\phi$ at the point $\widehat{m}$, for an arbitrary positive number $\Omega$ there exists $\omega > 0$ such that any $m \in \mathcal{P}_p(\mathbb R^d)$ satisfying the condition $W_p(m,\widehat{m}) < \omega$ also satisfies
$$ \begin{equation*} |\phi(m) - \phi(\widehat{m})| < \Omega. \end{equation*} \notag $$
It follows from the assumptions of the theorem that
$$ \begin{equation*} |\phi(m) - \phi(\widehat{m})|=|\phi(m)|=\phi(m). \end{equation*} \notag $$

Now, to show the stability of the equilibrium $\widehat{m}$ we assume the opposite. More precisely, let $\varepsilon > 0$ be such that for each $\delta > 0$ there exist a measure ${m_* \in \mathcal{P}_p(\mathbb R^d)}$ and time $\widehat T>0$ such that

$$ \begin{equation*} W_p(\widehat{m},m_*) < \delta\quad\text{and}\quad W_p(\widehat{m},m_{\widehat T})=\varepsilon. \end{equation*} \notag $$
Here the measure-valued function $m_{\cdot}$ is a solution of problem (2.1), (2.2).

Without loss of generality we can assume that:

By the choice of $\delta$, $\Omega$, and $\omega$, we have the inequality $\phi(m_*) < \widehat{\phi}$. Since $m_{\cdot}$ is a solution of (2.1) and
$$ \begin{equation*} m_t \in \mathsf{B}_\varepsilon(\widehat{m}) \subseteq \mathsf{B}_R(\widehat{m}) \end{equation*} \notag $$
for $t \in [0,\widehat T]$, the definition of $\widehat{\phi}$ and Lemma 3.5 yield the inequality
$$ \begin{equation*} \widehat{\phi} \leqslant \phi(m_{\widehat T}) \leqslant \phi(m_*) < \widehat{\phi}. \end{equation*} \notag $$
This contradiction proves Theorem 3.4.

We can similarly deduce a theorem concerning Lyapunov’s second method in the case of a subdifferentiable Lyapunov function.

Theorem 3.6 (Lyapunov’s second method for the nonlocal continuity equation). Assume that a measure $\widehat{m} \in \mathcal{P}_p(\mathbb R^d)$ is an equilibrium for (2.1), $f\colon \mathbb R^d \times \mathcal{P}_p(\mathbb R^d) \!\to\! \mathbb R^d$ satisfies Assumption 2.10, and there exists a subdifferentiable Lyapunov function $\phi$ for the equilibrium $\widehat{m}$. Then this equilibrium is stable.

As in the previous case, the proof of this theorem is based on the following lemma.

Lemma 3.7. Assume that $T>0$, $m_* \in \mathcal{P}_p(\mathbb R^d)$, $m_{\cdot} \in \Gamma_T(\mathcal{P}_p(\mathbb R^d))$ is a solution of the initial value problem (2.1), (2.2), and $\phi\colon \mathcal{P}_p(\mathbb R^d) \to \mathbb{R}_+$ is a locally Lipschitz function satisfying the following conditions:

Then
$$ \begin{equation*} \phi(m_T) - \phi(m_0) \leqslant 0. \end{equation*} \notag $$

Proposition 2.24 and Theorem 3.4 yield the following result directly.

Corollary 3.8. Let $\widehat{m} \in \mathcal{P}_p(\mathbb R^d)$ be an equilibrium for equation (2.1), let $f\colon \mathbb R^d \times \mathcal{P}_p(\mathbb R^d) \to \mathbb R^d$ satisfy Assumption 2.10, and assume that there exists a differentiable Lyapunov function $\phi$ for the equilibrium $\widehat{m}$. Then this equilibrium is stable.

3.1. Example: a gradient flow

Let $v\colon \mathbb R^d \times \mathcal{P}_2(\mathbb R^d) \to \mathbb{R}_+$. Consider the functional

$$ \begin{equation*} \phi(m)=\int_{\mathbb R^d} v(x,m)\,m(dx). \end{equation*} \notag $$
We assume that:

We consider a system of particles with dynamics

$$ \begin{equation*} \dot{x}=-(\nabla_m \phi(m_t,x))^\top. \end{equation*} \notag $$
It is associated with the continuity equation
$$ \begin{equation} \partial_t m_t - \operatorname{div} \bigl((\nabla_m \phi(m_t,x))^\top m_t \bigr)=0. \end{equation} \tag{3.4} $$
Note that (see [27], Proposition A.3)
$$ \begin{equation*} \nabla_m \phi(m,x)=\nabla_x v(x,m)+\int_{\mathbb R^d} \nabla_m v(y,m,x)\,m(dy). \end{equation*} \notag $$
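For instance, if $v$ does not depend on the measure variable, that is, $v(x,m)=v_0(x)$ for a smooth function $v_0\colon \mathbb R^d \to \mathbb{R}_+$ satisfying the assumptions above, then $\nabla_m v \equiv 0$, so that $\nabla_m \phi(m,x)=\nabla_x v_0(x)$ and (3.4) reduces to the continuity equation $\partial_t m_t-\operatorname{div} \bigl((\nabla_x v_0(x))^\top m_t \bigr)=0$ of the classical gradient flow generated by the potential $v_0$.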

We show that $\widehat{m}=\delta_0$ is an equilibrium for equation (3.4), that is,

$$ \begin{equation*} \int_{\mathbb R^d} \nabla_x \varphi(x) \cdot \nabla_m \phi(\widehat{m},x)^\top\,\widehat{m}(dx) =\nabla_x \varphi(0) \cdot \nabla_m \phi(\widehat{m},0)^\top =0 \end{equation*} \notag $$
for any $\varphi \in C^\infty_{\mathrm c}(\mathbb R^d)$. The left-hand side can be rewritten as
$$ \begin{equation*} \begin{aligned} \, &\int_{\mathbb R^d} \nabla_x \varphi(x) \cdot \biggl( \nabla_x v(x,\widehat{m}) +\int_{\mathbb R^d} \nabla_m v(y,\widehat{m},x)\,\widehat{m}(dy) \biggr)^\top\,\widehat{m}(dx) \\ &\qquad= \nabla_x \varphi(0) \cdot \bigl( \nabla_x v(0,\delta_0)+\nabla_m v(0,\delta_0,0) \bigr)^\top =0. \end{aligned} \end{equation*} \notag $$

We verify the assumptions of Theorem 3.4. Owing to Proposition 2.24,

$$ \begin{equation*} \nabla_m \phi(m,\cdot) \in \partial_b\phi(m) \subseteq \partial^+_b\phi(m). \end{equation*} \notag $$
Thus, the functional $\phi$ has the following properties:

$\bullet$ $\phi(\widehat{m})=0$;

$\bullet$ $\inf_{m \in \mathsf{S}_\varepsilon(\widehat{m})}\phi(m) > 0$ for any $\varepsilon > 0$;

$\bullet$ since

$$ \begin{equation*} \begin{aligned} \, \phi(\mu) - \phi(\nu) &=\int_{\mathbb R^d} v(x,\mu)\,\mu(dx) - \int_{\mathbb R^d} v(y,\nu)\,\nu(dy) \\ &=\int_{\mathbb R^d\times\mathbb R^d} (v(x,\mu)-v(y,\nu))\,\pi(dx\,dy) \\ &\leqslant \int_{\mathbb R^d\times\mathbb R^d} l \bigl(\|x-y\|+W_p(\mu,\nu)\bigr)\,\pi(dx\,dy) \\ &\leqslant l \biggl(\int_{\mathbb R^d\times\mathbb R^d} \|x-y\|^p\,\pi(dx\,dy) \biggr)^{1/p} +l W_p(\mu,\nu)\leqslant 2l W_p(\mu,\nu), \end{aligned} \end{equation*} \notag $$
where $\pi \in \Pi_{\mathrm o}(\mu,\nu)$, it follows that $\phi$ is globally Lipschitz continuous;

$\bullet$ the inequality

$$ \begin{equation*} \int_{\mathbb R^d} -\nabla_m \phi(m,x) \cdot \nabla_m \phi(m,x)^\top\,m(dx) =- \|\nabla_m \phi(m,\cdot)\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}^2 \leqslant 0 \end{equation*} \notag $$
holds.

Therefore, the equilibrium $\widehat{m}$ and the function $\phi$ satisfy the assumptions of Theorem 3.4. This implies the stability of $\widehat{m}$.

§ 4. Stability of systems with linear vector fields

In this section we assume that $p=2$. In what follows we need some additional assumptions on the dynamics $f$ and the equilibrium $\widehat{m}$, which we present below.

Assumption 4.1. The function $f(x,m)$ is linear and has separated variables, that is,

$$ \begin{equation*} f(x,m)=Ax+\int_{\mathbb R^d} By\,m(dy), \end{equation*} \notag $$
where $A$ and $B$ are constant $d \times d$ matrices.

Assumption 4.2. The measure $\widehat{m}$ is absolutely continuous with respect to the Lebesgue measure.

When Assumption 4.2 holds, it follows from [22], Theorem 6.2.4, that there exists a unique optimal transport $P$ between $\widehat{m}$ and any measure $m \in \mathcal{P}_2(\mathbb R^d)$. As a consequence, there exists an optimal plan of the form $\pi=(\operatorname{Id},P) \sharp \widehat{m} \in \Pi_{\mathrm o}(\widehat{m},m)$.

As a Lyapunov function we take one-half of the squared distance to the equilibrium, namely,

$$ \begin{equation*} \phi(m) \triangleq \frac12 W_2^2(m,\widehat{m}). \end{equation*} \notag $$
First we find an element of the barycentric superdifferential of the function $\phi$ introduced above.

Proposition 4.3. Let $\widehat{m},m \in \mathcal{P}_2(\mathbb R^d)$ and $\pi \in \Pi_{\mathrm o}(\widehat{m},m)$. Then the function

$$ \begin{equation*} \widehat{\pi}^1(x) \triangleq \int_{\mathbb R^d} (x-\widehat{x})^\top\,\pi(d\widehat{x}\mid x) \end{equation*} \notag $$
belongs to the barycentric superdifferential $\partial^+_b\phi(m)$ of $\phi$ at $m$.

Proof. We derive from [22], Theorem 10.2.2, that for any triplet of measures $\mu^1,\mu^2,\mu^3 \in \mathcal{P}_2(\mathbb R^d)$ with $\mu^2=\widehat{m}$ and any measure $\boldsymbol{\mu} \in \mathcal{P}(\mathbb R^d \times \mathbb R^d \times \mathbb R^d)$ such that
$$ \begin{equation*} \mathfrak{p}^{1,2}\sharp\boldsymbol{\mu} \in \Pi_{\mathrm o}(\mu^1,\mu^2)\quad\text{and}\quad \mathfrak{p}^{3}\sharp\boldsymbol{\mu} =\mu^3 \end{equation*} \notag $$
the inequality
$$ \begin{equation} \begin{aligned} \, \notag \phi(\mu^3)-\phi(\mu^1) &\leqslant \int_{(\mathbb R^d)^3} (x_1-x_2)^\top (x_3-x_1)\,\boldsymbol{\mu}(dx_1 \,dx_2\, dx_3) \\ &\qquad +\int_{(\mathbb R^d)^3} \|x_3 - x_1\|^2\,\boldsymbol{\mu}(dx_1\, dx_2\, dx_3) \end{aligned} \end{equation} \tag{4.1} $$
holds. We take arbitrary $\tau > 0$ and $b \in L_2(\mathbb R^d,m;\mathbb R^d)$ and the measures
$$ \begin{equation*} \mu^1 =m,\qquad \mu^2 =\widehat{m},\qquad \mu^3 =(\operatorname{Id}+\tau b) \sharp m \end{equation*} \notag $$
and
$$ \begin{equation*} \boldsymbol{\mu} =(\mathfrak{p}^2,\mathfrak{p}^1,\mathfrak{p}^2+\tau b \circ \mathfrak{p}^2) \sharp \pi. \end{equation*} \notag $$
We redenote the variables as follows:
$$ \begin{equation*} x =x_1\quad\text{and}\quad \widehat{x} =x_2. \end{equation*} \notag $$
By the definition of the push-forward measure, the right-hand side of (4.1) can be written as
$$ \begin{equation*} \begin{aligned} \, &\int_{(\mathbb R^d)^2} (x-\widehat{x})^\top \tau b(x)\,\pi(d\widehat{x} \,dx) +\int_{(\mathbb R^d)^2} \|\tau b(x)\|^2\,\pi(d\widehat{x}\, dx) \\ &\qquad =\int_{(\mathbb R^d)^2} (x-\widehat{x})^\top \tau b(x)\,\pi(d\widehat{x}\, dx) +\tau^2 \|b\|_{L_2(\mathbb R^d,m;\mathbb R^d)}^2. \end{aligned} \end{equation*} \notag $$
Disintegrating the measure $\pi$ with respect to the variable $x$ we obtain
$$ \begin{equation*} \begin{aligned} \, &\int_{\mathbb R^d} \int_{\mathbb R^d} (x-\widehat{x})^\top \tau b(x)\,\pi(d\widehat{x}\mid x)\,m(dx) +\tau^2 \|b\|_{L_2(\mathbb R^d,m;\mathbb R^d)}^2 \\ &\qquad =\int_{\mathbb R^d} \biggl( \int_{\mathbb R^d} (x-\widehat{x})^\top\,\pi(d\widehat{x}\mid x) \tau b(x) \biggr)\, m(dx) +\tau^2 \|b\|_{L_2(\mathbb R^d,m;\mathbb R^d)}^2. \end{aligned} \end{equation*} \notag $$
We set
$$ \begin{equation*} \xi(\tau) \triangleq \tau \|b\|_{L_2(\mathbb R^d,m;\mathbb R^d)}^2. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \phi((\operatorname{Id}+\tau b) \sharp m) - \phi(m) \leqslant \tau \int_{\mathbb R^d} \widehat{\pi}^1(x) b(x)\,m(dx) +\tau \xi(\tau). \end{equation*} \notag $$
The proposition is proved.

Now we state the main result in this section, namely, an analogue of Lyapunov’s method for establishing the stability of equilibria of systems with linear vector fields.

Theorem 4.4 (stability of a system with a linear vector field). Assume that $f\colon \mathbb R^d \times \mathcal{P}_2(\mathbb R^d) \to \mathbb R^d$ satisfies Assumptions 2.10 and 4.1 and $\widehat{m} \in \mathcal{P}_2(\mathbb R^d)$ is an equilibrium satisfying Assumption 4.2. Also assume that

$$ \begin{equation} \int_{\mathbb R^d} \xi(\widehat{x})^\top \, A \,\xi(\widehat{x})\, \widehat{m}(d\widehat{x})+\int_{\mathbb R^d} \xi(\widehat{x})^\top\,\widehat{m}(d\widehat{x}) \, B \,\int_{\mathbb R^d} \xi(\widehat{y})\,\widehat{m}(d\widehat{y}) \leqslant 0 \end{equation} \tag{4.2} $$
for each $\xi \in \operatorname{Tan}(\widehat{m})\setminus\{0\}$. Then the equilibrium $\widehat{m}$ is stable.
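For instance, in the scalar case $d=1$ with $f(x,m)=ax+b\displaystyle\int_{\mathbb R} y\,m(dy)$ for constants $a,b \in \mathbb{R}$, condition (4.2) takes the form
$$ \begin{equation*} a\int_{\mathbb R} \xi(\widehat{x})^2\,\widehat{m}(d\widehat{x})+b \biggl( \int_{\mathbb R} \xi(\widehat{x})\,\widehat{m}(d\widehat{x}) \biggr)^2 \leqslant 0, \end{equation*} \notag $$
and, since $\bigl(\int_{\mathbb R} \xi\,d\widehat{m}\bigr)^2 \leqslant \int_{\mathbb R} \xi^2\,d\widehat{m}$ by the Cauchy–Schwarz inequality, it is satisfied whenever $a \leqslant 0$ and $a+b \leqslant 0$.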

To prove Theorem 4.4 we need two auxiliary results.

Lemma 4.5. Assume that $\widehat{m} \in \mathcal{P}_2(\mathbb R^d)$ satisfies Assumption 4.2 and is an equilibrium for equation (2.1) with a vector field $f\colon \mathbb R^d \times \mathcal{P}_2(\mathbb R^d) \to \mathbb R^d$ satisfying Assumption 2.10. Then

$$ \begin{equation} \int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(x,m)\,\pi(d\widehat{x}\, dx) =\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},\widehat{m})\bigr)\, \pi(d\widehat{x}\, dx) \end{equation} \tag{4.3} $$
for an arbitrary measure $m \in \mathcal{P}_2(\mathbb R^d)$, where $\pi=(\operatorname{Id},P)\sharp \widehat{m} \in \Pi_{\mathrm o}(\widehat{m},m)$.

Proof. Adding and subtracting $\displaystyle\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(\widehat{x},\widehat{m})\, \pi(d\widehat{x} \,dx)$ on the left-hand side of (4.3) we obtain
$$ \begin{equation} \begin{aligned} \, \notag \int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(x,m)\,\pi(d\widehat{x}\, dx) &=\int_{\mathbb R^d\times\mathbb R^d} (x\,{-}\,\widehat{x})^\top \bigl(f(x,m)\,{-}\,f(\widehat{x},\widehat{m})\bigr)\, \pi(d\widehat{x}\, dx) \\ &\qquad +\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(\widehat{x},\widehat{m})\, \pi(d\widehat{x}\, dx). \end{aligned} \end{equation} \tag{4.4} $$
We consider the second term. By virtue of the definition of $\pi$, we have
$$ \begin{equation*} \int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(\widehat{x},\widehat{m})\,\pi(d\widehat{x}\, dx) =\int_{\mathbb R^d} (P(\widehat{x})-\widehat{x})^\top f(\widehat{x},\widehat{m})\, \widehat{m}(d\widehat{x}). \end{equation*} \notag $$
It follows from Proposition 8.5.2 of [22] that
$$ \begin{equation} (P-\operatorname{Id}) \in \operatorname{Tan}(\widehat{m}), \end{equation} \tag{4.5} $$
that is, there exists a sequence $\varphi_n \in C^\infty_{\mathrm c}(\mathbb R^d)$ such that
$$ \begin{equation*} \nabla\varphi_n \to (P-\operatorname{Id}) \text{ in the space } L_2(\mathbb R^d,\widehat{m};\mathbb R^d) \text{ as } n \to \infty. \end{equation*} \notag $$
Since $\widehat{m}$ is an equilibrium, it is true that
$$ \begin{equation*} \int_{\mathbb R^d} \nabla\varphi_n(\widehat{x}) f(\widehat{x},\widehat{m})\, \widehat{m}(d\widehat{x})=0. \end{equation*} \notag $$
Taking the limit we arrive at the equality
$$ \begin{equation*} \int_{\mathbb R^d} (P(\widehat{x})-\widehat{x})^\top f(\widehat{x},\widehat{m})\, \widehat{m}(d\widehat{x})=0. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(\widehat{x},\widehat{m})\,\pi(d\widehat{x} \, dx)=0. \end{equation*} \notag $$
This fact and (4.4) yield the assertion of the lemma.

Lemma 4.6. Assume that $f\colon \mathbb R^d \times \mathcal{P}_2(\mathbb R^d) \to \mathbb R^d$ satisfies Assumptions 2.10 and 4.1 and $\widehat{m} \in \mathcal{P}_2(\mathbb R^d)$ satisfies Assumption 4.2. Also assume that (4.2) holds for each $\xi \in \operatorname{Tan}(\widehat{m})\setminus\{0\}$. Then

$$ \begin{equation} \int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},\widehat{m})\bigr)\, \pi(d\widehat{x} \,dx) \leqslant 0 \end{equation} \tag{4.6} $$
for all $m \in \mathcal{P}_2(\mathbb R^d)$, where $\pi=(\operatorname{Id},P)\sharp \widehat{m} \in \Pi_{\mathrm o}(\widehat{m},m)$.

Proof. Below integration of vector quantities is understood in the componentwise sense. We consider the integrand
$$ \begin{equation} \begin{aligned} \, \notag &(x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},\widehat{m})\bigr) \\ &\qquad=(x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},m)\bigr) +(x-\widehat{x})^\top \bigl(f(\widehat{x},m)-f(\widehat{x},\widehat{m})\bigr) \end{aligned} \end{equation} \tag{4.7} $$
on the left-hand side of (4.6). We examine the terms on the right-hand side separately. According to Lagrange’s theorem, the first term has the form
$$ \begin{equation*} \begin{aligned} \, (x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},m)\bigr) &=(x-\widehat{x})^\top \int_0^1 \nabla_x f(\widehat{x}+q (x-\widehat{x}),m)\,dq \cdot (x-\widehat{x}) \\ &=(x-\widehat{x})^\top \, A \, (x-\widehat{x}). \end{aligned} \end{equation*} \notag $$
Proposition 2.21 implies that the second term has the form
$$ \begin{equation*} \begin{aligned} \, &(x-\widehat{x}) \cdot \bigl(f(\widehat{x},m)-f(\widehat{x},\widehat{m})\bigr) \\ &\qquad=(x-\widehat{x})^\top \int_0^1 \int_{\mathbb R^d\times\mathbb R^d} \int_0^1 \nabla_m f(\widehat{x},(1-s)\widehat{m}+sm,\widehat{y}+q(y-\widehat{y}))\,dq \\ &\qquad\qquad\qquad\qquad\qquad \times (y-\widehat{y})\,\pi(d\widehat{y}\, dy)\,ds \\ &\qquad=\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top \, B \, (y-\widehat{y})\,\pi(d\widehat{y}\, dy). \end{aligned} \end{equation*} \notag $$
The latter transformation is valid by Proposition A.2 of [27].

Hence, integrating (4.7) with respect to $\pi(d\widehat{x} \,dx)$ we derive the relation

$$ \begin{equation*} \begin{aligned} \, &\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},\widehat{m})\bigr)\, \pi(d\widehat{x}\, dx) \\ &=\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top \, A \, (x-\widehat{x})\,\pi(d\widehat{x}\, dx) \\ &\qquad+\int_{\mathbb R^d} (x-\widehat{x})^\top\,\pi(d\widehat{x}\, dx) \, B \, \int_{\mathbb R^d} (y-\widehat{y})\,\pi(d\widehat{y}\, dy). \end{aligned} \end{equation*} \notag $$
Using the form of $\pi$ and the optimality of the transport $P$, we infer the equality
$$ \begin{equation*} \begin{aligned} \, & \int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top \bigl(f(x,m)-f(\widehat{x},\widehat{m})\bigr)\, \pi(d\widehat{x}\, dx) \\ &\qquad =\int_{\mathbb R^d} (P(\widehat{x})-\widehat{x})^\top \, A \,(P(\widehat{x})-\widehat{x})\, \widehat{m}(d\widehat{x}) \\ &\qquad\qquad +\int_{\mathbb R^d} (P(\widehat{x})-\widehat{x})^\top\,\widehat{m}(d\widehat{x}) \, B \, \int_{\mathbb R^d} (P(\widehat{y})-\widehat{y})\,\widehat{m}(d\widehat{y}). \end{aligned} \end{equation*} \notag $$
By (4.2) and (4.5) the expression on the right-hand side is nonpositive. The lemma is proved.

We now turn to the proof of the theorem.

Proof of Theorem 4.4. We verify that the assumptions in Theorem 3.4 are satisfied and show that
$$ \begin{equation*} \phi(m)=\frac12 W_2^2(m,\widehat{m}) \end{equation*} \notag $$
is a superdifferentiable Lyapunov function. Owing to the assumptions of Theorem 4.4, it suffices to verify only the condition on the superdifferential of $\phi$.

According to Proposition 4.3, applied to the function $\phi(\,{\cdot}\,)=\frac12 W_2^2(\,{\cdot}\,,\widehat{m})$, the set $\partial^+_b\phi(m)$ contains $\widehat{\pi}^1(\,{\cdot}\,)$ for $\pi \in \Pi_{\mathrm o}(\widehat{m},m)$. We consider the integral

$$ \begin{equation*} \int_{\mathbb R^d} \widehat{\pi}^1(x) \cdot f(x,m)\,m(dx) \end{equation*} \notag $$
for $m \in \mathcal{P}_2(\mathbb R^d)$. We write $\widehat{\pi}^1$ as follows:
$$ \begin{equation*} \begin{aligned} \, \int_{\mathbb R^d} \widehat{\pi}^1(x) \cdot f(x,m)\,m(dx) &=\int_{\mathbb R^d} \biggl( \int_{\mathbb R^d} (x-\widehat{x})^\top\,\pi(d\widehat{x}\mid x) f(x,m) \biggr)\, m(dx) \\ &=\int_{\mathbb R^d} \int_{\mathbb R^d} (x-\widehat{x})^\top f(x,m)\,\pi(d\widehat{x}\mid x)\,m(dx) \\ &=\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top f(x,m)\,\pi(d\widehat{x}\, dx). \end{aligned} \end{equation*} \notag $$
Lemmas 4.5 and 4.6 yield
$$ \begin{equation*} \int_{\mathbb R^d} \widehat{\pi}^1(x) f(x,m)\,m(dx) =\int_{\mathbb R^d\times\mathbb R^d} (x-\widehat{x})^\top (f(x,m)-f(\widehat{x},\widehat{m}))\, \pi(d\widehat{x}\, dx) \leqslant 0. \end{equation*} \notag $$
So the equilibrium $\widehat{m}$ is stable by Theorem 3.4. Theorem 4.4 is proved.

4.1. Example: a system of connected simple pendulums

Let $d=2n$ and $x_1,x_2 \in \mathbb{R}^n$. We introduce the following notation:

where $\mathbb{O}$ and $\mathbb{I}$ are the zero and identity $n \times n$ matrices, respectively. Note that
$$ \begin{equation*} Ax=\begin{pmatrix} H_1(x)\\ -H_2(x) \end{pmatrix} =\begin{pmatrix} x_2\\ -x_1 \end{pmatrix}. \end{equation*} \notag $$
In the case when $n > 1$, a vector composed of multidimensional quantities is interpreted as a vertical concatenation, squaring is understood as taking the scalar square, and $H_1$ (respectively, $H_2$) denotes the vector of partial derivatives of $H$ with respect to the components of $x_1$ (respectively, $x_2$).

Consider the system

$$ \begin{equation} \dot{x}=Ax+\int_{\mathbb R^d} By\,m(dy) \triangleq f(x,m), \end{equation} \tag{4.8} $$
where $B$ is a negative definite matrix. Note that the first term in this equation describes the free motion of a pendulum, whereas the second specifies the effect of all other pendulums on the distinguished one. Thus, (4.8) describes a system of connected simple pendulums. This system is associated with the continuity equation
$$ \begin{equation} \partial_t m_t+\operatorname{div} \biggl( \biggl(Ax+\int_{\mathbb R^d} By\,m_t(dy)\biggr) m_t \biggr) =0. \end{equation} \tag{4.9} $$

Let $\widehat{m}$ be the Gibbs measure generated by the Hamiltonian $H(x)$. Recall that it has the density

$$ \begin{equation*} g(x) \triangleq \frac1{Z(\beta)}e^{-\beta H(x)}, \end{equation*} \notag $$
where $Z(\beta)$ is a normalizing coefficient, and that its first moment is zero. We show that $\widehat{m}$ is an equilibrium for equation (4.9), that is,
$$ \begin{equation*} \int_{\mathbb R^d} \nabla_x \varphi(x) \cdot \biggl( Ax+\int_{\mathbb R^d} By\,\widehat{m}(dy) \biggr)\, \widehat{m}(dx) =0. \end{equation*} \notag $$
By the above properties of the Gibbs measure, it suffices to prove that
$$ \begin{equation} \int_{\mathbb R^d} (\nabla_x \varphi(x) \cdot Ax)\, g(x)\,dx =0. \end{equation} \tag{4.10} $$
Integrating the left-hand side by parts, we arrive at the relation
$$ \begin{equation*} \int_{\mathbb R^d} (\nabla_x \varphi(x) \cdot Ax) g(x)\,dx =-\int_{\mathbb R^d} \operatorname{div}_x ( Ax \cdot g(x)) \varphi(x)\,dx. \end{equation*} \notag $$
The Gibbs measure is invariant with respect to the Hamiltonian dynamics; therefore, its density satisfies Liouville’s equation (see, for example, [28]):
$$ \begin{equation*} \operatorname{div}_x ( Ax \cdot g(x))=0. \end{equation*} \notag $$
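Indeed, this can be verified directly: since $\operatorname{div}_x(Ax)=\operatorname{tr} A=0$ and $\nabla_x g(x)=-\beta g(x)\, \nabla_x H(x)$, we have
$$ \begin{equation*} \operatorname{div}_x ( Ax \cdot g(x)) =g(x) \operatorname{div}_x(Ax)+\nabla_x g(x) \cdot Ax =-\beta g(x)\, \nabla_x H(x) \cdot Ax =0, \end{equation*} \notag $$
where the last equality holds because $Ax$ is the Hamiltonian vector field of $H$, so that $\nabla_x H(x) \cdot Ax=0$.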
It follows that (4.10) holds.

Now we verify the assumptions of Theorem 4.4. Since $y^\top A y=0$ for every $y \in \mathbb R^d$ (because $Ay=(y_2,-y_1)^\top$ for $y=(y_1,y_2)$), the first term below vanishes, and for an arbitrary nonzero square-integrable function $\xi\colon \mathbb R^d \to \mathbb R^d$ we have

$$ \begin{equation*} \begin{aligned} \, &\int_{\mathbb R^d} \xi(\widehat{x})^\top A \xi(\widehat{x})\,\widehat{m}(d\widehat{x}) +\int_{\mathbb R^d} \xi(\widehat{x})^\top\,\widehat{m}(d\widehat{x}) B \int_{\mathbb R^d} \xi(\widehat{y})\,\widehat{m}(d\widehat{y}) \\ &\qquad =\int_{\mathbb R^d} \xi(\widehat{x})^\top\,\widehat{m}(d\widehat{x}) B \int_{\mathbb R^d} \xi(\widehat{y})\,\widehat{m}(d\widehat{y}). \end{aligned} \end{equation*} \notag $$
Since the matrix $B$ is negative definite, this expression is nonpositive, so condition (4.2) is satisfied.

Consequently, the equilibrium $\widehat{m}$ and the function $f$ satisfy the assumptions of Theorem 4.4. This implies the stability of $\widehat{m}$.

§ 5. Auxiliary assertions

5.1. The properties of trajectories of solutions of the nonlocal continuity equation

Proof of Proposition 2.15. By the definition of a solution of (2.3),
$$ \begin{equation*} \mathsf{X}^{0,x_*}_{m_{\cdot}}(t) =x_* +\int_0^t f(\mathsf{X}^{0,x_*}_{m_{\cdot}}(\tau),m_\tau)\,d\tau. \end{equation*} \notag $$
Calculating the $L_p(\mathbb R^d,m_*;\mathbb R^d)$-norms of both sides and applying Minkowski’s inequality to the right-hand side twice, we arrive at the estimate
$$ \begin{equation} \begin{aligned} \, \notag &\biggl( \int_{\mathbb R^d} \|\mathsf{X}^{0,x_*}_{m_{\cdot}}(t)\|^p\,m_*(dx_*) \biggr)^{1/p} \leqslant \biggl( \int_{\mathbb R^d} \|x_*\|^p\,m_*(dx_*) \biggr)^{1/p} \\ &\qquad\qquad +\int_0^t \biggl( \int_{\mathbb R^d} \|f(\mathsf{X}^{0,x_*}_{m_{\cdot}}(\tau),m_\tau)\|^p\, m_*(dx_*)\biggr)^{1/p} d\tau. \end{aligned} \end{equation} \tag{5.1} $$
It follows from Remark 2.11 and the triangle inequality that
$$ \begin{equation*} \begin{aligned} \, &\biggl( \int_{\mathbb R^d} \|f(\mathsf{X}^{0,x_*}_{m_{\cdot}}(\tau),m_\tau)\|^p\,m_*(dx_*)\biggr)^{1/p} \\ &\qquad \leqslant \biggl( \int_{\mathbb R^d} C_1^p(1+\|\mathsf{X}^{0,x_*}_{m_{\cdot}}(\tau)\|+\varsigma_p(m_\tau))^p\, m_*(dx_*)\biggr)^{1/p} \\ &\qquad \leqslant C_1 \biggl(1+\biggl( \int_{\mathbb R^d} \|\mathsf{X}^{0,x_*}_{m_{\cdot}}(\tau)\|^p\,m_*(dx_*) \biggr)^{1/p}+\varsigma_p(m_\tau) \biggr). \end{aligned} \end{equation*} \notag $$

Consider the map

$$ \begin{equation*} \chi_2\colon z \mapsto \mathsf{X}^{0,z}_{m_{\cdot}}(s). \end{equation*} \notag $$
Then using Proposition 8.1.8 in [22] we obtain the relations
$$ \begin{equation*} \begin{aligned} \, \biggl( \int_{\mathbb R^d} \|\mathsf{X}^{0,x_*}_{m_{\cdot}}(s)\|^p\,m_*(dx_*) \biggr)^{1/p} &=\biggl( \int_{\mathbb R^d} \|x_*\|^p\,(\chi_2\sharp m_*)(dx_*) \biggr)^{1/p} \\ & =\biggl( \int_{\mathbb R^d} \|x_*\|^p\,m_s(dx_*) \biggr)^{1/p} =\varsigma_p(m_s) \end{aligned} \end{equation*} \notag $$
for an arbitrary moment of time $s$.

Thus, inequality (5.1) takes the form

$$ \begin{equation*} \varsigma_p(m_t) \leqslant \varsigma_p(m_*) +C_1 t+\int_0^t 2 C_1 \varsigma_p(m_\tau)\,d\tau. \end{equation*} \notag $$
Finally, using the Gronwall–Bellman inequality in the integral form and the conditions imposed on $\alpha$ and $T$ in Proposition 2.15, we conclude that
$$ \begin{equation*} \varsigma_p(m_t) \leqslant (C_1 T+\alpha) e^{2 C_1 T}. \end{equation*} \notag $$
The last estimate depends only on the constant $\alpha$ and the terminal time $T$. Proposition 2.15 is proved.

Proposition 5.1. Assume that $T > 0$, $m_*$ and $\alpha$ are such that $\varsigma_p(m_*) \leqslant \alpha$, and $m_{\cdot} \in \Gamma_T(\mathcal{P}_p(\mathbb R^d))$ is a solution of the initial value problem (2.1), (2.2). Then there exists a function $G_3(T)$ such that

$$ \begin{equation*} \|\mathsf{X}^{s,z}_{m_{\cdot}}(r) - z\| \leqslant G_3(T) (1+\|z\|+G_1(T,\alpha)) (r-s) \end{equation*} \notag $$
for any $0 \leqslant s \leqslant r \leqslant T$ and $z \in \mathbb R^d$.

Proof. By the definition of a solution we have
$$ \begin{equation*} \mathsf{X}^{s,z}_{m_{\cdot}}(r) =z+\int_{s}^{r} f(\mathsf{X}^{s,z}_{m_{\cdot}}(\tau),m_\tau)\, d\tau. \end{equation*} \notag $$
Regrouping the terms, calculating the norm of both sides, and using Remark 2.11, we infer the relations
$$ \begin{equation*} \begin{aligned} \, \|\mathsf{X}^{s,z}_{m_{\cdot}}(r) - z\| &=\biggl\| \int_{s}^{r} f(\mathsf{X}^{s,z}_{m_{\cdot}}(\tau),m_\tau)\, d\tau \biggr\| \\ &\leqslant \int_{s}^{r} \| f(\mathsf{X}^{s,z}_{m_{\cdot}}(\tau),m_\tau)\| \,d\tau \leqslant C_1 \int_{s}^{r} (1+\|\mathsf{X}^{s,z}_{m_{\cdot}}(\tau)\|+\varsigma_p(m_\tau)) \,d\tau. \end{aligned} \end{equation*} \notag $$
We add and subtract $z$ in the term $\mathsf{X}^{s,z}_{m_{\cdot}}(\tau)$ on the right-hand side; then, using the triangle inequality and Proposition 2.15, we deduce the inequalities
$$ \begin{equation*} \begin{aligned} \, \|\mathsf{X}^{s,z}_{m_{\cdot}}(r) - z\| &\leqslant C_1 \int_{s}^{r} (1+\|z\|+\varsigma_p(m_\tau))\, d\tau+C_1 \int_{s}^{r} \|\mathsf{X}^{s,z}_{m_{\cdot}}(\tau)-z\|\, d\tau \\ &\leqslant C_1 (1+\|z\|+G_1(T,\alpha)) (r-s)+C_1 \int_{s}^{r} \|\mathsf{X}^{s,z}_{m_{\cdot}}(\tau)-z\| \,d\tau. \end{aligned} \end{equation*} \notag $$
Applying the Gronwall–Bellman inequality in the integral form we obtain
$$ \begin{equation*} \begin{aligned} \, \|\mathsf{X}^{s,z}_{m_{\cdot}}(r) - z\| &\leqslant C_1 (1+\|z\|+G_1(T,\alpha)) (r-s) \exp\biggl(\int_{s}^{r} C_1\,d\tau\biggr) \\ &\leqslant C_1 e^{C_1 T} (1+\|z\|+G_1(T,\alpha)) (r-s). \end{aligned} \end{equation*} \notag $$
The first two factors in the last estimate depend only on the terminal time $T$, so one can take $G_3(T)=C_1 e^{C_1 T}$. The proposition is proved.

Proposition 5.2. Assume that $T > 0$, $m_*$ and $\alpha$ are such that $\varsigma_p(m_*) \leqslant \alpha$, and $m_{\cdot} \in \Gamma_T(\mathcal{P}_p(\mathbb R^d))$ is a solution of the initial value problem (2.1), (2.2). Then there is a function $G_4(T,\alpha)$ such that

$$ \begin{equation*} W_p(m_s,m_r) \leqslant G_4(T,\alpha)(r-s) \end{equation*} \notag $$
for any $0 \leqslant s \leqslant r \leqslant T$.

Proof. By definition we have
$$ \begin{equation*} W_p(m_s,m_r) =\biggl( \inf_{\pi \in \Pi(m_s,m_r)} \int_{\mathbb R^d\times\mathbb R^d} \|x - y\|^p\,\pi(dx\,dy) \biggr)^{1/p}. \end{equation*} \notag $$
We introduce the mapping
$$ \begin{equation*} \chi_3\colon z \mapsto (z,\,\mathsf{X}^{s,z}_{m_{\cdot}}(r)) \end{equation*} \notag $$
and choose the plan
$$ \begin{equation*} \pi=\chi_3\sharp m_s. \end{equation*} \notag $$
Then
$$ \begin{equation*} \begin{aligned} \, W_p(m_s,m_r) &\leqslant \biggl( \int_{\mathbb R^d\times\mathbb R^d} \|x - y\|^p\,(\chi_3\sharp m_s)(dx\,dy) \biggr)^{1/p} \\ &=\biggl( \int_{\mathbb R^d} \|z - \mathsf{X}^{s,z}_{m_{\cdot}}(r)\|^p\,m_s(dz) \biggr)^{1/p}. \end{aligned} \end{equation*} \notag $$
By Propositions 2.15 and 5.1 and Minkowski’s inequality
$$ \begin{equation*} W_p(m_s,m_r) \leqslant G_3(T) (r-s) (1+\varsigma_p(m_s)+G_1(T,\alpha)) \leqslant G_3(T) (r-s) (1+2G_1(T,\alpha)). \end{equation*} \notag $$
The latter bound depends only on the constant $\alpha$ and the terminal time $T$, so one can take $G_4(T,\alpha)=G_3(T)(1+2G_1(T,\alpha))$. The proposition is proved.
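
The coupling argument above admits a simple numerical illustration in dimension one, where the $p$th Wasserstein distance between empirical measures with equal weights is computed by sorting the samples. In the Python sketch below (an illustration only) the map playing the role of $\mathsf{X}^{s,\cdot}_{m_{\cdot}}(r)$ is a hypothetical placeholder; the sketch merely confirms that the plan $\pi=\chi_3\sharp m_s$ yields an upper bound for $W_p(m_s,m_r)$.

import numpy as np

rng = np.random.default_rng(1)
N, p = 5000, 2

# Empirical measure m_s and a hypothetical stand-in for the flow map z -> X^{s,z}(r).
z = rng.normal(size=N)
X = 0.8 * z + 0.3 * np.sin(3.0 * z)

# Upper bound produced by the plan pi = chi_3 sharp m_s = (Id, X) sharp m_s used in the proof.
upper = np.mean(np.abs(z - X) ** p) ** (1.0 / p)

# Exact W_p between one-dimensional empirical measures with equal weights (sorted samples).
exact = np.mean(np.abs(np.sort(z) - np.sort(X)) ** p) ** (1.0 / p)

print(exact, upper, exact <= upper + 1e-12)        # the coupling gives a valid upper bound
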
Proof of Proposition 2.16. According to Assumption 2.10 and Propositions 5.1 and 5.2, we have
$$ \begin{equation*} \begin{aligned} \, &\|f(\mathsf{X}^{s,x}_{m_{\cdot}}(r),m_r) - f(x,m_s)\| \leqslant C_0 (\|\mathsf{X}^{s,x}_{m_{\cdot}}(r) - x\|+W_p(m_s,m_r)) \\ &\qquad \leqslant C_0\bigl(G_3(T) (1+\|x\|+G_1(T,\alpha))+G_4(T,\alpha)\bigr) (r-s). \end{aligned} \end{equation*} \notag $$
Calculating the $L_p(\mathbb R^d,m_s;\mathbb R^d)$-norms of both sides and then applying Minkowski’s inequality and Proposition 2.15 to the right-hand side we obtain
$$ \begin{equation*} \begin{aligned} \, &\biggl( \int_{\mathbb R^d} \|f(\mathsf{X}^{s,x}_{m_{\cdot}}(r),m_r) - f(x,m_s)\|^p\,m_s(dx) \biggr)^{1/p} \\ &\qquad\leqslant C_0 \bigl( G_3(T) (1+\varsigma_p(m_s)+G_1(T,\alpha))+ G_4(T,\alpha)\bigr) (r-s) \\ &\qquad\leqslant C_0 \bigl( G_3(T) (1+2G_1(T,\alpha))+G_4(T,\alpha) \bigr) (r-s). \end{aligned} \end{equation*} \notag $$
The latter bound depends only on the constant $\alpha$ and the terminal time $T$. The proposition is proved.

5.2. The properties of derivatives and superdifferentials in the space of probability measures

5.2.1. The properties of the intrinsic derivative

Proof of Proposition 2.21. Replacing the variable of integration in the second term on the left-hand side of (2.5) by $y_*$, we rewrite it in the form
$$ \begin{equation*} \int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y)\,m(dy) - \int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*)\,m_*(dy_*). \end{equation*} \notag $$
We transform the first term as follows:
$$ \begin{equation*} \begin{aligned} \, & \int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y)\,m(dy) \\ &\qquad =\int_{\mathbb R^d} \int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y)\,\pi(dy_*\,|\,y)\,m(dy) \\ &\qquad=\int_{\mathbb R^d\times\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y)\,\pi(dy_*\,dy). \end{aligned} \end{equation*} \notag $$
We transform the second term as follows:
$$ \begin{equation*} \begin{aligned} \, &\int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*)\,m_*(dy_*) \\ &\qquad=\int_{\mathbb R^d} \int_{\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*)\,\pi(dy\,|\,y_*)\,m_*(dy_*) \\ &\qquad=\int_{\mathbb R^d\times\mathbb R^d} \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*)\,\pi(dy_*\,dy). \end{aligned} \end{equation*} \notag $$
Hence the left-hand side of (2.5) can be rewritten as
$$ \begin{equation*} \int_{\mathbb R^d\times\mathbb R^d} \biggl( \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y) - \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*) \biggr)\,\pi(dy_*\,dy). \end{equation*} \notag $$
By Lagrange's formula in integral form (the Newton–Leibniz formula applied along the segment joining $y_*$ and $y$),
$$ \begin{equation*} \begin{aligned} \, &\frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y) - \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*) \\ &\qquad=\int_0^1 \nabla_y \frac{\delta \Phi}{\delta m}((1-s)m_*+sm,y_*+q(y-y_*))\,dq (y-y_*) \\ &\qquad=\int_0^1 \nabla_m \Phi((1-s)m_*+sm,y_*+q(y-y_*))\,dq (y-y_*). \end{aligned} \end{equation*} \notag $$
Substituting this expression into the previous formula yields the assertion of Proposition 2.21.
Proof of Proposition 2.24. By [21], Proposition 2.2.3, we have
$$ \begin{equation*} \lim_{\tau \to 0} \frac{\phi((\operatorname{Id}+\tau b)\sharp m) - \phi(m)}\tau =\int_{\mathbb R^d} \nabla_m \phi(m,y) b(y)\,m(dy). \end{equation*} \notag $$
This is equivalent to the relation
$$ \begin{equation*} \frac{\phi((\operatorname{Id}+\tau b)\sharp m) - \phi(m)}\tau =\int_{\mathbb R^d} \nabla_m \phi(m,y) b(y)\,m(dy)+\xi(\tau), \end{equation*} \notag $$
where $\xi\colon \mathbb{R} \to \mathbb{R}$ is such that $\xi(\tau) \to 0$ as $\tau \to 0$. Then
$$ \begin{equation*} \phi((\operatorname{Id}+\tau b)\sharp m) - \phi(m) =\int_{\mathbb R^d} \nabla_m \phi(m,y) \tau b(y)\,m(dy)+\tau \xi(\tau). \end{equation*} \notag $$
The proposition is proved.
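The relation used in this proof can be checked by a finite-difference experiment for the functional $\phi(m)=\displaystyle\int\|x\|^2\,m(dx)$, for which $\nabla_m\phi(m,y)=2y$. The Python sketch below (an illustration only) compares the difference quotient along $(\operatorname{Id}+\tau b)\sharp m$ with the integral on the right-hand side for an empirical measure.

import numpy as np

rng = np.random.default_rng(2)
N, d = 20000, 3
x = rng.normal(size=(N, d))                        # samples of m


def phi(pts):                                      # phi(m) = int ||x||^2 m(dx) on an empirical measure
    return np.mean(np.sum(pts ** 2, axis=1))


grad_m = 2.0 * x                                   # nabla_m phi(m, y) = 2y evaluated at the samples
b = np.sin(x)                                      # a bounded test vector field b
tau = 1e-4

lhs = (phi(x + tau * b) - phi(x)) / tau            # difference quotient along (Id + tau b) sharp m
rhs = np.mean(np.sum(grad_m * b, axis=1))          # int <nabla_m phi(m, y), b(y)> m(dy)
print(lhs, rhs)                                    # the two values agree up to O(tau)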

5.2.2. The strong Fréchet superdifferential and barycentric superdifferential

We assume in this subsection that:

We introduce the notation

$$ \begin{equation*} \mathcal{P}_{pq}(X \times X) \triangleq \{ \nu \in \mathcal{P}(X \times X)\colon \mathfrak{p}^1\sharp\nu \in \mathcal{P}_p(X), \mathfrak{p}^2\sharp\nu \in \mathcal{P}_q(X) \}. \end{equation*} \notag $$

Definition 5.3. We say that $\boldsymbol{\mu} \in \mathcal{P}(X \times X \times X)$ is a 3-plan for measures $\nu \in \mathcal{P}_{pq}(X \times X)$ and $\mu_3 \in \mathcal{P}_p(X)$ if

$$ \begin{equation*} \mathfrak{p}^{1}\sharp\nu =\mu_1,\qquad \mathfrak{p}^{1,2}\sharp\boldsymbol{\mu} =\nu\quad\text{and} \quad \mathfrak{p}^{1,3}\sharp\boldsymbol{\mu} \in \Pi(\mu_1,\mu_3). \end{equation*} \notag $$
The set of all 3-plans is denoted by $\Pi^3(\nu,\mu_3)$.

Definition 5.4 (see [22], Definition 10.3.1). Let $\mu_1 \in \mathcal{P}_p(X)$, and let $\phi\colon \mathcal{P}_p(X) \to \mathbb{R}$ be an upper semicontinuous functional. A measure $\alpha \in \mathcal{P}_{pq}(X \times X)$ is said to belong to the strong Fréchet superdifferential $\partial^+\phi(\mu_1)$ of $\phi$ at $\mu_1$ if for any measure $\mu_3 \in \mathcal{P}_p(X)$ and any 3-plan $\boldsymbol{\mu} \in \Pi^3(\alpha,\mu_3)$ there exists a function $\zeta\colon \mathbb{R}_+\to \mathbb{R}$ such that:

Proposition 5.5. Assume that $m \in \mathcal{P}_2(X)$ and a function $\phi\colon \mathcal{P}_2(X) \to \mathbb{R}$ has a nonempty strong Fréchet superdifferential $\partial^+\phi(m)$. Then the barycentre $\displaystyle \int_X x_2\,\alpha(dx_2\,|\, x_1)$ of any $\alpha \in \partial^+\phi(m)$ belongs to the barycentric superdifferential $\partial^+_b\phi(m)$.

Proof. Let $\alpha \in \partial^+\phi(m)$. Set
$$ \begin{equation*} \mu_1 \triangleq m,\qquad \mu_3 \triangleq (\operatorname{Id}+\tau b)\sharp m\quad\text{and}\quad \boldsymbol{\mu} \triangleq (\operatorname{Id},\,\mathfrak{p}^1+\tau b \circ \mathfrak{p}^1) \sharp \alpha \end{equation*} \notag $$
for arbitrary $\tau > 0$ and $b \in L_2(X,\mu_1;X)$. In this case
$$ \begin{equation*} W_p(\mu_1,\mu_3) \leqslant \tau \|b\|_{L_2(X,\mu_1;X)}. \end{equation*} \notag $$
We introduce the notation
$$ \begin{equation*} \xi(\tau) \triangleq \|b\|_{L_2(X,\mu_1;X)} \cdot \zeta(W_p(\mu_1,\mu_3)). \end{equation*} \notag $$
By construction $\xi(\tau) \to 0$ as $\tau \to 0$. Then by Definition 5.4
$$ \begin{equation*} \begin{aligned} \, \phi(\mu_3) - \phi(\mu_1) &\leqslant \int_{X^3} x_2^\top (x_3-x_1)\,\boldsymbol{\mu}(dx_1\,dx_2\,dx_3)+W_p(\mu_1,\mu_3) \zeta(W_p(\mu_1,\mu_3)) \\ &=\int_{X^2} x_2^\top \tau b(x_1)\,\alpha(dx_1\,dx_2)+\tau \xi(\tau). \end{aligned} \end{equation*} \notag $$
Finally, disintegrating the measure $\alpha$, we arrive at the estimate
$$ \begin{equation*} \phi(\mu_3) - \phi(\mu_1) \leqslant \int_X \biggl( \int_X x_2^\top\,\alpha(dx_2\,|\, x_1) \biggr) \tau b(x_1) \,\mu_1(dx_1)+\tau \xi(\tau). \end{equation*} \notag $$
The proposition is proved.

5.3. The boundedness of the barycentric differential

The following property of uniform boundedness (in a suitable sense) of barycentric superdifferentials in the case $p=2$ is also of interest.

Proposition 5.6. Assume that $m \in \mathcal{P}_2(\mathbb R^d)$ and $\alpha$ are such that $\varsigma_2(m) \leqslant \alpha$. Also assume that a function $\phi\colon \mathcal{P}_2(\mathbb R^d) \to \mathbb{R}$ is locally Lipschitz and superdifferentiable at $m$. Then

$$ \begin{equation*} \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})} \leqslant K_\alpha \end{equation*} \notag $$
for all $\gamma \in \partial^+_b\phi(m)$, where $K_\alpha$ is the Lipschitz constant of $\phi$ on the ball of radius $\alpha$.

Proof. We set $b=-\gamma^\top$ in the definition of the superdifferential. Then there exists a function $\xi\colon \mathbb{R} \to \mathbb{R}$ such that
$$ \begin{equation*} \begin{aligned} \, \phi((\operatorname{Id} - \tau \gamma^\top)\sharp m) - \phi(m) &\leqslant - \tau \int_{\mathbb R^d} \gamma^\top(x) \gamma(x)\,m(dx)+\tau \xi(\tau) \\ &=- \tau \| \gamma \|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}^2+\tau \xi(\tau) \end{aligned} \end{equation*} \notag $$
for any $\tau > 0$. In view of Proposition 2.1 we note that
$$ \begin{equation*} \forall\, \epsilon > 0\ \exists\, \theta > 0 \ \forall\, \tau \in [0,\theta] \quad \varsigma_2((\operatorname{Id} - \tau \gamma^\top)\sharp m) \leqslant \alpha+\epsilon. \end{equation*} \notag $$

The local Lipschitz property yields the inequalities

$$ \begin{equation*} \begin{aligned} \, \tau \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}^2 - \tau\xi(\tau) &\leqslant -\bigl( \phi((\operatorname{Id} - \tau \gamma^\top)\sharp m) - \phi(m) \bigr) \\ &\leqslant \bigl| \phi((\operatorname{Id} - \tau \gamma^\top)\sharp m) - \phi(m) \bigr| \\ &\leqslant K_{\alpha+\epsilon} W_2((\operatorname{Id} - \tau \gamma^\top)\sharp m,m) \\ &\leqslant K_{\alpha+\epsilon} \tau \biggl( \int_{\mathbb R^d} \|\gamma(x)\|^2 m(dx) \biggr)^{1/2}. \end{aligned} \end{equation*} \notag $$
Now, dividing by $\tau$ and letting it tend to $0$, we obtain
$$ \begin{equation*} \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}^2 \leqslant K_{\alpha+\epsilon} \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}. \end{equation*} \notag $$
Finally, letting $\epsilon$ tend to $0$ and using the right continuity of the Lipschitz constant, we deduce the estimate
$$ \begin{equation*} \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}^2 \leqslant K_\alpha \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}. \end{equation*} \notag $$
Dividing by $\|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})}$ (the required inequality is trivial when $\gamma=0$), we obtain the desired bound. The proposition is proved.
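Proposition 5.6 can be illustrated for the functional $\phi(m)=\displaystyle\int\|x\|^2\,m(dx)$: its intrinsic derivative $\nabla_m\phi(m,y)=2y$ serves as a barycentric supergradient, and since $|\varsigma_2(m)-\varsigma_2(m')|\leqslant W_2(m,m')$, the constant $K_\alpha$ on the ball $\{\varsigma_2\leqslant\alpha\}$ can be taken equal to $2\alpha$. The Python sketch below (an illustration only) checks the resulting inequality on an empirical measure.

import numpy as np

rng = np.random.default_rng(3)
N, d, alpha = 50000, 2, 2.0

# phi(m) = int ||x||^2 m(dx); its barycentric supergradient is gamma(x) = 2x, and since
# |varsigma_2(m) - varsigma_2(m')| <= W_2(m, m'), one may take K_alpha = 2*alpha on {varsigma_2 <= alpha}.
x = rng.normal(size=(N, d))
x *= 0.9 * alpha / np.sqrt(np.mean(np.sum(x ** 2, axis=1)))    # rescale so that varsigma_2(m) <= alpha

gamma_norm = np.sqrt(np.mean(np.sum((2.0 * x) ** 2, axis=1)))  # ||gamma||_{L_2(m)} = 2 * varsigma_2(m)
K_alpha = 2.0 * alpha
print(gamma_norm, K_alpha, gamma_norm <= K_alpha)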

An analogous assertion holds for subdifferentials; it is proved similarly.

Proposition 5.7. Assume that $m \in \mathcal{P}_2(\mathbb R^d)$ and $\alpha > 0$ are such that $\varsigma_2(m) \leqslant \alpha$. Also assume that a function $\phi\colon \mathcal{P}_2(\mathbb R^d) \to \mathbb{R}$ is locally Lipschitz and subdifferentiable at $m$. Then

$$ \begin{equation*} \|\gamma\|_{L_2(\mathbb R^d,m;\mathbb{R}^{d*})} \leqslant K_\alpha \end{equation*} \notag $$
for all $\gamma \in \partial^-_b\phi(m)$, where $K_\alpha$ is the Lipschitz constant of $\phi$ on the ball of radius $\alpha$.

§ 6. Conclusions

In this paper we have considered an autonomous dynamical system in the space of probability measures specified by the nonlocal continuity equation. Sufficient conditions for local stability have been obtained on the basis of Lyapunov's second method. The results obtained here can be extended to further questions of interest, such as the stability analysis of individual trajectories and global stability problems.


Bibliography

1. V. C. Boffi and G. Spiga, “Continuity equation in the study of nonlinear particle-transport evolution problems”, Phys. Rev. A, 29:2 (1984), 782–790
2. M. Tucci and M. Volonteri, “Constraining supermassive black hole evolution through the continuity equation”, Astronom. and Astrophys., 600 (2017), A64, 17 pp.
3. L. W. Botsford, “Optimal fishery policy for size-specific, density-dependent population models”, J. Math. Biol., 12:3 (1981), 265–293
4. C. di Mario, N. Meneveau, R. Gil, P. Jaegere, P. J. de Feyter, C. J. Slager, J. R. T. C. Roelandt and P. W. Serruys, “Maximal blood flow velocity in severe coronary stenoses measured with a Doppler guidewire: limitations for the application of the continuity equation in the assessment of stenosis severity”, Am. J. Cardiol., 71:14 (1993), D54–D61
5. A. Pluchino, V. Latora and A. Rapisarda, “Changing opinions in a changing world: a new perspective in sociophysics”, Internat. J. Modern Phys. C, 16:4 (2005), 515–531
6. L. Ambrosio and G. Crippa, “Continuity equations and ODE flows with non-smooth velocity”, Proc. Roy. Soc. Edinburgh Sect. A, 144:6 (2014), 1191–1244
7. B. Bonnet and H. Frankowska, “Viability and exponentially stable trajectories for differential inclusions in Wasserstein spaces”, 2022 IEEE 61st conference on decision and control (CDC) (Cancun 2022), IEEE, 2022, 5086–5091
8. C. D'Apice, R. Manzo, B. Piccoli and V. Vespri, “Lyapunov stability for measure differential equations”, Math. Control Relat. Fields, 14:4 (2024), 1391–1407
9. A. M. Lyapunov, General stability problem for motions, D.Sc. dissertation, Imperial Moscow University, Moscow, 1892 (Russian); French transl., A. Liapounoff, “Problème général de la stabilité du mouvement”, Ann. Fac. Sci. Univ. Toulouse (2), 9 (1907), 203–474
10. A. M. Lyapunov, “On the stability question for motions”, Soobshch. Kharkov. Mat. Obshch. (2), 3:1 (1893), 265–272 (Russian)
11. I. G. Malkin, Stability theory for motions, Gostekhizdat, Moscow–Leningrad, 1952, 432 pp. (Russian); German transl., J. G. Malkin, Theorie der Stabilität einer Bewegung, R. Oldenbourg, München, 1959, xiii+402 pp.
12. N. N. Krasovskii, Stability of motion. Applications of Lyapunov's second method to differential systems and equations with delay, Stanford Univ. Press, Stanford, CA, 1963, vi+188 pp.
13. C. Seis, “Optimal stability estimates for continuity equations”, Proc. Roy. Soc. Edinburgh Sect. A, 148:6 (2018), 1279–1296
14. I. Karafyllis and M. Krstic, “Stability results for the continuity equation”, Systems Control Lett., 135 (2020), 104594, 9 pp.
15. D. Shevitz and B. Paden, “Lyapunov stability theory of nonsmooth systems”, IEEE Trans. Automat. Control, 39:9 (1994), 1910–1914
16. C. Calcaterra, Dynamics and geometry on metric spaces: flows and foliations, Preprint at ResearchGate: 359061756, 2019, xvi+471 pp.
17. Yu. Orlov, Nonsmooth Lyapunov analysis in finite and infinite dimensions, Comm. Control Engrg. Ser., Springer, Cham, 2020, xix+340 pp.
18. A. N. Michel, Ling Hou and Ling Liu, Stability of dynamical systems. On the role of monotonic and non-monotonic Lyapunov functions, Systems Control Found. Appl., 2nd ed., Birkhäuser/Springer, Cham, 2015, xvii+653 pp.
19. F. H. Clarke, Yu. S. Ledyaev, R. J. Stern and P. R. Wolenski, Nonsmooth analysis and control theory, Grad. Texts in Math., 178, Springer-Verlag, New York, 1998, xiv+276 pp.
20. J.-P. Aubin, A. M. Bayen and P. Saint-Pierre, Viability theory. New directions, 2nd ed., Springer, Heidelberg, 2011, xxii+803 pp.
21. P. Cardaliaguet, F. Delarue, J.-M. Lasry and P.-L. Lions, The master equation and the convergence problem in mean field games, Ann. of Math. Stud., 201, Princeton Univ. Press, Princeton, NJ, 2019, x+212 pp.
22. L. Ambrosio, N. Gigli and G. Savaré, Gradient flows in metric spaces and in the space of probability measures, Lectures Math. ETH Zürich, Birkhäuser Verlag, Basel, 2005, viii+333 pp.
23. F. Santambrogio, Optimal transport for applied mathematicians. Calculus of variations, PDEs, and modeling, Progr. Nonlinear Differential Equations Appl., 87, Birkhäuser/Springer, Cham, 2015, xxvii+353 pp.
24. V. I. Bogachev, Measure theory, v. II, Springer-Verlag, Berlin, 2007, xiv+575 pp.
25. Yu. V. Averboukh, “A mean field type differential inclusion with upper semicontinuous right-hand side”, Vestn. Udmurt. Univ. Mat. Mekh. Komp'yut. Nauki, 32:4 (2022), 489–501
26. W. Gangbo and A. Tudorascu, “On differentiability in the Wasserstein space and well-posedness for Hamilton–Jacobi equations”, J. Math. Pures Appl. (9), 125 (2019), 119–174
27. Yu. Averboukh and D. Khlopin, Pontryagin maximum principle for the deterministic mean field type optimal control problem via the Lagrangian approach, arXiv: 2207.01892
28. V. V. Kozlov, Thermal equilibrium in the sense of Gibbs and Poincaré, Institut Komp'yuternykh Issledovanii, Moscow–Izhevsk, 2002, 320 pp. (Russian)
