On the multiplicative Chung-Diaconis-Graham process
I. D. Shkredov
Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia
Abstract:
We study the lazy Markov chain on $\mathbb{F}_p$ defined as follows: $X_{n+1}=X_n$ with probability $1/2$ and $X_{n+1}=f(X_n) \cdot \varepsilon_{n+1}$, where the $\varepsilon_n$ are random variables distributed uniformly on the set $\{\gamma, \gamma^{-1}\}$, $\gamma$ is a primitive root and $f(x)=x/(x-1)$ or $f(x)=\mathrm{ind}(x)$. Then we show that the mixing time of $X_n$ is $\exp(O(\log p \cdot \log \log \log p/ \log \log p))$. Also, we obtain an application to an additive-combinatorial question concerning a certain Sidon-type family of sets.
Bibliography: 34 titles.
Keywords:
Markov chains, Chung-Diaconis-Graham process, mixing time, incidence geometry, Sidon sets.
Received: 13.07.2022 and 09.11.2022
§ 1. Introduction

1.1. The Chung-Diaconis-Graham process [6] is the random walk on $\mathbb{F}_p$ (or, more generally, on $\mathbb{Z}/n\mathbb{Z}$ for composite $n$) defined by
$$
\begin{equation}
X_{j+1}=a X_j+\varepsilon_{j+1} ,
\end{equation}
\tag{1.1}
$$
where $a\in \mathbb{F}_p^*$ is a fixed residue and the random variables $\varepsilon_j$ are independent and identically distributed (in the original paper [6] the variables $\varepsilon_j$ were distributed uniformly on the set $\{-1,0,1\}$ and $a=2$). This process has been studied extensively; see [4], [6]–[10] and other papers. In our article we are interested in the following characteristic of $X_n$, which is called the mixing time and is defined by
$$
\begin{equation*}
t_{\mathrm{mix}}(\varepsilon) :=\inf \biggl\{n\colon \max_{A \subseteq \mathbb {F}_p} \biggl| \mathrm{P} (X_n \in A)-\frac{|A|}{p} \biggr| \leqslant \varepsilon\biggr\} .
\end{equation*}
\notag
$$
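To make the definition of $t_{\mathrm{mix}}$ concrete, here is a minimal numerical sketch (an illustration added here, not part of the original argument): it iterates the transition operator of the process (1.1) with $a=2$ and $\varepsilon_j$ uniform on $\{-1,0,1\}$ on the full probability vector and reports the first $n$ with $\max_A |\mathrm{P}(X_n\in A)-|A|/p|\leqslant 1/4$; the prime $p=101$ and the helper names are illustrative choices of ours.

```python
import numpy as np

def cdg_step(dist, p, a=2, eps=(-1, 0, 1)):
    """Apply one step of X_{j+1} = a*X_j + eps_{j+1} (mod p) to a distribution vector."""
    new = np.zeros(p)
    for x in range(p):
        for e in eps:
            new[(a * x + e) % p] += dist[x] / len(eps)
    return new

def distance_to_uniform(dist, p):
    """max_A |P(X_n in A) - |A|/p| equals half of the L1 distance to the uniform distribution."""
    return 0.5 * np.abs(dist - 1.0 / p).sum()

p = 101
dist = np.zeros(p)
dist[0] = 1.0                      # deterministic start X_0 = 0
n = 0
while distance_to_uniform(dist, p) > 0.25:
    dist = cdg_step(dist, p)
    n += 1
print("t_mix(1/4) for p =", p, ":", n)   # small, in line with the O(log p * log log p) bound
```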
Usually one takes a particular value of the parameter $\varepsilon$, for example, $\varepsilon=1/4$, and below we speak about $t_{\mathrm{mix}} := t_{\mathrm{mix}} (1/4)$. A simple random walk on $\mathbb{F}_p$ has mixing time $t_{\mathrm{mix}}$ of order $p^2$ (see [16]), and it was shown in [6] (also see the recent paper [7]) that the mixing time of the process (1.1) is at most $O(\log p \cdot \log \log p)$. Hence the Chung-Diaconis-Graham process is an example of a speedup phenomenon, that is, of a sharp decrease in the time of convergence. In [8] a more general nonlinear version of the Chung-Diaconis-Graham process was studied, defined by
$$
\begin{equation}
X_{j+1}=f(X_j)+\varepsilon_{j+1} ,
\end{equation}
\tag{1.2}
$$
where $f$ is a bijection of $\mathbb{F}_p$. In particular, it was proved that for rational functions of bounded degree (defined appropriately at the poles; see [8]) the mixing time is
$$
\begin{equation}
t_{\mathrm{mix}}\biggl(\frac 14\biggr)=O(p^{1+\varepsilon}) \quad \forall\, \varepsilon>0.
\end{equation}
\tag{1.3}
$$
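The following Monte-Carlo sketch (again only an illustration, with arbitrary parameter choices of ours) simulates a nonlinear chain of the form (1.2) with the bijection $f(x)=x^3$, which is a bijection of $\mathbb{F}_p$ since $p\equiv 2 \pmod 3$ here, and prints a noisy estimate of the distance to uniformity after several numbers of steps.

```python
import random
from collections import Counter

p = 101                                   # p = 2 (mod 3), so x -> x^3 is a bijection of F_p

def f(x):
    """An illustrative rational bijection of F_p of bounded degree: f(x) = x^3."""
    return pow(x, 3, p)

def empirical_distance(n_steps, n_samples=50_000):
    """Monte-Carlo estimate of max_A |P(X_n in A) - |A|/p| for the chain (1.2); noisy but indicative."""
    counts = Counter()
    for _ in range(n_samples):
        x = 0
        for _ in range(n_steps):
            x = (f(x) + random.choice((-1, 0, 1))) % p
        counts[x] += 1
    return 0.5 * sum(abs(counts[x] / n_samples - 1 / p) for x in range(p))

for n_steps in (1, 2, 4, 8, 16, 32):
    print(n_steps, round(empirical_distance(n_steps), 3))
```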
Perhaps, for the process (1.2) the correct answer is $t_{\mathrm{mix}} = O(\log p)$, but it has been obtained only in the case when $f(x) = 1/x$ for $x\neq 0$ and $f(0)=0$: see [9]. The proof was based on the method of $\mathrm{SL}_2(\mathbb{F}_p)$-action from [3]. In [4] it was asked whether other explicit examples of Markov chains with low mixing time could be provided. Our paper is devoted to a multiplicative form of the Chung-Diaconis-Graham process. Multiplicative variants of this process were studied in [1], [11], [12], [15] and other papers. Consider the family of functions
$$
\begin{equation}
f^{\alpha, \beta}_*(x)=\frac{x}{\alpha x+\beta},
\end{equation}
\tag{1.4}
$$
where $\alpha, \beta \neq 0$. Most of our results below do not depend on $\alpha$ and $\beta$, so we do not indicate these parameters in such cases. In Theorems 1 and 7 we need $f^{\alpha,\beta}_* (x)$ to be a bijection, so we put $f^{\alpha,\beta}_* (-\beta/\alpha):= 1/\alpha$. Moreover, Theorems 1 and 7 do not depend on the particular choice of $(\alpha,\beta)$, so one can take $\alpha =1$ and $\beta=-1$ and write $f_* (x) := f^{1,-1}_* (x)$. Let us formulate a particular case of our main result.

Theorem 1. Let $p$ be a prime number and $\gamma \in \mathbb{F}_p^*$ be a primitive root. Also, let $\varepsilon_{j}$ be random variables distributed uniformly on $\{\gamma,\gamma^{-1}\}$. Consider the lazy Markov chain $0\neq X_0,X_1,\dots,X_n,\dots$ defined by
$$
\begin{equation}
X_{j+1}= \begin{cases} f_* (X_{j}) \cdot \varepsilon_{j+1} & \textit {with probability } \dfrac12, \\ X_{j} & \textit {with probability } \dfrac12 . \end{cases}
\end{equation}
\tag{1.5}
$$
Then for any $c>0$ and any $n = c \exp(\log p \cdot \log \log \log p/\log \log p)$ one has
$$
\begin{equation*}
\| P_n-U\| :=\max_{A \subseteq \mathbb {F}^*_p} \biggl| \mathrm{P} (X_n \in A)- \frac{|A|}{p-1} \biggr| \leqslant K e^{-Kc} ,
\end{equation*}
\notag
$$
where $K>0$ is an absolute constant. The same is true for the chain $X_{j+1}=f_*(X_{j}) \cdot \varepsilon_{j+1}$, where the $\varepsilon_j$ are random variables distributed uniformly on $\{ 1, \gamma^{-1}, \gamma\}$.

In other words, the mixing time of our Markov chain is at most $\exp(O(\log p \cdot \log \log \log p/\log \log p))$. Using a similar method we obtain the same bound for the chain in which $f_*$ is replaced by $\mathrm{ind}(x)$, and for the chain of the form (1.2) with $f(x)=\exp(x)$; see Theorem 9 and formulae (3.21), (3.22) below (the functions $\mathrm{ind}$ and $\exp$ are defined in § 2). We have also considered other chains (which may even be more interesting than (1.5)), namely,
$$
\begin{equation}
X_{j+1}= \begin{cases} \mathrm{ind} (X_{j}) \cdot \varepsilon_{j+1} & \text {with probability } \dfrac12, \\ X_{j} & \text {with probability }\dfrac12 \end{cases}
\end{equation}
\tag{1.6}
$$
($X_0 \neq 0$) or as in (1.2) for $f(x)= \exp(x)$, namely,
$$
\begin{equation}
X_{j+1}= \begin{cases} \exp (X_{j})+\varepsilon_{j+1} & \text {with probability }\dfrac12, \\ X_{j} & \text {with probability } \dfrac12. \end{cases}
\end{equation}
\tag{1.7}
$$
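For concreteness, here is a small simulation sketch of the lazy chains (1.5) and (1.6) (the additive chain (1.7) is analogous, with $\exp(x)=g^x$ and additive noise); the prime $p=103$, the brute-force search for a primitive root and the helper names are illustrative choices of ours, and the convention $f_*(1):=1$ matches the convention $f_*^{\alpha,\beta}(-\beta/\alpha):=1/\alpha$ stated above.

```python
import random

p = 103

def is_primitive_root(g, p):
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

g = next(x for x in range(2, p) if is_primitive_root(x, p))   # a fixed primitive root
ind = {pow(g, k, p): k for k in range(1, p)}                   # ind(x) in {1,...,p-1}, x = g^ind(x)

def f_star(x):
    """f_*(x) = x/(x-1) on F_p^*, with the convention f_*(1) := 1."""
    return x if x == 1 else x * pow(x - 1, p - 2, p) % p

gamma, gamma_inv = g, pow(g, p - 2, p)     # eps_j is uniform on {gamma, gamma^{-1}}

def lazy_step(x, f):
    """One step of the lazy chain: stay put with probability 1/2, otherwise move to f(x)*eps."""
    if random.random() < 0.5:
        return x
    return f(x) * random.choice((gamma, gamma_inv)) % p

x = 1
for _ in range(1000):
    x = lazy_step(x, f_star)               # chain (1.5); pass lambda t: ind[t] instead for chain (1.6)
print("state of chain (1.5) after 1000 lazy steps:", x)
```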
As a byproduct, we show that in the case when $f(x)=x^2$ and $p \equiv 3 \pmod 4$ the mixing time of (1.2) is actually $O(p\log p)$: see Remark 2. We underline again that the expected order of $t_{\mathrm{mix}}$ in all these problems is probably $O(\log p)$, but this can be a hard question (especially in view of some special constructions in the affine group which show that there are families of so-called ‘rich’ transformations having exactly subexponential lower bounds for the number of incidences; see [17], Theorem 15).

Our approach is not analytic, unlike that of [8]; it uses some methods from additive combinatorics and incidence geometry. In particular, we apply some results on growth in the affine group $\mathrm{Aff}(\mathbb{F}_p)$. The core of our article has much more in common with [3] and [27] than with [8], although we extensively use the general line of the proof from [8]. From the additive-combinatorial point of view the main innovation is a series of asymptotic formulae for the incidences of points and lines, which were obtained via the action of $\mathrm{Aff}(\mathbb{F}_p)$; see the beginning of § 3. The author hopes that such formulae are interesting in their own right. It is well known (see, for example, [3], [17], [18], [22], [26]–[28] and [30]) that incidence geometry and the sum-product phenomenon sometimes work better than classical analytic methods, and this is why it is possible to break the square-root barrier, which corresponds to the natural bound (1.3) (for details see Theorem 8 and the proofs of Theorems 7 and 9).

It turns out that the same method is applicable to a purely additive-combinatorial question on Sidon sets. Sidon sets are a classical subject in combinatorial number theory; see, for example, the survey [19]. Recall that a subset $S$ of an abelian group $\mathbf{G}$ with group operation $*$ is called a $g$-Sidon set if for any $z\neq 1$ the equation $z=x*y^{-1}$, where $x,y\in S$, has at most $g$ solutions. If $g=1$, then we arrive at the classical definition of Sidon sets [29]. Given an arbitrary set $A\subseteq \mathbf{G}$, we denote by $\mathsf{Sid}^*(A)$ the size of the maximal (in cardinality) Sidon subset of the set $A$, and we write $\mathsf{Sid}^*_K(A)$ for the size of the maximal $K$-Sidon subset of $A$. It is known [14] (also see [24]) that for any subset $A$ of our abelian group $\mathbf{G}$ the following estimate holds:
$$
\begin{equation*}
\mathsf{Sid}^*(A) \gg \sqrt{|A|},
\end{equation*}
\notag
$$
and Klurman and Pohoata, as described in [13], asked about a possible improvement of the last bound when two different operations on a ring $\mathbf{G}$ are available. In [28] the present author obtained the following result.

Theorem 2. Let $A\subseteq \mathbb{F}$ be a set, where $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{F}_p$ (in the prime field case suppose, in addition, that $|A|<\sqrt{p}$). Then there exist absolute constants $c>0$ and $K\geqslant 1$ such that
$$
\begin{equation}
\max \{\mathsf{Sid}^{+}_K (A), \mathsf{Sid}_K^{\times}(A) \} \gg |A|^{1/2+c}.
\end{equation}
\tag{1.8}
$$
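As a toy illustration of the quantities $\mathsf{Sid}(A)$ and $\mathsf{Sid}_K(A)$ (nothing here is used in the proofs), the following sketch greedily extracts a $g$-Sidon subset of a given finite set of integers; the greedy subset only gives a lower bound for the maximal one, and the interval of length $64$ is an arbitrary example of ours.

```python
from itertools import combinations
from collections import Counter

def is_g_sidon(S, g):
    """Check the g-Sidon property in (Z, +): every nonzero difference of elements of S occurs
    at most g times (for the multiplicative version one would count quotients instead)."""
    diffs = Counter(x - y for x, y in combinations(sorted(S), 2))   # one sign of each difference suffices
    return all(c <= g for c in diffs.values())

def greedy_sidon_subset(A, g=1):
    """Greedily build a g-Sidon subset of A; its size is a lower bound for Sid_g(A)."""
    S = []
    for a in sorted(A):
        if is_g_sidon(S + [a], g):
            S.append(a)
    return S

A = list(range(1, 65))                 # an interval of length 64
S = greedy_sidon_subset(A)
print(len(S), S)                       # compare len(S) with sqrt(|A|) = 8, cf. Sid(A) >> sqrt(|A|)
```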
As concerns upper bounds for (1.8), see [20] and [28]. Notice that $\mathsf{Sid}_K^{\times}(A) = \mathsf{Sid}_K^{+}(\log (A))$ and $\mathsf{Sid}_K^{+}(A) = \mathsf{Sid}_K^{\times}(\exp (A))$ for $A\subseteq \mathbb{R}^+$. Hence it is possible to rewrite the bound (1.8) in terms of a single operation. Now we consider a general question, which was mentioned by Warren at the CANT-2021 conference (see [34]).

Problem 1. Let $f$ and $g$ be some ‘nice’ (say, convex or concave) functions. Is it true that for any set $A\subset \mathbb{R}^+$ one has
$$
\begin{equation*}
\max \{\mathsf{Sid}^{+}_K (A), \mathsf{Sid}_K^{+}(f(A)) \}\gg |A|^{1/2+c}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\max \{\mathsf{Sid}^{\times}_K (A), \mathsf{Sid}_K^{\times}(g(A)) \} \gg |A|^{1/2+c} ?
\end{equation*}
\notag
$$
Here $c>0$ and $K\geqslant 1$ are some absolute constants. What can be said in the case when $K$ is exactly equal to $1$, and what is the optimal value of $c>0$?

In this paper we obtain an affirmative answer for $g(x)=x+1$ and $f(x)=\exp(x)$, where in the case of $\mathbb{F}_p$ the latter function is defined by $\exp(x) := g^x$ for a fixed primitive root $g$.

Theorem 3. Let $A\subseteq \mathbb{F}$ be a set, where $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{F}_p$ (in the prime field case suppose, in addition, that $|A|<\sqrt{p}$). Then there exist absolute constants $c>0$ and $K\geqslant1$ such that
$$
\begin{equation}
\max \{\mathsf{Sid}^{\times}_K (A), \mathsf{Sid}_K^{\times}(A+1) \} \gg |A|^{1/2+c}
\end{equation}
\tag{1.9}
$$
and
$$
\begin{equation}
\max \{\mathsf{Sid}^{+}_K (A), \mathsf{Sid}_K^{+}(\exp(A)) \} \gg |A|^{1/2+c} .
\end{equation}
\tag{1.10}
$$
On the other hand, for any integer $k\geqslant 1$ there exists $A \subseteq \mathbb{F}$ such that
$$
\begin{equation}
\max \{\mathsf{Sid}^{\times}_k (A), \mathsf{Sid}^{\times}_k (A+1) \} \ll k^{1/2} |A|^{3/4} .
\end{equation}
\tag{1.11}
$$
We thank Jimmy He for very useful discussions and valuable suggestions. Also, we thank the referee for valuable comments.
§ 2. Definitions and preliminaries

We let $\mathbf{G}$ denote an abelian group. Sometimes we indicate the group operation by writing $+$ or $\times$ on the quantities under consideration (such as the energy, the representation function and so on; see below). Let $\mathbb{F}$ be the field $\mathbb{R}$, or let $\mathbb{F}=\mathbb{F}_p = \mathbb{Z}/p\mathbb{Z}$ for a prime number $p$. Let $\mathbb{F}^* = \mathbb{F} \setminus \{0\}$. We use the same upper-case letter to denote a set $A\subseteq \mathbb{F}$ and its characteristic function $A\colon \mathbb{F} \to \{0,1 \}$, and in the case of finite $\mathbb{F}$ we write $f_A (x) := A(x) - |A|/|\mathbb{F}|$ for the balanced function of $A$. Given two sets $A,B\subset \mathbf{G}$, we define the sumset of $A$ and $B$ by
$$
\begin{equation*}
A+B:=\{a+b\colon a\in{A},\,b\in{B}\}.
\end{equation*}
\notag
$$
In a similar way we define the difference sets and higher sumsets: for example, ${2A-A}$ is $A+A-A$. We write $\dotplus$ for a direct sum, so that $A\dotplus B=\{{a+b} \colon {a\in A}, {b\in B}\}$ and $a+b=a'+b'$ implies that $a=a'$ and $b=b'$. We also use the notation $A \times B=\{(a,b)\colon a\in A,\, b\in B\} \subseteq \mathbf G \times \mathbf G$ for the Cartesian product of $A$ and $B$. For an abelian group $\mathbf{G}$ the Plünnecke-Ruzsa inequality (see, for example, [32]) holds, namely,
$$
\begin{equation}
|nA-mA| \leqslant \biggl( \frac{|A+A|}{|A|} \biggr)^{n+m} |A| ,
\end{equation}
\tag{2.1}
$$
where $n$ and $m$ are arbitrary positive integers. For representation functions we use notation like $r_{A+B}(x)$ or $r_{A-B}(x)$ and so on; they count the number of ways $x \in \mathbf{G}$ can be expressed as a sum $a+b$ or as $a-b$ for $a\in A$ and $b\in B$, respectively. For example, $|A| = r_{A-A} (0)$. In a similar way, given some functions $f_1,\dots,f_k\colon\mathbf{G} \to \mathbb{C}$, we write $r_{f_1+\dots+f_k}(x) = \sum_{x_1+\dots+x_k=x} f_1(x_1) \dotsb f_k(x_k)$. For two sets $A,B \subseteq \mathbf{G}$ the additive energy of $A$ and $B$ is defined by
$$
\begin{equation*}
\mathsf{E} (A,B)=\mathsf{E}^{+} (A,B)=\bigl|\{(a_1,a_2,b_1,b_2) \in A\times A \times B \times B \colon a_1-b_1=a_2-b_2 \}\bigr| .
\end{equation*}
\notag
$$
If $A=B$, then we simply write $\mathsf{E}(A)$ for $\mathsf{E}(A,A)$. More generally, for sets $A_1,\dots, A_{2k}$ belonging to an arbitrary (noncommutative) group $\mathbf{G}$ and $k\geqslant 2$ we define the energy $\mathsf{T}_{k} (A_1,\dots, A_{2k})$ by
$$
\begin{equation}
\begin{aligned} \, \notag &\mathsf{T}_{k} (A_1,\dots, A_{2k}) = \bigl|\bigl\{(a_1, \dots, a_{2k}) \in A_1 \times \dots \times A_{2k} \colon \\ &\qquad\qquad a_1 a^{-1}_2 \dotsb a_{k-1} a^{-1}_k=a_{k+1} a^{-1}_{k+2} \dotsb a_{2k-1} a^{-1}_{2k} \bigr\}\bigr| \end{aligned}
\end{equation}
\tag{2.2}
$$
for even $k$, while for odd $k$ we set
$$
\begin{equation}
\begin{aligned} \, \notag &\mathsf{T}_{k} (A_1,\dots, A_{2k}) =\bigl|\bigl\{(a_1, \dots, a_{2k}) \in A_1 \times \dots \times A_{2k} \colon \\ &\qquad\qquad a_1 a^{-1}_2 \dotsb a^{-1}_{k-1} a_k=a_{k+1} a^{-1}_{k+2} \dotsb a^{-1}_{2k-1} a_{2k} \bigr\}\bigr| . \end{aligned}
\end{equation}
\tag{2.3}
$$
In a similar way, for functions $f_1,\dots, f_{2k} \colon \mathbf G \to \mathbb{C}$ (say, for even $k$) we set
$$
\begin{equation*}
\mathsf{T}_{k} (f_1,\dots, f_{2k}) =\sum_{a_1 a^{-1}_2 \dotsb a_{k-1} a^{-1}_k=a_{k+1} a^{-1}_{k+2} \dotsb a_{2k-1} a^{-1}_{2k}} f_1(a_1) \dotsb f_{2k} (a_{2k}) .
\end{equation*}
\notag
$$
If we want to underline the group operation $+$, then we use the notation $\mathsf{T}^{+}_{k} (f_1,\dots, f_{2k})$, so that (for even $k$, for instance)
$$
\begin{equation*}
\mathsf{T}^{+}_{k} (f_1,\dots, f_{2k}) =\sum_{a_1 -a_2+\dots+a_{k-1}-a_k=a_{k+1}-a_{k+2}+\dots+ a_{2k-1}-a_{2k}} f_1(a_1) \dotsb f_{2k} (a_{2k}) .
\end{equation*}
\notag
$$
We write $\mathsf{T}_k (f)$ instead of $\mathsf{T}_{k} (f,\dots, f)$. In the abelian case, for $k\geqslant 2$ put
$$
\begin{equation}
\mathsf{E}^{+}_k (A) :=\sum_{x\in \mathbf G} r^k_{A-A}(x)=\sum_{\alpha_1, \dots, \alpha_{k-1} \in \mathbf G} |A\cap (A+\alpha_1) \cap \dots \cap (A+\alpha_{k-1})|^2.
\end{equation}
\tag{2.4}
$$
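The definitions above are easy to probe numerically; the following brute-force sketch (illustrative only, with our own helper names and sample sets) computes $r_{A-B}$, the additive energy $\mathsf{E}^+(A,B)$ and the higher energies $\mathsf{E}^+_k(A)$ and $\widehat{\mathsf{E}}^+_k(A)$ for an arithmetic progression and a lacunary set; note that $\mathsf{E}^+(A,A)$ coincides with $\mathsf{E}^+_2(A)$.

```python
from collections import Counter

def r_minus(A, B):
    """r_{A-B}(x): the number of representations x = a - b with a in A and b in B."""
    return Counter(a - b for a in A for b in B)

def E_plus(A, B):
    """E^+(A,B): the number of quadruples with a1 - b1 = a2 - b2, i.e. sum_x r_{A-B}(x)^2."""
    return sum(c * c for c in r_minus(A, B).values())

def E_k(A, k):
    """E^+_k(A) = sum_x r_{A-A}(x)^k, as in (2.4)."""
    return sum(c ** k for c in r_minus(A, A).values())

def E_hat_k(A, k):
    """hat{E}^+_k(A) = sum_x r_{A+A}(x)^k."""
    return sum(c ** k for c in Counter(a + b for a in A for b in A).values())

A = list(range(16))                              # arithmetic progression: energies close to the maximum
B = [2 ** j for j in range(16)]                  # lacunary set: energies close to the minimum |A|^k
for S in (A, B):
    print(len(S), E_plus(S, S), E_k(S, 2), E_k(S, 3), E_hat_k(S, 2))
```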
(The reader can find identity (2.4), for instance, in [23].) Clearly, $|A|^k \leqslant \mathsf{E}^{+}_k (A) \leqslant |A|^{k+1}$. We also write $\widehat{\mathsf{E}}^{+}_k (A) = \sum_x r^k_{A+A}(x)$. Let $\mathrm{ord}(x)$ denote the multiplicative order of an element $x\in \mathbb{F}^*_p$, and let $\mathrm{ind}(x)$ be defined by $x=g^{\mathrm{ind}(x)}$, where $g$ is a fixed primitive root of $\mathbb{F}_p^*$. It is convenient for us to think that the function $\mathrm{ind}(x)$ takes values from $1$ to $p-1$, so that $\mathrm{ind}(x)$ is defined on $\mathbb{F}_p^*$. In a similar way we denote by $\exp\colon\mathbb{F}^*_p \to \mathbb{F}_p^*$ the function $\exp(x) = g^x$, where $x\in \mathbb{F}^*_p$. Let $\mathrm{Aff}(\mathbb{F})$ be the group of transformations $x\to ax+b$, where $a\in \mathbb{F}^*$ and $b\in \mathbb{F}$. Sometimes we write $(a,b)\in \mathrm{Aff}(\mathbb{F})$ for the map $x\to ax+b$. The symbols $\ll$ and $\gg$ are the usual Vinogradov symbols; when the constants in them depend on a parameter $M$, we write $\ll_M$ and $\gg_M$. All logarithms are to base $2$. If we have a set $A$, $|A|>1$, then we write $a \lesssim b$ or $b \gtrsim a$ if $a = O(b \cdot \log^c |A|)$, where $c>0$ is an unspecified constant which may change from line to line.

We now mention several useful results to which we will appeal in the text. We start with a result from [27].

Lemma 1. Let $f_1,\dots,f_{2k}\colon \mathbf{G} \to \mathbb{C}$ be some functions. Then
$$
\begin{equation}
\mathsf{T}^{2k}_{k} (f_1,\dots, f_{2k}) \leqslant \prod_{j=1}^{2k} \mathsf{T}_{k} (f_j).
\end{equation}
\tag{2.5}
$$
The next lemma (Lemma 2) concerns collinear quadruples in the set $A\times A$, that is, $4$-tuples of points $(x_1,l(x_1)), \dots, (x_4,l(x_4))$, where $x_1,\dots,x_4\in A$, $l\in \mathrm{Aff}(\mathbb{F}_p)$ and $l(x_1), l(x_2), l(x_3), l(x_4) \in A$; it was proved in [18], pp. 604–607. We denote the number of these collinear quadruples by $\mathsf{Q}(A)$ and rewrite the asymptotic formula for $\mathsf{Q}(A)$, namely,
$$
\begin{equation}
\mathsf{Q}(A)=\frac{|A|^8}{p^2}+O(|A|^5 \log |A|)
\end{equation}
\tag{2.6}
$$
(see [18], Theorem 11, (2)), in the convenient form (2.7), which can be used in further research. Finally, notice that an affine transformation of $\mathbb{F}_p$ can be identified with a line in $\mathbb{F}_p \times \mathbb{F}_p$ by means of its graph.

Lemma 2. Let $A\subseteq \mathbb{F}_p$ be a set, and let $f_A (x) = A(x)-|A|/p$. Then
$$
\begin{equation}
\mathsf{Q} (f_A):=\sum_{l \in \mathrm{Aff}(\mathbb {F}_p)} \biggl| \sum_x f_A(x) f_A(lx) \biggr|^4 \ll |A|^5 \log |A| ,
\end{equation}
\tag{2.7}
$$
where the sum over $l$ in the last formula is taken over all affine transformations (lines) with at least two points in $A\times A$.

Finally, we need Theorem A from [2], which we repeat as Theorem 4 below. A better dependence of $\delta'$ on $\delta$, of the form $\delta/(C_1 \log (C_2r/\delta))^k$, where $C_1,C_2>0$ are some constants, can be found in [26], Theorem 46 and Corollary 47. Also, we have shown in [26] that one actually does not need the condition $|A_j|\geqslant p^\delta$, $j\in[k]$.

Theorem 4. Let $k\geqslant 2$, and let $A_1, \dots, A_k \subseteq \mathbb{F}_p$ be some sets such that $|A_j|\geqslant p^\delta$, $j\in [k]$. Suppose that $\prod_{j=1}^k |A_j| \geqslant p^{1+\delta}$. Then for any $\lambda \neq 0$ one has
$$
\begin{equation*}
\sum_{x_1\in A_1,\dots, x_k \in A_k} e^{2\pi i \lambda x_1 \dotsb x_k} \ll p^{-\delta'} |A_1|\dotsb |A_k| ,
\end{equation*}
\notag
$$
where $\delta' > (\delta/k)^{Ck}$ and $C>0$ is an absolute constant. In particular, for any $l \geqslant 2$ and $A_1 = \dots = A_k = A$ the inequality $\mathsf{T}^{+}_{l} (r_{f_A^k}) \leqslant |A|^{k(2l-1)} p^{-(2l-2)\delta'}$ holds. Recall that by definition $r_{f_A^k}(x) = \sum_{x_1\dotsb x_k = x} f_A(x_1)\dotsb f_A(x_k)$, where $f_A (x)$ is the balanced function of the set $A$.

We observe that it is possible to estimate $\mathsf{T}^{+}_{l} (r_{f_A^k})$ directly, as was done in [25], Theorem 5 (also see [17]), and thereby improve the bound $\mathsf{T}^{+}_{l} (r_{f_A^k}) \leqslant |A|^{k(2l-1)} p^{-(2l-2)\delta'}$. We leave these calculations to the interested reader, because they would complicate the proof while improving the final bound on $t_{\mathrm{mix}}$ just by $\log \log \log p$. Nevertheless, we use Theorem 5, which is a simplified version of Theorem 5 in [25], in our last section.

Theorem 5. Let $A,B\subseteq \mathbb{F}_p$ be sets such that $|AB|\leqslant M|A|$, let $k\geqslant 2$, and assume that $|B| \gtrsim_k M^{2^{k+1}}$. Then
$$
\begin{equation}
\mathsf{T}^{+}_{2^k} (A) \lesssim_k M^{2^{k+1}} \biggl( \frac{|A|^{2^{k+1}}}{p}+ |A|^{2^{k+1}-1} |B|^{-(k-1)/2} \biggr) .
\end{equation}
\tag{2.8}
$$
§ 3. The proof of the main result

We start with a counting result. Let $\mathcal{P} \subseteq \mathbb{F}_p \times \mathbb{F}_p$ be a set of points and $\mathcal{L}$ be a set of lines in $\mathbb{F}_p \times \mathbb{F}_p$. Then the number of incidences between $\mathcal{P}$ and $\mathcal{L}$ is
$$
\begin{equation}
\mathcal{I} (\mathcal{P},\mathcal{L}) :=|\{(q,l)\in \mathcal{P} \times \mathcal{L}\colon q\in l\}| .
\end{equation}
\tag{3.1}
$$
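For small parameters the number of incidences (3.1) can be computed by brute force; the sketch below (an illustration with arbitrary random data, not a part of the argument) compares $\mathcal{I}(\mathcal{P},\mathcal{L})$ for a Cartesian product of points and a Cartesian-type family of affine maps with the main term $|\mathcal{P}|\,|\mathcal{L}|/p$, which is the quantity controlled by Proposition 1 below.

```python
import random
from itertools import product

p = 101

def incidences(points, lines, p):
    """I(P, L) = #{((x, y), (a, b)) : y = a*x + b mod p}, as in (3.1),
    where a line is identified with the affine map x -> a*x + b."""
    return sum((a * x + b - y) % p == 0 for (x, y), (a, b) in product(points, lines))

# Cartesian-product style data: points A x B, lines with slopes in C and intercepts in D
A = random.sample(range(p), 30)
B = random.sample(range(p), 30)
C = random.sample(range(1, p), 20)
D = random.sample(range(p), 20)
P = [(x, y) for x in A for y in B]
L = [(a, b) for a in C for b in D]
main_term = len(P) * len(L) / p
print(incidences(P, L, p), round(main_term, 1))   # the difference is the error term in Proposition 1
```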
As above, we write $f_{\mathcal L} (l)=\mathcal L(l)-{|\mathcal L|}/{|\mathrm{Aff}(\mathbb {F}_p)|}$ for the balanced function of the set of lines $\mathcal{L}$.

Proposition 1. Let $A,B \subseteq \mathbb{F}_p$ be sets and $\mathcal{L}$ be a set of affine transformations. Then for any positive integer $k$,
$$
\begin{equation}
\begin{aligned} \, \notag & \mathcal I (A\times B, \mathcal{L})-\frac{|A|\,|B|\,|\mathcal{L}|}{p} \\ &\qquad \ll \sqrt{|A|\,|B|\,|\mathcal L|}\, \max\Bigl\{ (\mathsf{T}_{2^k} (f_{\mathcal L}) |A| \log |A|)^{1/2^{k+2}}, \sqrt{|\mathcal{L}|} |A|^{-2^{-k}} \Bigr\} . \end{aligned}
\end{equation}
\tag{3.2}
$$
Proof. We have
$$
\begin{equation}
\mathcal I (A\times B, \mathcal{L})=\frac{|A|\,|B|\,|\mathcal{L}|}{p}+\sum_{x\in B} \sum_{l} f_{\mathcal L} (l) f_A (l x)=\frac{|A|\,|B|\,|\mathcal{L}|}{p}+\sigma .
\end{equation}
\tag{3.3}
$$
Thus we can use both balanced functions in (3.3) or we can choose just one of them. To estimate the error term $\sigma$ we use Hölder’s inequality several times as in [17] and [22], and obtain (below we use both balanced functions)
$$
\begin{equation*}
\sigma^2 \leqslant |B| \sum_{h} r_{f_{\mathcal{L}^{-1}} f_{\mathcal{L}} } (h) \sum_x f_A (x) f_A (h x)
\end{equation*}
\notag
$$
and, further,
$$
\begin{equation}
\sigma^{2^{k}} \leqslant |B|^{2^{k-1}} |A|^{2^{k-1}-1} \sum_{h} r_{(f_{\mathcal{L}^{-1}} f_{\mathcal{L}})^{2^{k-1}}} (h) \sum_x f_A (x) f_A (h x) .
\end{equation}
\tag{3.4}
$$
Here $\mathbf G$ is the (nonabelian) group $\mathrm{Aff}(\mathbb {F}_p)$, and we use the notation $r_f (x)$ for $f\colon \mathrm{Aff}(\mathbb {F}_p) \to \mathbb{C}$ and $x\in\mathrm{Aff}(\mathbb {F}_p)$. Recall that $r_{(f_{\mathcal{L}^{-1}} f_{\mathcal{L}})^{2^{k-1}}} (h)$ denotes the quantity
$$
\begin{equation*}
\sum_{x^{-1}_1 x'_1 \dotsb x^{-1}_{2^{k-1}} x'_{2^{k-1}}=h} f_{\mathcal{L}} (x_1) f_{\mathcal{L}} (x'_1) \dotsb f_{\mathcal{L}} (x_{2^{k-1}}) f_{\mathcal{L}} (x'_{2^{k-1}})
\end{equation*}
\notag
$$
in the last formula. We split the sum in (3.4) into the sum taken over the lines $h$ (that is, elements of $\mathrm{Aff}(\mathbb {F}_p)$) having at least two points in $A\times A$ and the sum of the remaining terms, which we denote by $\sigma_*$, and we suppose first that $\sigma_* \leqslant 2^{-1} \sigma$. Then applying Hölder’s inequality again we derive that
$$
\begin{equation}
\begin{aligned} \, \notag \sigma^{2^{k+2}} &\ll |B|^{2^{k+1}} |A|^{2^{k+1}-4} \biggl(\sum_{h} |r^{4/3}_{(f_{\mathcal{L}^{-1}} f_{\mathcal{L}})^{2^{k-1}}} (h)| \biggr)^3 \mathsf{Q}(f_A) \\ &\ll 2^{2^{k+1}} |B|^{2^{k+1}} |A|^{2^{k+1}-4} \mathsf{T}_{2^k} (f_{\mathcal L}) |\mathcal{L}|^{2^{k+1}} \mathsf{Q}(f_A), \end{aligned}
\end{equation}
\tag{3.5}
$$
as required, in view of Lemma 2. It remains to estimate $\sigma_*$. We have
$$
\begin{equation*}
\sum_x f_A (x) f_A (h x)=\sum_x A (x) A (h x)-\frac{|A|^2}{p}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
r_{(f_{\mathcal{L}^{-1}} f_{\mathcal{L}})^{2^{k-1}}} (h) =r_{(\mathcal{L}^{-1} \mathcal{L})^{2^{k-1}}} (h)- \frac{|\mathcal L|^{2^k}}{p(p-1)}
\end{equation*}
\notag
$$
(of course, here $(\mathcal{L}^{-1} \mathcal{L})^{2^{k-1}}$ means a product in $\mathrm{Aff}(\mathbb{F}_p)$). Denote by $\Omega$ the set of lines with at most one point in $A\times A$. We can assume that $|A| \geqslant 2\sqrt{p}$, because otherwise we obtain
$$
\begin{equation}
\sigma^{2^{k+2}}_* \ll 2^{2^{k+2}} |B|^{2^{k+1}} |A|^{2^{k+1}-4} |\mathcal{L}|^{2^{k+2}},
\end{equation}
\tag{3.6}
$$
and this corresponds to the second quantity under the maximum sign in (3.2). Now, from [33] (also see [26] or just formula (3.20) below) we see that $|\Omega|\ll p^3/|A|^2$. Returning to (3.3) and (3.4) we find that
$$
\begin{equation*}
\sum_{h\in \Omega} r_{(f_{\mathcal{L}^{-1}} f_{\mathcal{L}})^{2^{k-1}}} (h) \sum_x f_A (x) f_A (h x) \ll 2^{2^k} |\mathcal L|^{2^k}+\frac{|\mathcal L|^{2^k} |A|^2 |\Omega|}{p^2(p-1)} \ll 2^{2^k} |\mathcal L|^{2^k},
\end{equation*}
\notag
$$
and thus we obtain (3.6) again. This completes the proof.

The main advantage of the bound (3.2) in Proposition 1 is that we have an asymptotic formula for the number of incidences $\mathcal{I}(A\times B, \mathcal{L})$ (and the set $\mathcal{L}$ can be quite small), rather than a mere upper bound for $\mathcal{I}(\mathcal{P},\mathcal{L})$ as in [30]. An asymptotic formula for the quantity $\mathcal{I}(\mathcal{P},\mathcal{L})$ was known before in the specific case of large sets (see [33] or estimate (3.20) below) and in the case of Cartesian products, but for large sets of lines (see [26] and [30]). In the next lemma we estimate the energy $\mathsf{T}_k (\mathcal{L})$ for a concrete family of lines which appears in the proofs of the results of our paper.

Lemma 3. Let $A,B \subseteq \mathbb{F}^*_p$ be sets, and let $\mathcal{L} = \{(a,b)\colon a\in A,\, b\in B\}\subseteq \mathrm{Aff}(\mathbb{F}_p)$. Then for any $k\geqslant2$
$$
\begin{equation}
\mathsf{T}_k (f_{\mathcal{L}}) \leqslant |A|^{2k-1} \mathsf{T}^{+}_k (f_B) .
\end{equation}
\tag{3.7}
$$
Proof. We consider the case of even $k$; for odd $k$ the arguments are similar. One has $\mathcal{L}^{-1}\mathcal{L} = \{(a/c,(b-d)/c)\colon a,c\in A,\, b,d\in B\}$. Considering $\mathsf{T}_{2k}(f_{\mathcal{L}})$, we arrive at two equations, one for the slope and one for the $y$-intercept of our line. The first equation, for the slope, is
$$
\begin{equation}
\frac{a_1 \dotsb a_k}{c_1 \dotsb c_k} =\frac{a'_1 \dotsb a'_k}{c'_1 \dotsb c'_k} .
\end{equation}
\tag{3.8}
$$
If we fix all the variables $a_1, \dots, a_k, a'_1, \dots, a'_k, c_1, \dots, c_k, c'_1, \dots, c'_k \in A$, then the number of solutions of the second equation is $\mathsf{T}^{+}_{2k}(\alpha_1 f_B, \dots, \alpha_{2k} f_B)$, where $\alpha_1,\dots, \alpha_{2k} \in \mathbb{F}_p^*$ are some elements depending on the fixed variables. The last quantity is at most $\mathsf{T}^{+}_{2k}(f_B)$ by Lemma 1. Returning to (3.8) we obtain the required inequality. The lemma is proved.

Now we can obtain our first principal result.

Theorem 6. Let $A,B,X,Y_1,Y_2,Z \subseteq \mathbb{F}^*_p$ be sets with $A=X Y_1$, $B=X Y_2$, $|A|=|X|\,|Y_1|/K_*$ and $|B|=|X|\,|Y_2|/K_*$. Suppose that $|Z| \geqslant p^\delta$ for some $\delta \gg {\log \log \log p}/{\log \log p}$, $M\geqslant 2 \delta^{-1}$ and $|X Z^M|\leqslant K|X|$. Then for a positive absolute constant $C$ one has
$$
\begin{equation}
\bigl|\bigl\{(a,b)\in A \times B \colon a =f_* (b) \bigr\}\bigr| -\frac{K^2 K_*^2 |A|\,|B|}{p} \ll K K_* \sqrt{|A|\,|B|} \cdot p^{-\delta^{C/\delta}} .
\end{equation}
\tag{3.9}
$$
Proof. Let $\sigma$ be the quantity on the left-hand side of (3.9) and $k\geqslant 2$ be a parameter. Also, let $Q_1=AZ^M$ and $Q_2 = BZ^M$. Then $r_{Q_1 Z^{-M}}(a)=|Z|^M$ for any $a\in A$ and $r_{Q_2 Z^{-M}}(b)=|Z|^M$ for any $b\in B$. Notice that $|Q_1|\leqslant |X Z^M|\,|Y_1| \leqslant K|X|\,|Y_1|=K K_* |A|$ and a similar bound holds for $|Q_2|$. We have
$$
\begin{equation*}
|Z|^{2M} \sigma \leqslant \biggl|\biggl\{(q_1,q_2,z_1,z_2)\in Q_1\times Q_2 \times Z^{M} \times Z^M \colon \frac{q_1}{z_1} =f_* \biggl(\frac{q_2}{z_2}\biggr) \biggr\}\biggr| ,
\end{equation*}
\notag
$$
where $z_1$ and $z_2$ are taken with weights $r_{Z^M}(z_1)$ and $r_{Z^M}(z_2)$. Using the definition of the function $f_*$ we arrive at the equation
$$
\begin{equation}
\frac{q_1}{z_1}=\frac{q_2}{\alpha q_2+\beta z_2} \quad \Longrightarrow \quad \frac{z_1}{q_1}-\frac{\beta z_2}{q_2}=\alpha .
\end{equation}
\tag{3.10}
$$
The last equation can be interpreted as point/line incidences between the set of lines $\mathcal{L}$, where each $l\in \mathcal{L}$ has the form $l\colon z_1X-\beta z_2 Y = \alpha$, and the set of points $\mathcal{P} = Q^{-1}_1 \times Q^{-1}_2$. Applying Proposition 1, for any $k$ we obtain
$$
\begin{equation*}
\sigma-\frac{|Q_1|\,|Q_2|}{p} \ll |Z|^{-M} \sqrt{|Q_1|\,|Q_2|} \cdot \bigl(\mathsf{T}_{2^k} (f_{\mathcal{L}}) |Q_1|^4 p^{-2} \log p\bigr)^{1/2^{k+2}} .
\end{equation*}
\notag
$$
Using our bounds for the sizes of the sets $Q_1$ and $Q_2$ and combining them with Lemma 3 and Theorem 4 we obtain
$$
\begin{equation*}
\sigma-\frac{K^2 K^2_* |A|\,|B|}{p} \lesssim K K_* \sqrt{|A|\,|B|} \cdot(p^{2-\delta'(2^k-2)} \log p)^{1/2^{k+2}} .
\end{equation*}
\notag
$$
Taking $k$ such that $\delta' (2^k-2) \geqslant 3.1$ yields
$$
\begin{equation*}
\sigma-\frac{K^2 K^2_* |A|\,|B|}{p} \ll K K_* \sqrt{|A|\,|B|} p^{-\delta'/100}.
\end{equation*}
\notag
$$
This completes the proof.

Remark 1. Once again, if one uses a better formula for $\delta'$ of the form $\delta/(C_1 \log (C_2r/\delta))^k$, where $C_1,C_2>0$ are constants (see the remark before Theorem 4), then the restriction on $\delta$ in Theorem 6 can be relaxed to $\delta \gg {\log \log \log \log p}/{\log \log p}$ and, moreover, a direct estimate for $\mathsf{T}_{2^k}$ gives $\delta \gg {1}/{\log \log p}$. On the other hand this would complicate the proof while improving the final bounds on $t_{\mathrm{mix}}$ only slightly, so we leave this to the interested reader.

Corollary 1. Let $g$ be a primitive root and $I,J \subseteq \mathbb{F}^*_p$ be two geometric progressions with the same ratio $g$ such that
$$
\begin{equation}
\exp\biggl(\frac{C \log p \cdot \log \log \log p}{\log \log p}\biggr) \ll |I|=|J| \leqslant \frac p2 ,
\end{equation}
\tag{3.11}
$$
where $C>0$ is an absolute constant. Then
$$
\begin{equation}
\bigl|\bigl\{(a,b)\in I \times J \colon a =f_* (b) \bigr\}\bigr| \leqslant (1-\kappa) |I|,
\end{equation}
\tag{3.12}
$$
where $\kappa>0$ is an absolute constant.

Proof. Let $I = a\cdot \{1,g,\dots, g^n\}$ and $J= b\cdot \{1,g,\dots, g^n\}$, where $n=|I|=|J|$. We use Theorem 6 for $A=I$, $B=J$, $Y_1 = \{a\}$, $Y_2 = \{b\}$, $X=\{1,g,\dots, g^n\}$, $K_*= 1$ and $Z = \{1,g,\dots, g^m\}$, where we define the quantity $\delta$ by $m:= p^\delta$, and let
$$
\begin{equation}
m\leqslant \frac{cn}M ,
\end{equation}
\tag{3.13}
$$
where $c=1/8$ and $M\geqslant 2/\delta$. Then $K=|XZ^M|/|X| \leqslant 1+2c$. By formula (3.9) we have
$$
\begin{equation*}
\bigl|\bigl\{(a,b)\in I \times J \colon a =f_* (b) \bigr\}\bigr| -\frac{(1+2c)^2 |I|\,|J|}{p} \ll |I| \cdot p^{-\delta^{C/\delta}} ,
\end{equation*}
\notag
$$
where $C>0$ is an absolute constant. One has ${(1+2c)^2 |I|\,|J|}/{p} \leqslant (25/32) |I|$ because $n\leqslant p/2$. Recalling that $M\sim 1/\delta$ and ${\log n}/{\log p} \gg {\log \log \log p}/{\log \log p}$, we satisfy condition (3.13) by choosing $\delta \sim \log \log \log p/ \log \log p$, and we derive estimate (3.12) in view of our assumption (3.11). This completes the proof.

Now we are ready to prove Theorem 1 stated in the introduction, which we now formulate in a slightly more general form (again, one can relax slightly condition (3.14) and the upper bound for $n$). In our arguments we use some parts of the proof from [8].

Theorem 7. Let $p$ be a prime number and $\gamma \in \mathbb{F}_p^*$ be an element of order at least
$$
\begin{equation}
\exp\biggl(\Omega\biggl(\frac{\log p \cdot \log \log \log p}{\log \log p}\biggr)\biggr) .
\end{equation}
\tag{3.14}
$$
Also, let $\varepsilon_{j}$ be the random variables distributed uniformly on the set $\{\gamma^{-1}, \gamma \}$. Consider the lazy Markov chain $0\neq X_0,X_1,\dots, X_n, \dots $ defined by
$$
\begin{equation*}
X_{j+1}=\begin{cases} f_* (X_{j}) \cdot \varepsilon_{j+1} & \textit{with probability }\dfrac12, \\ X_{j} & \textit{with probability }\dfrac12. \end{cases}
\end{equation*}
\notag
$$
Then for an arbitrary $c>0$ and any $n = c \exp(\log p \cdot \log \log \log p/ \log \log p)$ one has
$$
\begin{equation*}
\| P_n-U\| :=\max_{A \subseteq \mathbb {F}^*_p} \biggl| \mathrm{P} (X_n \in A)- \frac{|A|}{p-1} \biggr| \leqslant K e^{-Kc} ,
\end{equation*}
\notag
$$
where $K>0$ is an absolute constant. The same is true for the chain $X_{j+1}=f_* (X_{j}) \cdot \varepsilon_{j+1}$, where the $\varepsilon_j$ are random variables distributed uniformly on $\{1,\gamma^{-1},\gamma\}$.

Proof. Let $P$ be an ergodic Markov chain on a $k$-regular directed graph $G=G(V,E)$. Let $h(G)$ be the Cheeger constant
$$
\begin{equation}
h(G)=\min_{|S| \leqslant |V|/2} \frac{e(S,S^c)}{k|S|} ,
\end{equation}
\tag{3.15}
$$
where $e(S,S^c)$ is the number of edges between $S$ and the complement of $S$. We need a result from [5] (a more compact version is Theorem 4.1 in [8]).

Theorem 8. Let $P$ be an ergodic Markov chain on a graph $G=G(V,E)$. Consider the lazy chain $X_0,X_1,\dots, X_n, \dots$ with transition matrix $(I+P)/2$, which starts from some deterministic $X_0$. Then for any $c>0$ and $n = c h(G)^{-2} \log |V|$ one has
$$
\begin{equation*}
\max_{A\subseteq V} \biggl| \mathrm{P} (X_n \in A)-\frac{|A|}{|V|} \biggr| \leqslant e^{-Kc} ,
\end{equation*}
\notag
$$
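Before applying Theorem 8 it may help to see the quantity from (3.15) for the graph in question on a small example; the sketch below (purely illustrative, with our own choices of $p$ and of the test sets $S$) builds the $2$-regular directed graph $x\to f_*(x)\gamma^{\pm1}$ and evaluates the ratio $e(S,S^c)/(2|S|)$ both for random sets and for a long geometric progression with step $\gamma^2$, the kind of set analysed in the proof; each value is an upper bound for $h(G)$.

```python
import random

p = 103

def is_primitive_root(g, p):
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

gamma = next(x for x in range(2, p) if is_primitive_root(x, p))
gamma_inv = pow(gamma, p - 2, p)

def f_star(x):
    """f_*(x) = x/(x-1) on F_p^*, with the convention f_*(1) := 1."""
    return x if x == 1 else x * pow(x - 1, p - 2, p) % p

# out-neighbours of x in the 2-regular directed graph: y = f_*(x) * gamma^{+-1}
out = {x: (f_star(x) * gamma % p, f_star(x) * gamma_inv % p) for x in range(1, p)}

def edge_ratio(S):
    """e(S, S^c) / (k|S|) with k = 2, the quantity minimised in (3.15)."""
    S = set(S)
    boundary = sum(1 for x in S for y in out[x] if y not in S)
    return boundary / (2 * len(S))

V = list(range(1, p))
random_vals = [edge_ratio(random.sample(V, (p - 1) // 2)) for _ in range(200)]
geom = [gamma * gamma % p]                  # a geometric progression with step gamma^2
for _ in range((p - 1) // 2 - 1):
    geom.append(geom[-1] * pow(gamma, 2, p) % p)
print(round(min(random_vals), 3), round(edge_ratio(geom), 3))   # upper bounds for h(G)
```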
where $K>0$ is an absolute constant. In our case $G=G(V,E)$, where $V=\mathbb{F}_p^*$ and $x \to y$ if and only if $y=f_*(x) \gamma^{\pm 1}$. Thus, our task is to estimate the Cheeger constant of $G$. Take any set $S$, $|S|\leqslant p/2$, and write it as a disjoint union $S=\bigsqcup_{j\in J} G_j$, where the $G_j$ are geometric progressions with step $\gamma^2$. Here and below we use the fact that $\mathbb{F}_p^*$ is cyclic, isomorphic to $\mathbb{Z}/(p-1)\mathbb{Z}$ and generated by a fixed primitive root $g$. Consider $z$, $z\gamma$ and $z\gamma^2$, where $z\in S$ is the right-hand endpoint (if it exists) of some $G_j$. Then $z\gamma^2 \in S^c$ and $z$ and $z\gamma^2$ are connected with $f^{-1}_* (z\gamma)$. The point $f^{-1}_* (z\gamma)$ belongs to either $S$ or $S^c$, but in any case we have an edge between $S$ and $S^c$. Let $J=J_0 \sqcup J_1$, where for $j\in J_0$ the set $G_j$ has no right-hand endpoint and $J_1 = J\setminus J_0$. Clearly, $|J_0| \leqslant 2|S|/\mathrm{ord}(\gamma)$. By the above argument
$$
\begin{equation}
\frac{|J_1|}{|S|} \geqslant \frac{|J|}{|S|}-\frac{2}{\mathrm{ord}(\gamma)} .
\end{equation}
\tag{3.16}
$$
We want to obtain another lower bound for $h(G)$, which works better in the case when $J$ is small. Put $L=|S|/|J|$, and let $\omega \in (0,1)$ be a small parameter, which we choose in what follows. One has $\sum_{j\in J}|G_j|=|S|$, hence $\sum_{j \colon |G_j|\geqslant \omega L} |G_j| \geqslant (1-\omega) |S|$. Splitting each such $G_j$ into intervals of length exactly $L_\omega := \omega L/2$, we see that what is left has size at most $2\omega |S|$. Hence we have obtained some geometric progressions $G'_i$, $i\in I$, of length $L_\omega$ and step $\gamma^2$ such that $\sum_{i\in I} |G'_i| \geqslant (1-2\omega) |S|$. Put $S' = \bigsqcup_{i\in I} G'_i$, and let $\Omega = S\setminus S'$, $|\Omega| \leqslant 2\omega |S|$. In other words, we have $S' = XY$ and $|S'| = |X|\,|Y|\geqslant (1-2\omega)|S|$, where $X=[1,\gamma^2, \dots, \gamma^{2(L_\omega-1)}]$ and $Y$ is a certain set of multiplicative shifts. Clearly,
$$
\begin{equation}
\frac{e(S,S^c)}{|S|} \geqslant 1- \frac{e(S,S)}{|S|} \geqslant 1-8 \omega-\frac{e(S',S')}{|S|} .
\end{equation}
\tag{3.17}
$$
Put $Z= [1,\gamma^2, \dots, \gamma^{2(L'_\omega-1)}]$ and $m := L'_\omega = [p^\delta]$ (this defines the quantity $\delta$), and let
$$
\begin{equation}
m\leqslant \frac{cL_\omega}M ,
\end{equation}
\tag{3.18}
$$
where $c=1/8$ and $M\geqslant 2/\delta$. Then $|XZ^M|/|X| \leqslant 1+2c$. Also, by assumption the element $\gamma$ has order at least $\exp(\Omega(\log p \cdot \log \log \log p/\log \log p))$. Using Theorem 6 for $K=1+2c$ and $M\sim 1/\delta$ and taking $\delta \geqslant {C \log \log \log p}/{\log \log p}$ for a sufficiently large positive constant $C$, we obtain
$$
\begin{equation*}
\frac{e(S',S')}{|S|}-\frac{25 |S'|}{16 p} \ll p^{-\delta^{O(1/\delta)}} \leqslant \frac{1}{32} .
\end{equation*}
\notag
$$
Recalling that $|S'|\leqslant |S| \leqslant p/2$ we derive the inequality
$$
\begin{equation*}
\frac{e(S',S')}{|S|} \leqslant \frac{25 |S'|}{16 p}+\frac{1}{32} \leqslant \frac{25}{32}+\frac{1}{32}=\frac{13}{16} .
\end{equation*}
\notag
$$
Substituting the last formula into (3.17), taking sufficiently large $p$ and choosing $\omega = 2^{-8}$, say, we obtain $h(G) \geqslant 1/32$. We need to check condition (3.18). If it fails, then
$$
\begin{equation*}
\frac{|S|}{|J|}=L \ll L_\omega \ll \delta^{-1}|Z| \leqslant \delta^{-1} p^\delta \sim \exp \biggl(O\biggl(\frac{\log p \cdot \log \log \log p}{\log \log p}\biggr)\biggr) ,
\end{equation*}
\notag
$$
hence $|J| \gg |S| \exp (-O(\log p \cdot \log \log \log p/\log \log p))$. But then from (3.16) and our assumption $\mathrm{ord}(\gamma) = \exp(\Omega(\log p \cdot \log \log \log p/\log \log p))$ we see that in any case $h(G) \gg \exp (-O(\log p \cdot \log \log \log p/\log \log p))$. Combining the last bound for the Cheeger constant and Theorem 8, we derive the bound
$$
\begin{equation*}
n \leqslant \exp \biggl(O\biggl(\frac{\log p \cdot \log \log \log p}{\log \log p}\biggr)\biggr) .
\end{equation*}
\notag
$$
The last part of Theorem 7 follows by the same method, combined with the arguments from [4] and [8], § 4.3. We need to ensure that the bijection $f_* (f^{-1}_*(\,\cdot\,) \gamma)\colon \mathbb{F}_p^* \to\mathbb{F}_p^*$ has the same form as in (1.4) (with our usual convention that ${f_*(-\beta/\alpha) =1/\alpha}$, of course). This can be checked by a direct calculation or by using the fact that $f_*$ corresponds to the standard action of a lower-triangular matrix in $\mathrm{GL}_2(\mathbb{F}_p)$. This completes the proof of Theorem 7.

Remark 2. Consider the lazy Markov chain (1.2) for $f(x)=x^2$ and $p\equiv 3 \pmod 4$. Using the same argument as in the proof of Theorem 7, we arrive at the equation $y+a = f(x+b) = x^2 +2bx +b^2$, where $a$ and $b$ belong to a certain arithmetic progression $P$, and $x$ and $y$ are from a disjoint union of $J$ arithmetic progressions; see the details in [8]. Strictly speaking, now the stationary distribution is not uniform; it is given by the formula
$$
\begin{equation*}
\pi (\alpha)=(2p)^{-1} | \{\beta \in \mathbb {F}_p \colon \beta^2 \pm \gamma=\alpha\}| ;
\end{equation*}
\notag
$$
convergence and therefore the value of $t_{\mathrm{mix}}$ are defined relative to $\pi (\cdot)$; moreover, our graph is not regular, so we must use a modification of the definition (3.15) (see [8], § 2.1). Nevertheless, it is easy to show that
$$
\begin{equation}
t_{\mathrm{mix}}=O(p\log p).
\end{equation}
\tag{3.19}
$$
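Before the sketch of the proof, here is a tiny numerical check (illustrative only; the prime $p=103\equiv 3\pmod 4$ and the shift $\gamma=5$ are arbitrary choices) that the weights $\pi(\alpha)=(2p)^{-1}|\{\beta\colon \beta^2\pm\gamma=\alpha\}|$ above indeed form a probability distribution and are visibly non-uniform.

```python
p = 103                                   # p = 3 (mod 4)
gamma = 5                                 # an illustrative nonzero shift

def pi(alpha):
    """pi(alpha) = (2p)^{-1} * #{beta in F_p : beta^2 + gamma = alpha or beta^2 - gamma = alpha}."""
    count = sum(1 for beta in range(p) for s in (gamma, -gamma) if (beta * beta + s) % p == alpha)
    return count / (2 * p)

assert p % 4 == 3
print(round(sum(pi(alpha) for alpha in range(p)), 10))   # the weights pi(.) sum to 1
print(max(pi(alpha) for alpha in range(p)))              # the distribution is non-uniform
```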
Sketch of the proof of (3.19). As in the proof of Theorem 7, we take $S$ such that $|S|\leqslant p/2$ and define the graph $G=G(V,E)$, where $x\to y$ if and only if $y=x^2 \pm \gamma$. Next we define the sets and numbers $G_j$, $J$, $L$, $S' = P+Y$, $|P| \gg L$, and so on. As discussed above, we obtain the equation $y+a = x^2 +2bx +b^2$ for $a,b\in P$ and $x,y\in S'$, which can be interpreted as point/line incidences for the set of lines $\mathcal{L}$ of the form $Y=2bX + (b^2-a)$ and the set of points $\mathcal{P} = \{(x, y-x^2)\colon x,y\in S'\}$. One has $|\mathcal{L}|=|P|^2$ and $|\mathcal{P}|=|S'|^2$. Using the main result of [33] (also see [26]) we obtain
$$
\begin{equation}
\biggl| \mathcal{I} (\mathcal{P}, \mathcal{L})- \frac{|\mathcal{P}|\,|\mathcal{L}|}{p} \biggr| \leqslant \sqrt{|\mathcal{P}|\,|\mathcal{L}|p} .
\end{equation}
\tag{3.20}
$$
By formula (3.20) and the calculations as above (see details in [8], § 4.2) we obtain an expander if $|S|/J \sim |P| \gg \sqrt{p}$. Indeed, in this case
$$
\begin{equation*}
\frac{e(S',S')}{|S|} \leqslant \frac{|S'|^2}{p|S|}+|P|^{-2} |S|^{-1} \sqrt{|\mathcal{P}|\,|\mathcal{L}|p} \leqslant \frac{1}{2}+\sqrt{p |P|^{-2}} \leqslant 1-c
\end{equation*}
\notag
$$
for some $c>0$. Finally, if, by contrast, $J\gg |S|/\sqrt{p}$, then by an analogue of formula (3.16) we obtain $h(G) \gg 1/\sqrt{p}$. Hence, in view of Theorem 8, we see that the mixing time is $O(p\log p)$, as required.

The method of the proof of Theorem 7 (also see Remark 2) allows us to easily produce lazy Markov chains on $\mathbb{F}^*_p$ with mixing time $O(p\log p)$, for example,
$$
\begin{equation}
X_{j+1}=\begin{cases} \mathrm{ind} (X_{j}) \cdot \varepsilon_{j+1} & \text{with probability }\dfrac12, \\ X_{j} & \text{with probability }\dfrac12 \end{cases}
\end{equation}
\tag{3.21}
$$
($X_0 \neq 0$) or as in (1.2) for $f(x)= \exp(x)$, namely,
$$
\begin{equation}
X_{j+1}=\begin{cases} \exp (X_{j})+\varepsilon_{j+1} & \text{with probability }\dfrac12, \\ X_{j} & \text{with probability }\dfrac12 . \end{cases}
\end{equation}
\tag{3.22}
$$
Indeed, in the first chain we arrive at the equation $ya=\mathrm{ind}(x) + \mathrm{ind} (b)$ and in the second at $y+b=\exp(x) \cdot \exp(a)$. Both equations correspond to point/line incidences. We emphasize again that our function $\mathrm{ind}(x)$ is defined on $\mathbb{F}_p^*$ but not on $\mathbb{F}_p$ (similarly, for the additive Markov chain one can extend $\exp(x)$ by $\exp(0):=0$ to obtain a bijection); actually, the two values $\exp(0)$ and $\mathrm{ind}(0)$ contribute just negligible error terms to the incidences. In fact, one has a much better bound for the mixing time of the two Markov chains above.

Theorem 9. Let $p$ be a prime number, and let $\gamma \in \mathbb{F}^*_p$. Then the mixing time of the Markov chain (3.22) is $\exp(O(\log p \cdot \log \log \log p/\log \log p))$. If, in addition, the order of $\gamma$ is $\exp(\Omega(\log p \cdot \log \log \log p/\log \log p))$, then the mixing time of the Markov chain (3.21) is also $\exp(O(\log p\cdot \log \log \log p/\log \log p))$.

Proof. Our arguments follow the same scheme as in the proofs of Theorems 6 and 7. In both cases we need to estimate the energy $\mathsf{T}_{2^k}$ of the set of affine transformations $L$ of the form $x\to gx+s$, where the coefficients $g$ belong to $\Gamma = a \cdot \{1,\gamma,\dots,\gamma^n\}$ and $s\in P$ belongs to a geometric or an arithmetic progression of size $n=\sqrt{|L|}$, respectively. In view of Proposition 1 this allows us to estimate the number of solutions of the equation $y=gx+s$ for $g\in \Gamma$ and $s\in P$, that is, the corresponding number of incidences. Below we can assume that $P$ and $\Gamma$ are sufficiently small (but greater than $\exp(\Omega(\log p \cdot \log \log \log p/\log \log p))$, of course). A further application of Lemma 3 does not help here because $\mathsf{T}^{+}_{2^k} (P)$ is of maximal order. Nevertheless, we make an additional step by considering the set $L^{-1}L$ and noticing that any element of $L^{-1}L$ has the form $x\to (g_2/g_1) x + (s_2-s_1)/g_1$, where $g_1,g_2\in \Gamma$ and $s_1,s_2 \in P$. Now, in view of the arguments in Lemma 3, our task is to estimate $|\Gamma|^{2^{k+1}-1} \mathsf{T}^{+}_{2^k} (f_{(P-P)/\Gamma})$. Set $W=Q/\Gamma$, where $Q=P-P$, and let $\overline{Q} = Q+Q$. Let us obtain a crude lower bound for $|W|$, namely, $|W| \gg n^{3/2}$. Indeed, we can use the bound
$$
\begin{equation*}
\mathsf{E}^\times (P,\Gamma) \leqslant n^{-2} \bigl|\bigl\{(g_1,g_2,q_1,q_2,\overline{q}_1, \overline{q}_2) \in \Gamma^2\times Q^2 \times \overline{Q}^2 \colon (\overline{q}_1-q_1) g_1=(\overline{q}_2-q_2) g_2\bigr \}\bigr|,
\end{equation*}
\notag
$$
and so for small $P$ and $\Gamma$ we have $\mathsf{E}^\times (P,\Gamma) \ll n^{5/2}$ by Rudnev’s theorem [21]. In fact, the last bound implies that $|W| \gg n^{3/2}$ in view of the Cauchy-Schwarz inequality. We set $X = \{1,\gamma,\dots,\gamma^m\}^{-1}\subset\Gamma^{-1}$ and $m=p^\delta$. Suppose that we have
$$
\begin{equation}
\frac{4}{\delta}\, m \leqslant c n^{1/2}
\end{equation}
\tag{3.23}
$$
for a sufficiently small $c>0$. If so, then for any integer $M\leqslant 4/\delta$ we see that
$$
\begin{equation*}
|W X^M|=\biggl|\frac{Q}{\Gamma \cdot X^M}\biggr| \leqslant |W|+M n m=\frac{5|W|}4,
\end{equation*}
\notag
$$
say. Using Theorem 4 for $k\leqslant 4/\delta$ we obtain
$$
\begin{equation*}
\mathsf{T}^{+}_{2^k}(f_{(P-P)/\Gamma}) \lesssim \log p \cdot (|P|^2 |\Gamma|)^{2^{k+1}} p^{-\delta'(2^k-2)}.
\end{equation*}
\notag
$$
Again, we choose $k$ so that $\delta' (2^k-2) \gg 1$. Hence, arguing as in Lemma 3 we obtain
$$
\begin{equation*}
\mathsf{T}_{2^{k+1}} (f_L) \lesssim |\Gamma|^{2^{k+1}-1} 2^{2r} \log p \cdot (|P|^2 |\Gamma|)^{2^{k+1}} p^{-\delta'(2^k-2)} \ll |L|^{2^{k+2}} p^{-\delta'(2^k-2)}.
\end{equation*}
\notag
$$
After this we use the same argument as in the proof of Theorem 7. Finally, to have (3.23) we choose $\delta \sim \log \log \log p/ \log \log p$ and recall our assumption that $\gamma$ has order $\exp(\Omega(\log p \cdot \log \log \log p/\log \log p))$. This completes the proof.
§ 4. Combinatorial applications

Now we apply the technique developed above to Sidon sets, following the arguments in [28]. We need Lemmas 3 and 7 and Theorem 4 from that paper.

Lemma 4. Let $A\subseteq \mathbf{G}$ be a set. Then for any $k\geqslant 2$
$$
\begin{equation}
\mathsf{Sid}_{3k-3} (A) \gg \biggl( \frac{|A|^{2k}}{\mathsf{E}_k (A)} \biggr)^{1/(2k-1)} \quad\textit{and}\quad \mathsf{Sid}_{2k-2} (A) \gg \biggl( \frac{|A|^{2k}}{\widehat{\mathsf{E}}_k (A)} \biggr)^{1/(2k-1)} .
\end{equation}
\tag{4.1}
$$
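The bounds of Lemma 4 are easy to evaluate on concrete sets; the following sketch (illustrative only, with the brute-force energy routine from the sketch in § 2 repeated for self-containedness) computes the first lower bound in (4.1), up to the implied constant, for an interval and for the set of cubes.

```python
from collections import Counter

def E_k(A, k):
    """E_k(A) = sum_x r_{A-A}(x)^k (higher additive energy, see (2.4))."""
    return sum(c ** k for c in Counter(a - b for a in A for b in A).values())

def lemma4_lower_bound(A, k):
    """The right-hand side of the first bound in (4.1), up to the implied constant."""
    return (len(A) ** (2 * k) / E_k(A, k)) ** (1 / (2 * k - 1))

A = list(range(1, 101))                  # an interval: large energy, weak guarantee
B = [n * n * n for n in range(1, 101)]   # cubes: small energy, guarantee close to |B|
for S, name in ((A, "interval"), (B, "cubes")):
    for k in (2, 3):
        print(name, "k =", k, "bound ~", round(lemma4_lower_bound(S, k), 1), "|A| =", len(S))
```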
Lemma 5. Let $A\subseteq \mathbf{G}$ be a set, let $A=B+C$, and let $k\geqslant 1$ be an integer. Then
$$
\begin{equation*}
\mathsf{Sid}_k (A) \leqslant \min \Bigl\{|C| \sqrt{k|B|}+|B|, |B| \sqrt{k|C|}+ |C| \Bigr\} .
\end{equation*}
\notag
$$
Theorem 10. Let $A\subseteq \mathbf{G}$ be a set, and let $\delta, \varepsilon \in (0,1]$ be parameters such that $\varepsilon \leqslant \delta$.

1) Then there exists $k=k(\delta, \varepsilon) = \exp(O(\varepsilon^{-1} \log (1/\delta)))$ such that either ${\mathsf{E}_k (A) \leqslant |A|^{k+\delta}}$ or there exist $H\subseteq \mathbf{G}$, $|H| \gtrsim |A|^{\delta(1-\varepsilon)}$, $|H+H|\ll |A|^\varepsilon |H|$, and ${Z\subseteq \mathbf{G}}$ such that $|Z|\,|H|\ll |A|^{1+\varepsilon}$ and
$$
\begin{equation*}
|(H\dotplus Z) \cap A| \gg |A|^{1-\varepsilon} .
\end{equation*}
\notag
$$
2) In a similar way, either there is a set $A'\subseteq A$, $|A'| \gg |A|^{1-\varepsilon}$, and a set $P\subseteq \mathbf{G}$, $|P| \gtrsim |A|^\delta$, such that for all $x\in A'$ one has $r_{A-P}(x) \gg |P|\,|A|^{-\varepsilon}$, or $\mathsf{E}_k (A) \leqslant |A|^{k+\delta}$ for some $k \ll 1/\varepsilon$.

To deal with the real setting we need the famous Szemerédi-Trotter theorem (see [31]).

Theorem 11. Let $\mathcal{P}$ and $\mathcal{L}$ be finite sets of points and lines in $\mathbb{R}^2$. Then
$$
\begin{equation*}
\mathcal{I} (\mathcal{P}, \mathcal{L}) \ll (|\mathcal{P}|\,|\mathcal{L}|)^{2/3}+|\mathcal{P}|+|\mathcal{L}| .
\end{equation*}
\notag
$$
Now we are ready to prove Theorem 3.

Take any $\delta<1/2$, for example, $\delta=1/4$, and let $\varepsilon \leqslant \delta/4$ be a parameter, which we choose in what follows. In view of Lemma 4 we see that the inequality $\mathsf{E}^{\times}_k (A) \leqslant |A|^{k+\delta}$ implies that
$$
\begin{equation}
\mathsf{Sid}^{\times}_{3k-3} (A) \gg |A|^{1/2+ (1-2\delta)/(2(2k-1))}=|A|^{1/2+1/(4(2k-1))},
\end{equation}
\tag{4.2}
$$
as required. Here $k=k(\varepsilon)$. Otherwise there exist $H\subseteq \mathbb{F}$, $|H| \gtrsim |A|^{\delta(1-\varepsilon)} \geqslant |A|^{\delta/2}$, $|HH| \ll |A|^{\varepsilon} |H|$, and $Z\subseteq \mathbb{F}$ such that $|Z|\,|H| \ll |A|^{1+\varepsilon}$ and $|(H\cdot Z) \cap A| \gg |A|^{1-\varepsilon}$. Here the product of $H$ and $Z$ is direct. Put $A_* = (H\cdot Z) \cap A$, $|A_*| \gg |A|^{1-\varepsilon}$. We want to estimate $\mathsf{E}^{\times}_{l+1} (A_* +1)$ or $\widehat{\mathsf{E}}_{l+1}^{\times} (A_* +1)$ for large $l$. After that, once we have a good upper bound for $\mathsf{E}^{\times}_{l+1} (A_* +1)$ or $\widehat{\mathsf{E}}_{l+1}^{\times} (A_* +1)$, we use Lemma 4 again to find a large multiplicative Sidon subset of $A_*+1$. First of all, notice that in view of (2.1) we have
$$
\begin{equation*}
|H A^{-1}_*| \leqslant |H H^{-1}|\,|Z| \ll |A|^{2\varepsilon} |H|\,|Z| \ll |A|^{1+3\varepsilon} .
\end{equation*}
\notag
$$
In other words, the set $A^{-1}_*$ almost does not grow after multiplication by $H$. Let $Q = H A^{-1}_*$, $|Q| \ll |A|^{1+3\varepsilon}$ and also let $M=|A|^{\varepsilon}$. Secondly, fix any $\lambda \neq 0, 1$. Then the number of solutions of the equation $a_1 / a_2 = \lambda$, where $a_1,a_2 \in A_*+1$, does not exceed
$$
\begin{equation*}
\sigma_\lambda :=|H|^{-2} \biggl|\biggl\{h_1,h_2\in H,\, q_1,q_2\in Q \colon \frac{h_1/q_1+1}{h_2/q_2+1}=\lambda \biggr\}\biggr| .
\end{equation*}
\notag
$$
The last equation has the form (3.10), namely,
$$
\begin{equation*}
\frac{h_1}{q_1}-\frac{\lambda h_2}{q_2}=\lambda-1,
\end{equation*}
\notag
$$
and it can be interpreted as a question about the number of incidences between points and lines. For each $\lambda \neq 0,1$ the quantity $\sigma_\lambda$ can be estimated as follows:
$$
\begin{equation}
\sigma_\lambda \ll |H|^{-2} \cdot |Q|\,|H|^{2-\kappa} \ll |A|^{1+3\varepsilon} |H|^{-\kappa},
\end{equation}
\tag{4.3}
$$
similarly to the proof of Theorem 6 above (in the case $\mathbb{F}=\mathbb{R}$ the same is true in view of Theorem 11). Here $\kappa = \kappa(\delta)>0$. Indeed, by our assumption that $|A|<\sqrt{p}$, Theorem 5, Proposition 1 and Lemma 3 we have
$$
\begin{equation}
\sigma_\lambda-\frac{|Q|^2}{p} \lesssim |Q|\,|H|^{-1/2} (|Q| \mathsf{T}^{+}_{2^r} (H))^{1/2^{r+2}} \lesssim |Q| \sqrt{M} \bigl( M^3 |A|\,|H|^{-(r+1)/2} \bigr)^{1/2^{r+2}},
\end{equation}
\tag{4.4}
$$
provided that $|H| \gtrsim_r M^{2^{r+1}}$ and $|H|^{r+1} \ll p$. Here $r$ is a parameter and we choose $r\sim 1/\delta$ to satisfy the second condition. To have the first condition we just take $\varepsilon {2^{r+1}} \ll \delta$ (in other words, $\varepsilon \leqslant \exp(-\Omega(1/\delta))$) and obtain the required result because $|H| \gg |A|^{\delta/2}$. Further using the inequalities $|H| \gg |A|^{\delta/2}$ and $|A_*| \gg |A|^{1-\varepsilon}$ and formula (4.3), and choosing any $\varepsilon \leqslant \delta \kappa/100$, after some calculations we obtain $\sigma_\lambda \ll |A_*|^{1-\delta \kappa/4}$. Hence, taking sufficiently large $l \gg (\delta \kappa)^{-1}$ we derive that
$$
\begin{equation*}
\begin{aligned} \, \widehat{\mathsf{E}}_{l+1}^{\times} (A_*) &=\sum_{\lambda} r^{l+1}_{A_* A_*} (\lambda) \ll |A_*|^{l+1}+(|A_*|^{1-\delta \kappa/2})^l |A_*|^2 \\ &\ll |A_*|^{l+1}+|A|^{l+2-\delta\kappa l/2} \ll |A_*|^{l+1}. \end{aligned}
\end{equation*}
\notag
$$
Applying Lemma 4 and choosing $\varepsilon \ll l^{-1}$, we see that
$$
\begin{equation*}
\begin{aligned} \, \mathsf{Sid}^\times_{2l} (A) &\geqslant \mathsf{Sid}^\times_{2l} (A_*) \gg |A_*|^{(l+1)/(2l+1)} \gg|A|^{((1-\varepsilon)(l+1))/(2l+1)} \\ & =|A|^{1/2+(1-2\varepsilon(l+1))/(2(2l+1))} \gg |A|^{1/2+c} , \end{aligned}
\end{equation*}
\notag
$$
where $c = c(\delta) >0$ is an absolute constant. We have obtained the bound (1.9) in Theorem 3.

As concerns estimate (1.10), we use the same argument as above, but now our analogue of the quantity $\sigma_\lambda$ counts the solutions of the equation $\exp(q_1)\exp(h_1)-\exp(q_2)\exp(h_2) = \lambda$, where $q_1,q_2 \in Q=A_*+H$ and $h_1, h_2\in H$. The last equation can be treated as a question on point/line incidences between the set of lines $x\exp(h_1)-y\exp(h_2) = \lambda$ of size $|\mathcal{L}| = |H|^2$ and the corresponding set of points $\mathcal{P}$ of size $|Q|^2$. Then analogues of the bounds (4.3) and (4.4) hold and the proof is complete.

It remains to obtain estimate (1.11) in the theorem. For any sets $X_1$, $X_2$ and $X_3$ consider the set
$$
\begin{equation*}
R[X_1,X_2,X_3]=\biggl\{\frac{x_1-x_3}{x_2-x_3} \colon x_1\in X_1,\, x_2 \in X_2,\, x_3 \in X_3,\, x_2 \neq x_3 \biggr\} .
\end{equation*}
\notag
$$
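A brute-force sketch of the set $R[X_1,X_2,X_3]$ just defined (illustrative only; the interval $X=\{1,\dots,20\}$ is an arbitrary choice of ours): it reports $|R[X]|$, which is of order $|X|^2$, and checks numerically the identity $1-R[X_1,X_2,X_3]=R[X_1,X_3,X_2]$ used below.

```python
from fractions import Fraction

def R(X1, X2, X3):
    """R[X1,X2,X3] = {(x1 - x3)/(x2 - x3) : x1 in X1, x2 in X2, x3 in X3, x2 != x3}."""
    return {Fraction(x1 - x3, x2 - x3) for x1 in X1 for x2 in X2 for x3 in X3 if x2 != x3}

n = 20
X = range(1, n + 1)
A = R(X, X, X)
print(len(A), n * n)              # |R[X]| is of order |X|^2
print({1 - t for t in A} == A)    # 1 - R[X] = R[X], a case of 1 - R[X1,X2,X3] = R[X1,X3,X2]
```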
If $X_1=X_2=X_3=X$, then we write $R[X]$ for $R[X_1,X_2,X_3]$. One can check that $1-R[X_1,X_2,X_3] = R[X_1,X_3,X_2]$. For $\mathbb{F} = \mathbb{R}$ or $\mathbb{F}=\mathbb{F}_p$ we put $X=P$ and $A=R[X]$, where $P = \{1,\dots, n \}$, $\overline{P} = \{-n, \dots, n \}$ and $n<\sqrt{p}$ in the case of $\mathbb{F}_p$. Then $A$ is contained in $\overline{P} / \overline{P} := B \cdot C$, and in view of Lemma 5 any multiplicative $k$-Sidon subset of $A$ has size at most $O (\sqrt{k}\, |A|^{3/4})$ because one can check that $|A| \gg |P|^2$. Now, $1-A=A$, and so the same argument applies to the set ${1-A}$. It remains to notice that $\mathsf{Sid}^\times (X) = \mathsf{Sid}^\times (-X)$ for any set $X$.

Finally, we observe that there is an alternative (but maybe slightly harder) way to obtain estimate (1.11). Indeed, consider $R[\Gamma]$, where $\Gamma\subseteq \mathbb{F}_p^*$, $|\Gamma|< \sqrt{p}$, is a multiplicative subgroup (we consider the case $\mathbb{F}=\mathbb{F}_p$). One can notice that $R[\Gamma] = (\Gamma-1)/(\Gamma-1)$ and repeat the above argument.

This paper is dedicated to Academician A. N. Parshin.
Bibliography
1. C. Asci, “Generating uniform random vectors”, J. Theoret. Probab., 14:2 (2001), 333–356
2. J. Bourgain, “Multilinear exponential sums in prime fields under optimal entropy condition on the sources”, Geom. Funct. Anal., 18:5 (2009), 1477–1502
3. J. Bourgain and A. Gamburd, “Uniform expansion bounds for Cayley graphs of $\mathrm{SL}_2(\mathbb {F}_p)$”, Ann. of Math. (2), 167:2 (2008), 625–642
4. S. Chatterjee and P. Diaconis, “Speeding up Markov chains with deterministic jumps”, Probab. Theory Related Fields, 178:3–4 (2020), 1193–1214
5. Fan Chung, “Laplacians and the Cheeger inequality for directed graphs”, Ann. Comb., 9:1 (2005), 1–19
6. F. R. K. Chung, P. Diaconis and R. L. Graham, “Random walks arising in random number generation”, Ann. Probab., 15:3 (1987), 1148–1165
7. S. Eberhard and P. P. Varjú, “Mixing time of the Chung-Diaconis-Graham random process”, Probab. Theory Related Fields, 179:1–2 (2021), 317–344
8. J. He, “Markov chains on finite fields with deterministic jumps”, Electron. J. Probab., 27 (2022), 28, 17 pp.
9. J. He, Huy Tuan Pham and Max Wenqiang Xu, “Mixing time of fractional random walk on finite fields”, Electron. J. Probab., 27 (2022), 133, 15 pp.
10. M. Hildebrand, “A lower bound for the Chung-Diaconis-Graham random process”, Proc. Amer. Math. Soc., 137:4 (2009), 1479–1487
11. M. Hildebrand, “Random processes of the form $X_{n+1}=a_n X_n+b_n \pmod p$ where $b_n$ takes on a single value”, Random discrete structures (Minneapolis, MN 1993), IMA Vol. Math. Appl., 76, Springer, New York, 1996, 153–174
12. M. Hildebrand, “Random processes of the form $X_{n+1}=a_n X_n+b_n \pmod p$”, Ann. Probab., 21:2 (1993), 710–720
13. C. Pohoata, Sidon sets and sum–product phenomena, https://pohoatza.wordpress.com/2021/01/23/sidon-sets-and-sum-product-phenomena/
14. J. Komlós, M. Sulyok and E. Szemerédi, “Linear problems in combinatorial number theory”, Acta Math. Acad. Sci. Hungar., 26:1–2 (1975), 113–121
15. I. A. Kruglov, “Random sequences of the form $X_{t+1}=a_t X_t+b_t$ modulo $n$ with dependent coefficients $a_t$, $b_t$”, Discrete Math. Appl., 15:2 (2005), 145–151
16. D. A. Levin and Y. Peres, Markov chains and mixing times, 2nd ed., Amer. Math. Soc., Providence, RI, 2017, xvi+447 pp.
17. B. Murphy, “Upper and lower bounds for rich lines in grids”, Amer. J. Math., 143:2 (2021), 577–611
18. B. Murphy, G. Petridis, O. Roche-Newton, M. Rudnev and I. D. Shkredov, “New results on sum–product type growth over fields”, Mathematika, 65:3 (2019), 588–642
19. K. O'Bryant, “A complete annotated bibliography of work related to Sidon sequences”, Electron. J. Combin., 2004, Dynamic Surveys, DS11, 39 pp.
20. O. Roche-Newton and A. Warren, “Additive and multiplicative Sidon sets”, Acta Math. Hungar., 165:2 (2021), 326–336
21. M. Rudnev, “On the number of incidences between points and planes in three dimensions”, Combinatorica, 38:1 (2018), 219–254
22. M. Rudnev and I. D. Shkredov, “On the growth rate in $\mathrm{SL}_2(\mathbb {F}_p)$, the affine group and sum–product type implications”, Mathematika, 68:3 (2022), 738–783
23. T. Schoen and I. D. Shkredov, “Higher moments of convolutions”, J. Number Theory, 133:5 (2013), 1693–1737
24. A. S. Semchenkov, “Maximal subsets free of arithmetic progressions in arbitrary sets”, Math. Notes, 102:3 (2017), 396–402
25. I. D. Shkredov, “Some remarks on the asymmetric sum–product phenomenon”, Mosc. J. Comb. Number Theory, 8:1 (2019), 15–41
26. I. D. Shkredov, “On asymptotic formulae in some sum–product questions”, Trans. Moscow Math. Soc., 2018 (2018), 231–281
27. I. D. Shkredov, “Modular hyperbolas and bilinear forms of Kloosterman sums”, J. Number Theory, 220 (2021), 182–211
28. I. D. Shkredov, “On an application of higher energies to Sidon sets”, Combinatorica, 43 (2023), 329–345
29. S. Sidon, “Ein Satz über trigonometrische Polynome und seine Anwendung in der Theorie der Fourier-Reihen”, Math. Ann., 106:1 (1932), 536–539
30. S. Stevens and F. de Zeeuw, “An improved point-line incidence bound over arbitrary fields”, Bull. Lond. Math. Soc., 49:5 (2017), 842–858
31. E. Szemerédi and W. T. Trotter, Jr., “Extremal problems in discrete geometry”, Combinatorica, 3:3–4 (1983), 381–392
32. T. Tao and Van H. Vu, Additive combinatorics, Cambridge Stud. Adv. Math., 105, Cambridge Univ. Press, Cambridge, 2006, xviii+512 pp.
33. L. A. Vinh, “The Szemerédi-Trotter type theorem and the sum–product estimate in finite fields”, European J. Combin., 32:8 (2011), 1177–1181
34. A. Warren, Additive and multiplicative Sidon sets, report at CANT–2021, http://www.theoryofnumbers.com/cant/CANT2021-abstracts.pdf
Citation:
I. D. Shkredov, “On the multiplicative Chung-Diaconis-Graham process”, Mat. Sb., 214:6 (2023), 136–154; Sb. Math., 214:6 (2023), 878–895