# Derandomizing restricted isometries via the Legendre symbol

I recently finished writing the paper based on a talk I gave at MATHEON at TU Berlin last year. (The talk was very fun to give — halfway through, it featured a game show in which Felix Krahmer won a bottle of Tabasco.) This work was a collaboration with Afonso Bandeira, Matt Fickus and Joel Moreira, and in the original version (presented in my talk), we mixed two ingredients to derandomize Bernoulli RIP matrices:

(i) a strong version of the Chowla conjecture, and

(ii) the flat RIP trick from this paper.

The Chowla conjecture essentially states that as $n$ gets large, if you randomly draw $X$ from $\{1,\ldots,n\}$, then consecutive entries of the Liouville function $\lambda(X+1),\ldots,\lambda(X+r)$ are asymptotically independent. The strong version of the Chowla conjecture that we used in (i) not only gave asymptotic independence, but also provided a rate of convergence. The intuition is that consecutive Liouville function entries behave very similarly to iid Bernoulli $\pm1$'s, and so we should expect that populating a matrix with these entries will yield RIP. Indeed, we used flat RIP to demonstrate random-like cancellations that prove RIP. Unfortunately, the strong version of the Chowla conjecture that we used implies the Chowla conjecture, which has been open for almost 50 years.
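For concreteness, here is a minimal sketch (illustration only, not from the paper) of the Liouville function, which is completely multiplicative with $\lambda(p)=-1$ at every prime:

```python
def liouville(n):
    """Liouville function: lambda(n) = (-1)^Omega(n), where Omega(n)
    counts the prime factors of n with multiplicity."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:  # strip out each prime factor d
            n //= d
            count += 1
        d += 1
    if n > 1:  # leftover prime factor larger than sqrt(original n)
        count += 1
    return (-1) ** count

# A stretch of consecutive values lambda(1), ..., lambda(10):
signs = [liouville(n) for n in range(1, 11)]
# -> [1, -1, -1, 1, -1, 1, -1, -1, 1, 1]
```

Stretches like `signs` above are exactly what the Chowla conjecture predicts should look like fair coin flips once the starting point is randomized.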

A couple of months ago, Peter Sarnak pointed out a way around this difficulty. (No, he didn’t prove the Chowla conjecture.) It turns out that consecutive Legendre symbols exhibit asymptotic independence with the exact convergence rate that our proof requires. (!) This was originally proved by Harold Davenport, and the version of the result that we use is Theorem 1 in this paper. This changes our construction slightly (just replace Liouville with Legendre) and produces a main result which is no longer dependent on a conjecture:
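Davenport's phenomenon is easy to see numerically: tally the sign patterns of $r$ consecutive Legendre symbols modulo a prime, and the counts come out nearly uniform over all $2^r$ patterns. A quick sketch (illustration only; the prime 1009 and window length 3 are arbitrary small choices):

```python
from collections import Counter

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p via Euler's criterion:
    a^((p-1)/2) mod p, which is 0, 1, or p-1 (i.e. -1)."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def pattern_counts(p, r):
    """Tally the sign patterns (chi(x+1), ..., chi(x+r)) as x ranges
    over 0, ..., p-r-1, so no argument is divisible by p."""
    return Counter(tuple(legendre(x + i, p) for i in range(1, r + 1))
                   for x in range(p - r))

counts = pattern_counts(1009, 3)
# All 8 sign patterns occur, each close to (1009 - 3)/8 = 125.75 times.
```

The deviation of each count from uniform is controlled by the Weil bound on character sums, which is the quantitative convergence rate the proof needs.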

Theorem. Given $N$, $K$ and $\delta$, take $M=(C_1/\delta^2)K\log^2 K\log N,\qquad H=C_2K\log((K/\delta)\log K)\log N$

with $C_1$ and $C_2$ sufficiently large, and let $p$ denote any prime $\geq2^H+MN$. Draw $X$ uniformly from $\{0,\ldots,2^H-1\}$, and define the corresponding $M\times N$ matrix $\Phi$ entrywise by the Legendre symbol $\displaystyle{\Phi[m,n]:=\frac{1}{\sqrt{M}}\bigg(\frac{X+M(n-1)+m}{p}\bigg).}$

Then $\Phi$ satisfies the $(2K,\delta)$-restricted isometry property with high probability.
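The matrix in the theorem is easy to write down once the Legendre symbol is computed via Euler's criterion. A sketch with the same $1$-indexed $m,n$ convention (the parameters in the usage line are toy values for illustration, far below the theorem's requirements):

```python
import math

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def legendre_rip_matrix(M, N, p, X):
    """Phi[m][n] = (1/sqrt(M)) * ((X + M*(n-1) + m)/p), 1-indexed m, n.
    Since p > X + M*N, no argument is divisible by p, so every entry
    is +-1/sqrt(M) and every column has unit norm."""
    scale = 1 / math.sqrt(M)
    return [[scale * legendre(X + M * (n - 1) + m, p) for n in range(1, N + 1)]
            for m in range(1, M + 1)]

Phi = legendre_rip_matrix(4, 5, 101, 3)  # toy sizes, not theorem-scale
```

Note that the $MN$ symbols come from one run of consecutive integers read off column by column, so the only randomness is the $H$-bit starting point $X$.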

In the small-$K$ regime, specifically $K^2\ll N$, this is the most derandomized RIP matrix to date with $\pm1$ entries and $M=O_\delta(K\,\mathrm{polylog}\,N)$. In the large-$K$ regime ($K^2\gg N$), the record goes to a Johnson–Lindenstrauss alternative that leverages $t$-wise independent entries for some $t$ (this is a consequence of Theorem 2.2 in this paper). The most derandomized RIP matrix without the $\pm1$ entry constraint comes from an iterative application of this JL alternative (see this paper). For all three of these constructions, the number of random bits used is $O_\delta(K\log^2 N)$; in fact, two log factors appear in the entropy of each construction, the only difference being the type of log factors that appear.

Interestingly, JL projections necessarily require at least a certain number of random bits in order to nearly preserve the length of every vector with high probability. As such, unlike RIP matrices, JL projections can only be derandomized so much, and the derandomized JL projections mentioned above are within log factors of optimal. This leads to the following problem:

Problem (Breaking the Johnson–Lindenstrauss bottleneck). Find a construction of $M\times N$ matrices which satisfies the $(K,\delta)$-restricted isometry property with high probability whenever $M=O_{\delta}(K\,\mathrm{polylog}\,N)$ and uses only $H=o_{\delta;N\rightarrow\infty}(K\log(N/K))$ random bits.

Curiously, our Legendre symbol–based construction does not break the JL bottleneck.

Note that our main result didn't use anything particularly special about the Legendre symbol (beyond its pseudorandomness properties). In fact, as we show in the paper, our main result has a more general version which explicitly uses the convergence rate of asymptotically independent Bernoulli random variables. Such random variables are known in the literature under the buzzword small-bias sample space, and several constructions of small-bias sample spaces exist to date. However, as we show in the paper, no small-bias sample space can be used to break the JL bottleneck with our techniques (!), and of the constructions that currently exist, only the Legendre construction naturally suggests a conjecture of a fully derandomized RIP matrix:

Conjecture. There exists a universal constant $C$ such that for every $\delta>0$, there exists $N_0>0$ and $P_0(N)=O(2^{\mathrm{poly}(N)})$ such that for every $(K,M,N)$ satisfying $M\geq(C/\delta^2)K\log(N/K),\qquad N\geq N_0,$

and for every prime $p\geq P_0$, the $M\times N$ matrix $\Phi$ defined entrywise by the Legendre symbol $\displaystyle{\Phi[m,n]:=\frac{1}{\sqrt{M}}\bigg(\frac{M(n-1)+m}{p}\bigg)}$

satisfies the $(2K,\delta)$-restricted isometry property.

One might apply flat RIP to approach this conjecture, and interestingly, the conjecture statement essentially becomes a bound on incomplete sums of Legendre symbols, not too different from those considered in this paper. Obviously, we had difficulty proving these cancellations, and so we resorted to injecting randomness into our construction.
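To get a feel for the cancellation at stake, one can brute-force the worst incomplete Legendre-symbol sum modulo a small prime and compare it against the classical Pólya–Vinogradov bound of $\sqrt{p}\log p$ (a sketch for illustration only; the prime 101 is an arbitrary small choice):

```python
import math

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def max_incomplete_sum(p):
    """max over intervals [a+1, b] inside [1, p-1] of |sum of (x/p)|,
    computed with prefix sums."""
    prefixes = [0]
    for x in range(1, p):
        prefixes.append(prefixes[-1] + legendre(x, p))
    return max(abs(prefixes[b] - prefixes[a])
               for a in range(p) for b in range(a + 1, p))

worst = max_incomplete_sum(101)
# Pólya–Vinogradov guarantees worst <= sqrt(101) * log(101) ≈ 46.4,
# i.e. square-root cancellation up to a log factor.
```

The conjecture would require this square-root-type cancellation to persist for the specific short sums that flat RIP produces, which is where the unconditional argument currently gets stuck.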