In my previous blog entry, I kicked off an online project to make progress on the deterministic RIP matrix problem by optimizing the analysis in a breakthrough paper by Bourgain et al. Since then, I launched a wiki to maintain a “scoreboard” of world records for this problem, and Afonso Bandeira made his own contribution to the project. The purpose of this entry is to introduce some ideas from additive combinatorics and to develop some understanding of how these ideas are leveraged to prove RIP for a particular matrix construction. While this entry does not immediately produce a new world record, it does optimize the analysis of Corollaries 3 and 4 of the paper. I plan to generalize Lemmas 9 and 10 in a future blog post, which will lead to an explicit optimization problem that incorporates the work in this entry (and a new world record, hopefully).

**— A brief introduction to additive combinatorics —**

In this section, we briefly detail some key ideas from additive combinatorics. Given an additive group $G$ and finite sets $A, B \subseteq G$, we can define the **sumset**

$A + B := \{a + b : a \in A,\ b \in B\}$,

the **difference set**

$A - B := \{a - b : a \in A,\ b \in B\}$

(not to be confused with a difference set), and the **additive energy**

$E(A, B) := \#\{(a_1, a_2, b_1, b_2) \in A \times A \times B \times B : a_1 + b_1 = a_2 + b_2\}$, with the shorthand $E(A) := E(A, A)$.
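To make these definitions concrete, here is a minimal Python sketch (the function names are my own) that computes all three quantities for small sets of integers; note how an arithmetic progression produces a small sumset and large energy:

```python
from itertools import product

def sumset(A, B):
    """A + B = {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

def difference_set(A, B):
    """A - B = {a - b : a in A, b in B}."""
    return {a - b for a in A for b in B}

def additive_energy(A, B):
    """Count quadruples (a1, a2, b1, b2) in A x A x B x B with a1 + b1 = a2 + b2."""
    return sum(1 for a1, a2, b1, b2 in product(A, A, B, B) if a1 + b1 == a2 + b2)

A = range(10)  # the arithmetic progression {0, 1, ..., 9}
print(len(sumset(A, A)))          # 19, far smaller than the generic |A|^2 = 100
print(len(difference_set(A, A)))  # 19
print(additive_energy(A, A))      # 670, a constant fraction of the maximum |A|^3 = 1000
```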

These definitions are useful in quantifying the additive structure of a set. In particular, consider the following:

**Lemma.** A nonempty subset $A$ of some additive group $G$ satisfies the following inequalities:

(i) $|A + A| \geq |A|$

(ii) $|A - A| \geq |A|$

(iii) $E(A) \leq |A|^3$

with equality precisely when $A$ is a translate of some subgroup of $G$.

*Proof:* For (i), pick $b \in A$. Then $|A + A| \geq |A + b| = |A|$. Considering

$A + A = \bigcup_{a \in A} (a + A)$,

we have equality in (i) precisely when $a + A = b + A$ for every $a \in A$. Equivalently, given $a, b \in A$, addition by $a - b$ permutes the members of $A$. This is further equivalent to the following: Given $b \in A$, then for every $d \in A - b$, addition by $d$ permutes the members of $A - b$. It certainly suffices for $A - b$ to be a group, and it is a simple exercise to verify that this is also necessary.

The proof for (ii) is similar.

For (iii), we note that

$E(A) = \sum_{a, b, c \in A} 1_A(a + b - c) \leq |A|^3$,

with equality precisely when $A$ has the property that $a + b - c \in A$ for every $a, b, c \in A$. Again, it clearly suffices for $A - c$ to be a group for some (equivalently, every) $c \in A$, and necessity is a simple exercise.
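The equality conditions are easy to observe numerically. In the sketch below (self-contained, with my own helper names), a coset of the subgroup $\{0, 3, 6, 9\}$ in $\mathbb{Z}_{12}$ meets bounds (i) and (iii) with equality, while a generic set of the same size does not:

```python
from itertools import product

def energy(A, n):
    """Additive energy of A as a subset of Z_n."""
    return sum(1 for a, b, c, d in product(A, repeat=4) if (a + b - c - d) % n == 0)

def sums(A, n):
    return {(a + b) % n for a in A for b in A}

n = 12
coset = {(h + 1) % n for h in (0, 3, 6, 9)}  # a translate of a subgroup
generic = {0, 1, 3, 7}                       # same size, little structure

print(len(sums(coset, n)), energy(coset, n))      # 4 64: |A + A| = |A| and E(A) = |A|^3
print(len(sums(generic, n)), energy(generic, n))  # 9 30: larger sumset, smaller energy
```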

The notion of additive structure is somewhat intuitive. You should think of a translate of a subgroup as having maximal additive structure. When the bounds (i), (ii) and (iii) are close to being achieved by $A$ (e.g., when $A$ is an arithmetic progression), you should think of $A$ as having a lot of additive structure. Interestingly, while there are different measures of additive structure (e.g., $|A - A|$ and $E(A)$), they often exhibit certain relationships (perhaps not surprisingly). The following is an example of such a relationship which is used throughout the paper by Bourgain et al.:

**Lemma (Corollary 1).** If $E(A) \geq |A|^3 / K$, then there exists a set $A' \subseteq A$ such that $|A'| \gtrsim |A| / K$ and $|A' - A'| \lesssim K^C |A|$ for an explicit constant $C$.

In words, a set with a lot of additive energy necessarily has a large subset with a small difference set. This is proved using a version of the Balog–Szemerédi–Gowers lemma.

If translates of subgroups have maximal additive structure, then which sets have minimal additive structure? It turns out that random sets tend to (nearly) have this property, and one way to detect low additive structure is *Fourier bias*:

$\|A\|_u := \max_{\theta \neq 0} |\widehat{1_A}(\theta)|$,

where the Fourier transform here has a factor of $1/N$ out front (it's not unitary): $\widehat{1_A}(\theta) := \frac{1}{N} \sum_{x \in G} 1_A(x)\, e^{-2\pi i \theta x / N}$ for $G = \mathbb{Z}_N$. Indeed, $\|A\|_u$ captures how far $E(A)$ is from its minimal value $|A|^4 / N$:

**Lemma.** For any subset $A$ of a finite additive group $G$ of size $N$, we have

(i) $E(A) \geq |A|^4 / N$, and

(ii) $\left( \frac{E(A) - |A|^4/N}{N^2 |A|} \right)^{1/2} \leq \|A\|_u \leq \left( \frac{E(A) - |A|^4/N}{N^3} \right)^{1/4}$.

*Proof:* Define $f := 1_A * 1_A$, where $(f * g)(x) := \sum_{y \in G} f(y) g(x - y)$. Then (i) follows from Cauchy–Schwarz:

$|A|^4 = \Big( \sum_{x \in G} f(x) \Big)^2 \leq N \sum_{x \in G} f(x)^2 = N \cdot E(A).$

We will prove (ii) assuming $G = \mathbb{Z}_N$, but the proof generalizes. For the left-hand inequality, we consider

$E(A) = \sum_{x \in G} f(x)^2 = N \sum_{\theta} |\hat{f}(\theta)|^2 = N^3 \sum_{\theta} |\widehat{1_A}(\theta)|^4,$

where the last step is by expanding $\hat{f} = N (\widehat{1_A})^2$, and the middle step is Parseval's identity. Since $\widehat{1_A}(0) = |A|/N$ and $|\widehat{1_A}(\theta)| \leq \|A\|_u$ for every $\theta \neq 0$, we use this to bound $E(A)$:

$E(A) \leq \frac{|A|^4}{N} + N^3 \|A\|_u^2 \sum_{\theta \neq 0} |\widehat{1_A}(\theta)|^2 \leq \frac{|A|^4}{N} + N^2 |A| \, \|A\|_u^2,$

where the last step applies Parseval's identity $\sum_{\theta} |\widehat{1_A}(\theta)|^2 = |A|/N$. Rearranging gives the left-hand inequality.

For the right-hand inequality, we note that

$\|A\|_u^4 = \max_{\theta \neq 0} |\widehat{1_A}(\theta)|^4 \leq \sum_{\theta \neq 0} |\widehat{1_A}(\theta)|^4 = \sum_{\theta} |\widehat{1_A}(\theta)|^4 - \frac{|A|^4}{N^4}.$

From here, we apply the expansion $E(A) = N^3 \sum_{\theta} |\widehat{1_A}(\theta)|^4$ from above to continue:

$\|A\|_u^4 \leq \frac{E(A)}{N^3} - \frac{|A|^4}{N^4} = \frac{1}{N^3} \Big( E(A) - \frac{|A|^4}{N} \Big),$

which is a rearrangement of the right-hand inequality.
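These identities are straightforward to sanity-check by brute force. The sketch below assumes $G = \mathbb{Z}_N$ and the $1/N$-scaled (non-unitary) transform described above, and verifies both $E(A) = N^3 \sum_\theta |\widehat{1_A}(\theta)|^4$ and the two inequalities of (ii) for an arbitrary test set:

```python
import cmath
from itertools import product

N = 16
A = {0, 1, 3, 7, 12}  # an arbitrary subset of Z_16

def fhat(theta):
    """Fourier transform of 1_A with a 1/N factor out front (not unitary)."""
    return sum(cmath.exp(-2j * cmath.pi * theta * x / N) for x in A) / N

energy = sum(1 for a, b, c, d in product(A, repeat=4) if (a + b - c - d) % N == 0)
bias = max(abs(fhat(theta)) for theta in range(1, N))

# E(A) = N^3 * sum over theta of |1_A^(theta)|^4, hence E(A) >= |A|^4 / N ...
assert abs(energy - N**3 * sum(abs(fhat(t))**4 for t in range(N))) < 1e-6
assert energy >= len(A)**4 / N
# ... and the Fourier bias is sandwiched as in (ii).
excess = energy - len(A)**4 / N
assert (excess / (N**2 * len(A)))**0.5 - 1e-9 <= bias <= (excess / N**3)**0.25 + 1e-9
```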

**— The matrix construction —**

Now that we’ve covered the basics of additive combinatorics, let’s consider how these ideas are useful to the pursuit of deterministic RIP matrices. The main idea is that we desire a Gram matrix whose entries exhibit cancellations for weak flat RIP (see my previous post). By the last lemma, we can control cancellations of complex exponentials

$\sum_{x \in A} e^{-2\pi i \theta x / N}$

in terms of the additive energy of the index set $A$. This motivates us to pursue a Gram matrix whose entries are complex exponentials. To this end, consider the following vector:

$u_{a,b} := \frac{1}{\sqrt{p}} \Big( e_p(a t^2 + b t) \Big)_{t \in \mathbb{F}_p}, \qquad e_p(x) := e^{2\pi i x / p},$

where $p$ is prime and $\mathbb{F}_p$ denotes the field of size $p$. Such vectors are called *chirps*, and they are used in a variety of applications including radar. Here, we are mostly interested in the form of their inner products. If $a = a'$, then $\langle u_{a,b}, u_{a',b'} \rangle = \delta_{b,b'}$ by the geometric sum formula. Otherwise, the inner product is much more interesting:

$\langle u_{a,b}, u_{a',b'} \rangle = \frac{1}{p} \sum_{t \in \mathbb{F}_p} e_p\big( (a - a') t^2 + (b - b') t \big).$

Since $a - a' \neq 0$, we can complete the square in the exponent, and changing variables to $u := t + \frac{b - b'}{2(a - a')}$ gives

$\langle u_{a,b}, u_{a',b'} \rangle = \frac{1}{p} \, e_p\Big( {-\frac{(b - b')^2}{4(a - a')}} \Big) \sum_{u \in \mathbb{F}_p} e_p\big( (a - a') u^2 \big).$

Finally, this can be simplified using a quadratic Gauss sum formula:

$\langle u_{a,b}, u_{a',b'} \rangle = \frac{\varepsilon_p}{\sqrt{p}} \left( \frac{a - a'}{p} \right) e_p\Big( {-\frac{(b - b')^2}{4(a - a')}} \Big),$

where $\varepsilon_p$ is $1$ or $i$ (depending on whether $p$ is $1$ or $3 \bmod 4$) and $\left( \frac{a - a'}{p} \right)$ is a Legendre symbol, taking value $\pm 1$ depending on whether $a - a'$ is a perfect square $\bmod p$. Modulo these factors, the above inner product is a complex exponential, and since we want these in our Gram matrix, we will take $\Phi$ to have columns of the form $u_{a,b}$ — in fact, the columns will be $\{u_{a,b}\}_{(a,b) \in \mathcal{A} \times \mathcal{B}}$ for some well-designed sets $\mathcal{A}, \mathcal{B} \subseteq \mathbb{F}_p$.
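These inner product formulas can be verified numerically. The sketch below builds the chirps for $p = 7$ (so $p \equiv 3 \bmod 4$, making the Gauss sum factor $\pm i\sqrt{p}$) and checks both the orthonormality within a fixed quadratic coefficient and the constant magnitude $1/\sqrt{p}$ across different quadratic coefficients:

```python
import cmath

p = 7  # an odd prime with p = 3 mod 4

def ep(x):
    return cmath.exp(2j * cmath.pi * x / p)

def chirp(a, b):
    """The chirp vector with entries e_p(a*t^2 + b*t)/sqrt(p) for t in F_p."""
    return [ep(a * t * t + b * t) / p**0.5 for t in range(p)]

def inner(u, v):
    return sum(x * y.conjugate() for x, y in zip(u, v))

# Equal quadratic coefficients: the chirps form an orthonormal basis.
assert abs(inner(chirp(1, 2), chirp(1, 2)) - 1) < 1e-9
assert abs(inner(chirp(1, 2), chirp(1, 5))) < 1e-9

# Distinct quadratic coefficients: the magnitude is always 1/sqrt(p).
for b, bp in [(0, 0), (2, 5), (3, 1)]:
    assert abs(abs(inner(chirp(1, b), chirp(4, bp))) - 1 / p**0.5) < 1e-9

# The quadratic Gauss sum for p = 3 mod 4 evaluates to i*sqrt(p).
gauss = sum(ep(t * t) for t in range(p))
assert abs(gauss - 1j * p**0.5) < 1e-9
```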

For weak flat RIP, we want to bound the following quantity for $\Omega_1, \Omega_2 \subseteq \mathcal{A} \times \mathcal{B}$ with $\Omega_1 \cap \Omega_2 = \varnothing$:

$\bigg| \sum_{(a,b) \in \Omega_1} \sum_{(a',b') \in \Omega_2} \langle u_{a,b}, u_{a',b'} \rangle \bigg|.$

For $a \in \mathcal{A}$, define

- $\mathcal{B}_1(a) := \{ b \in \mathcal{B} : (a,b) \in \Omega_1 \}$ and
- $\mathcal{B}_2(a) := \{ b \in \mathcal{B} : (a,b) \in \Omega_2 \}$.

These provide an alternate expression for the quantity of interest:

$\bigg| \sum_{a \in \mathcal{A}} \sum_{a' \in \mathcal{A}} \sum_{b \in \mathcal{B}_1(a)} \sum_{b' \in \mathcal{B}_2(a')} \langle u_{a,b}, u_{a',b'} \rangle \bigg|,$

which admits the following bound by the triangle inequality:

$\sum_{a \in \mathcal{A}} \sum_{a' \in \mathcal{A}} \bigg| \sum_{b \in \mathcal{B}_1(a)} \sum_{b' \in \mathcal{B}_2(a')} \langle u_{a,b}, u_{a',b'} \rangle \bigg|.$

Pleasingly, it now suffices to bound a sum of complex exponentials, which we feel equipped to do using additive combinatorics. The following lemma does precisely this (it can be viewed as an analog of the previous lemma).

**Lemma (Lemma 9).** For every $\theta \in \mathbb{F}_p \setminus \{0\}$ and $B_1, B_2 \subseteq \mathbb{F}_p$, we have

$\bigg| \sum_{b_1 \in B_1} \sum_{b_2 \in B_2} e_p\big( \theta (b_1 - b_2)^2 \big) \bigg| \leq p^{1/8} \, |B_1|^{1/2} |B_2|^{1/2} \, E(B_1)^{1/8} E(B_2)^{1/8}.$

*Proof:* Denote the sum in question by $S$. First, Cauchy–Schwarz gives

$|S|^2 \leq |B_1| \sum_{b_1 \in B_1} \bigg| \sum_{b_2 \in B_2} e_p\big( \theta (b_1 - b_2)^2 \big) \bigg|^2.$

Expanding and rearranging then gives an alternate expression for the right-hand side:

$|B_1| \sum_{b_2, b_2' \in B_2} \sum_{b_1 \in B_1} e_p\big( \theta (b_2' - b_2)(2 b_1 - b_2 - b_2') \big).$

Applying Cauchy–Schwarz again, we then have

$|S|^4 \leq |B_1|^2 |B_2|^2 \sum_{b_2, b_2' \in B_2} \bigg| \sum_{b_1 \in B_1} e_p\big( \theta (b_2' - b_2)(2 b_1 - b_2 - b_2') \big) \bigg|^2,$

and expanding this time gives

$|S|^4 \leq |B_1|^2 |B_2|^2 \sum_{b_1, b_1' \in B_1} \sum_{b_2, b_2' \in B_2} e_p\big( 2 \theta (b_1 - b_1')(b_2' - b_2) \big).$

At this point, it is convenient to change variables, namely, $u := b_1 - b_1'$ and $v := b_2' - b_2$:

$|S|^4 \leq |B_1|^2 |B_2|^2 \sum_{u} \sum_{v} f_1(u) f_2(v) \, e_p(2 \theta u v), \qquad (1)$

where $f_1(u) := \#\{ (b_1, b_1') \in B_1 \times B_1 : b_1 - b_1' = u \}$ and similarly for $f_2$ in terms of $B_2$. We now apply Cauchy–Schwarz again to bound the sum in (1):

$\bigg| \sum_{u} \sum_{v} f_1(u) f_2(v) \, e_p(2 \theta u v) \bigg|^2 \leq \Big( \sum_{u} f_1(u)^2 \Big) \sum_{u \in \mathbb{F}_p} \bigg| \sum_{v} f_2(v) \, e_p(2 \theta u v) \bigg|^2,$

and changing variables $u \mapsto 2\theta u$ (this change is invertible since $\theta \neq 0$), we see that the right-hand side is a sum of squares of the Fourier coefficients of $f_2$. As such, Parseval's identity gives the following simplification:

$\bigg| \sum_{u} \sum_{v} f_1(u) f_2(v) \, e_p(2 \theta u v) \bigg|^2 \leq E(B_1) \cdot p \sum_{v} f_2(v)^2 = p \, E(B_1) E(B_2).$

Applying this bound to (1) gives the result.
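Since the proof only uses Cauchy–Schwarz and Parseval, the resulting inequality can be tested by brute force. The sketch below is based on my reading of the statement (quadratic phases $e_p(\theta(b_1 - b_2)^2)$ bounded by $p^{1/8} |B_1|^{1/2} |B_2|^{1/2} E(B_1)^{1/8} E(B_2)^{1/8}$) and checks it for every $\theta \neq 0$ with a pair of test sets:

```python
import cmath
from itertools import product

p = 31  # an odd prime

def ep(x):
    return cmath.exp(2j * cmath.pi * x / p)

def energy(B):
    """Additive energy of B as a subset of F_p."""
    return sum(1 for a, b, c, d in product(B, repeat=4) if (a + b - c - d) % p == 0)

B1, B2 = {1, 2, 4, 5, 8, 16}, {0, 3, 7, 11, 20}
bound = (p**0.5 * (len(B1) * len(B2))**4 * energy(B1) * energy(B2)) ** (1 / 8)

for theta in range(1, p):
    S = abs(sum(ep(theta * (b1 - b2)**2) for b1 in B1 for b2 in B2))
    assert S <= bound + 1e-9
```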

**— How to construct $\mathcal{B}$ —**

The previous lemma enables us to prove weak-flat-RIP-type cancellations in cases where both of the $b$-index sets lack additive structure. Indeed, the method of this paper is to do precisely this, and the remaining cases (where either index set has more additive structure) will find cancellations by accounting for the dilation weights. This seems to contradict the paragraph after (2.9) in the journal version of the paper (this paragraph appears to suffer from several typos) — here is what I suspect should be the corrected version:

… To prove the cancellation in (2.8), we basically split it into two cases: (i) when the set of $b$'s has less additive structure (i.e., its additive energy is small), the cancellation comes from the sum over $b$, $b'$ (with $a$, $a'$ fixed), and (ii) when it has more additive structure, one gets dispersion of the phases from the dilation weights (taking a large moment and using (2.4)). Incidentally, oscillations of the factor $\left(\frac{a - a'}{p}\right)$ play no role in the argument.

Overall, we will be very close to proving that $\Phi$ is RIP if most subsets of $\mathcal{B}$ lack additive structure. To this end, the authors actually prove something much stronger: They design $\mathcal{B}$ in such a way that all sufficiently large subsets have low additive structure. The following theorem is the first step in the design:

**Theorem (Theorem 5).** Fix , , and define the cube . Let denote the solution to the equation

.

Then for any subsets , we have

.

As a consequence of this theorem (taking the two subsets equal), every sufficiently large subset of the cube has a large sumset, indicating low additive structure. However, this gives subsets of the group $\mathbb{Z}^n$, whereas we need to construct $\mathcal{B}$ as a subset of $\mathbb{F}_p$. The trick here is to pick $\mathcal{B}$ so that it inherits the additive structure of a subset of the cube, and we use a *Freiman isomorphism* to accomplish this. In particular, we want a mapping $\psi$ such that $a + b = c + d$ if and only if $\psi(a) + \psi(b) = \psi(c) + \psi(d)$, and we will take $\mathcal{B} := \psi(A)$ — this is what it means for $A$ and $\psi(A)$ to be Freiman isomorphic, and it's easy to see that Freiman isomorphic sets have the same sized sumsets, difference sets and additive energy. In this case, it suffices to take

$\psi(a) := \sum_{j=1}^{n} a_j (2M+1)^{j-1}.$

Indeed, the $(2M+1)$-ary expansion of $\psi(a)$ reveals the digits $a_1, \ldots, a_n$, so $\psi$ is invertible on the cube. Also, adding $\psi(a)$ and $\psi(b)$ incurs no carries, so the expansion of $\psi(a) + \psi(b)$ reveals $a + b$ (even when the sum is reduced modulo $p$, provided $p$ is large enough).
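The no-carries trick is easy to check in code. Below I take the cube to be $\{1, \ldots, M\}^n$ and embed with base $2M+1$ (my reading of the construction — digitwise sums stay below the base, so addition never carries), then confirm the Freiman isomorphism property directly:

```python
from itertools import product

M, n = 3, 2
base = 2 * M + 1  # digit sums lie in {2, ..., 2M}, so additions never carry

def psi(a):
    """Embed a point of the cube {1,...,M}^n into the integers, base (2M+1)."""
    return sum(aj * base**j for j, aj in enumerate(a))

def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

A = [(1, 1), (2, 1), (3, 2), (1, 3)]  # a subset of the cube

# a + b = c + d exactly when psi(a) + psi(b) = psi(c) + psi(d), so sumsets,
# difference sets, and additive energy are all preserved by psi.
for a, b, c, d in product(A, repeat=4):
    assert (add(a, b) == add(c, d)) == (psi(a) + psi(b) == psi(c) + psi(d))
assert len({psi(a) + psi(b) for a in A for b in A}) == len({add(a, b) for a in A for b in A})
```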

We already know that large subsets of the cube (and hence of $\mathcal{B}$) exhibit low additive structure, but the above theorem only gives this in terms of the sumset, whereas Lemma 9 requires low additive structure in terms of additive energy. As such, we will first convert the above theorem into a statement about difference sets, and then apply Corollary 1 to further convert it in terms of additive energy:

**Corollary (essentially Corollary 3).** Fix , and according to Theorem 5, take as defined above, and pick and such that . Then every subset such that satisfies .

*Proof:* First note that $A - A$ is a translate of some sumset $A + \tilde{A}$ with $\tilde{A}$ also contained in the cube. Explicitly, if $c := (M+1, \ldots, M+1)$, then we can take $\tilde{A} := c - A$. As such, Theorem 5 gives the desired bound on $|A - A| = |A + \tilde{A}|$.

**Corollary (essentially Corollary 4).** Fix , and according to Theorem 5, take as defined above, and pick and such that . Then for every , there exists such that for every , every subset with satisfies .

*Proof:* Suppose to the contrary that there exists such that there are arbitrarily large for which there is a subset with and . Writing , then . By Corollary 1, there exists such that, for sufficiently large ,

,

and

.

However, this contradicts the previous corollary with and .

Notice that we could weaken our requirements if we had a version of Corollary 1 with a smaller exponent on $K$. This exponent comes from a version of the Balog–Szemerédi–Gowers lemma (Lemma 6), which follows from the proof of Lemma 2.2 of this paper. (Specifically, a couple of cosmetic substitutions are needed, but these don't affect the proof.) The authors indicate that it would be desirable to prove a better version of this lemma, but I'm not sure how easy that will be. If anything, one should probably try to prove Corollary 1 directly (instead of going through Lemma 6).
