In a previous post, I went through Gordon’s escape through a mesh theorem. This theorem is unique because it leverages certain properties of Gaussian processes (and a quantification of size called Gaussian width) instead of passing to an epsilon net. The escape theorem is particularly important to areas like compressed sensing, and provides optimal guarantees in light of phase transitions. However, real-world implementations of compressed sensing tend not to apply Gaussian transforms, but rather random matrices with other distributions (such as Bernoulli or partial Fourier). As such, it is desirable to relax the hypothesis on the transform for the escape theorem to hold.

It is evident that Gaussian width is the correct measure of a set for phase transitions in the Gaussian setting, and we expect similar phase transitions in other settings. As detailed in this paper, an important measure of a set for the other settings is the size of an epsilon net. This leads to the following fundamental question:

What does the Gaussian width of a set say about the smallest possible size of an epsilon net for that set?

It is unclear to me how much is known about this, but I do know that something called *Sudakov minoration* provides a bound. In particular, Sudakov minoration guarantees the existence of a maximal $\epsilon$-packing of a certain size, and such a packing is necessarily an $\epsilon$-covering. (why?) For the record, this blog post is based in part on these lecture notes. The following lemma is the main ingredient to Sudakov minoration:
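To unpack the parenthetical: if some point of the set were farther than $\epsilon$ from every point of a maximal $\epsilon$-packing, then that point could be added to the packing, contradicting maximality. Here is a minimal numerical sketch of this fact (illustrative only, using hypothetical random points in the plane):

```python
import math
import random

random.seed(0)
eps = 0.2
points = [(random.random(), random.random()) for _ in range(500)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Greedily build a maximal eps-packing: keep a point only if it is at
# distance > eps from every point kept so far.
packing = []
for p in points:
    if all(dist(p, q) > eps for q in packing):
        packing.append(p)

# Packing property: pairwise distances exceed eps.
assert all(dist(p, q) > eps for i, p in enumerate(packing)
           for q in packing[i + 1:])

# Covering property: every point lies within eps of some packing point,
# precisely because the packing is maximal.
covering_radius = max(min(dist(p, q) for q in packing) for p in points)
print(covering_radius <= eps)  # True
```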

**Slepian’s Lemma.** Let $(X_i)_{i=1}^n$ and $(Y_i)_{i=1}^n$ be Gaussian random variables (not necessarily iid) with mean zero. If

$\mathbb{E}(X_i - X_j)^2 \leq \mathbb{E}(Y_i - Y_j)^2$

for every $i$ and $j$, then $\mathbb{E}\max_i X_i \leq \mathbb{E}\max_i Y_i$.
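Slepian’s comparison is easy to probe by simulation. Below is a minimal Monte Carlo sketch (not a proof) with a toy pair of processes of my choosing: $Y$ has iid entries, while $X$ shares a common factor, so the increments of $X$ are smaller and Slepian predicts $\mathbb{E}\max_i X_i \leq \mathbb{E}\max_i Y_i$.

```python
import math
import random

random.seed(1)
n, trials, rho = 5, 20000, 0.9

max_X_sum = 0.0
max_Y_sum = 0.0
for _ in range(trials):
    # Y: iid standard Gaussians, so E(Y_i - Y_j)^2 = 2 for i != j.
    Y = [random.gauss(0, 1) for _ in range(n)]
    # X: correlated Gaussians sharing a common factor Z, so
    # E(X_i - X_j)^2 = 2(1 - rho^2) <= 2 = E(Y_i - Y_j)^2.
    Z = random.gauss(0, 1)
    X = [rho * Z + math.sqrt(1 - rho**2) * random.gauss(0, 1)
         for _ in range(n)]
    max_X_sum += max(X)
    max_Y_sum += max(Y)

E_max_X = max_X_sum / trials
E_max_Y = max_Y_sum / trials
print(E_max_X < E_max_Y)  # Slepian predicts True
```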

**Sudakov Minoration Principle.** Let $P_\epsilon$ denote a maximal $\epsilon$-packing of a compact set $S \subseteq \mathbb{R}^n$. Then

$\log|P_\epsilon| \leq \dfrac{C\,w(S)^2}{\epsilon^2}$,

where $w(\cdot)$ denotes Gaussian width and $C$ is a universal constant.

*Proof:* For every distinct $x, y \in P_\epsilon$, we have

$\mathbb{E}\big(\langle g, x\rangle - \langle g, y\rangle\big)^2 = \|x - y\|^2 \geq \epsilon^2 = \mathbb{E}\Big(\tfrac{\epsilon}{\sqrt{2}}\,g_x - \tfrac{\epsilon}{\sqrt{2}}\,g_y\Big)^2$,

where the entries of $g$ are iid $N(0,1)$. Then Slepian’s Lemma gives

$w(S) \geq \mathbb{E}\max_{x \in P_\epsilon} \langle g, x\rangle \geq \frac{\epsilon}{\sqrt{2}}\,\mathbb{E}\max_{x \in P_\epsilon} g_x$,

where $\{g_x\}_{x \in P_\epsilon}$ are iid $N(0,1)$. It remains to show that the expected value of the maximum of $k$ iid standard Gaussians is like $\sqrt{\log k}$ (rearranging $w(S) \geq c\epsilon\sqrt{\log|P_\epsilon|}$ then gives the claim), which is verified in a lemma we save for the end of this post.

It should be pointed out that this lemma leaves a bit to be desired. For example, take a maximal $\epsilon$-packing $P_\epsilon$ in an $n$-dimensional unit sphere, and consider the (disjoint) union of open balls of radius $\epsilon/2$ centered at each point in the packing. The total volume of these balls is given by $|P_\epsilon|(\epsilon/2)^n V_n$, for some constant $V_n$ depending on $n$, whereas the ball of radius $1 + \epsilon/2$ which contains the entire union has volume $(1 + \epsilon/2)^n V_n$. As such, we can compare volumes and isolate $|P_\epsilon|$ to get the following bound:

$|P_\epsilon| \leq \Big(\dfrac{2 + \epsilon}{\epsilon}\Big)^n \leq \Big(\dfrac{3}{\epsilon}\Big)^n \quad (\epsilon \leq 1)$,

and so $\log|P_\epsilon| \leq n\log(3/\epsilon)$. For comparison, the Gaussian width of this sphere is like $\sqrt{n}$, and so the bound that Sudakov minoration provides is particularly weak when $\epsilon$ is small: $\log|P_\epsilon| \leq Cn/\epsilon^2 \gg n\log(3/\epsilon)$.
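A quick numerical sanity check of the two quantities in this comparison (a sketch assuming $w$ of the sphere equals $\mathbb{E}\|g\|_2$, which is like $\sqrt{n}$; the choices of $n$, $\epsilon$, and sample size are arbitrary):

```python
import math
import random

random.seed(2)
n, trials = 50, 2000

# Gaussian width of the unit sphere: E sup_{x in S} <g, x> = E ||g||_2,
# which should be close to sqrt(n).
width_sum = 0.0
for _ in range(trials):
    g = [random.gauss(0, 1) for _ in range(n)]
    width_sum += math.sqrt(sum(v * v for v in g))
est_width = width_sum / trials
print(abs(est_width / math.sqrt(n) - 1) < 0.1)  # True

# For small eps, the volume bound n*log(3/eps) beats the Sudakov-type
# bound C*n/eps^2 (taking C = 1 for illustration).
eps = 0.01
volume_bound = n * math.log(3 / eps)
sudakov_bound = n / eps**2
print(volume_bound < sudakov_bound)  # True
```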

Here’s the technical lemma we used in the proof of Sudakov minoration:

**Lemma.** If $g_1, \ldots, g_k$ are iid $N(0,1)$, then $\mathbb{E}\max_{1 \leq i \leq k} g_i \asymp \sqrt{\log k}$.

*Proof:* The upper bound is not so important in the proof of Sudakov minoration, but the argument is given here. The idea is to make a Chernoff-type argument in terms of the moment generating function of a Gaussian. For the lower bound, let $M := \max_{1 \leq i \leq k} g_i$ denote the random maximum of interest, take $t$ such that $\Pr(g_1 > t) = 1/k$, and condition on the event that every $g_i \leq t$ (this occurs with probability $(1 - 1/k)^k \leq e^{-1}$):

$\mathbb{E}M \geq t\Pr(M > t) + \mathbb{E}\big[M\mathbf{1}_{\{M \leq t\}}\big] \geq (1 - e^{-1})\,t + \mathbb{E}\big[M\mathbf{1}_{\{M \leq t\}}\big]$.

Next, since $M \geq g_1$, and so $\mathbb{E}[M\mathbf{1}_{\{M \leq t\}}] \geq -\mathbb{E}|g_1| = -\sqrt{2/\pi}$, we can continue the bound:

$\mathbb{E}M \geq (1 - e^{-1})\,t - \sqrt{2/\pi}$.

At this point, we seek a lower bound on $t$. This can be achieved by rearranging

$\dfrac{1}{k} = \Pr(g_1 > t) \geq e^{-t^2}$ (valid once $k$, and hence $t$, is sufficiently large),

i.e., $t \geq \sqrt{\log k}$. Substituting this into the previous bound then gives $\mathbb{E}M \geq c\sqrt{\log k}$, as desired.
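The lemma is also easy to probe by simulation; here is a small Monte Carlo sketch (the values of $k$ and the sample size are arbitrary choices) checking that the ratio of $\mathbb{E}\max_i g_i$ to $\sqrt{\log k}$ stays bounded as $k$ grows:

```python
import math
import random

random.seed(3)
trials = 2000

# Estimate E max(g_1,...,g_k) for several k and compare to sqrt(log k);
# the lemma says the ratio is bounded above and below by constants.
ratios = []
for k in (10, 100, 1000):
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0, 1) for _ in range(k))
    ratios.append((total / trials) / math.sqrt(math.log(k)))

print([round(r, 2) for r in ratios])
```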

Hello Dustin,

Thank you for this interesting post.

I guess there is a missing log() on the net size in the Sudakov minoration above.

Should be

Log |P_epsilon| <= C (w(S))^2/epsilon^2

Best,

Laurent

Thanks for catching that. That makes the estimate much more comparable to the volume comparison estimate in the sphere case.