# Variations on a theme: Three open problems in short, fat matrices

Lately, my life has been consumed with the arduous process of writing grant proposals. Since I’m new to this process, I might be doing it wrong, but I took the opportunity to take a step back and look at my field of research from a different perspective. “The funder’s perspective,” I hear you thinking. Well, yes, but I’ve used the change in perspective to draw important connections in my field that I hadn’t completely identified in the past. One big connection in particular revolves around a theme of sorts:

Given a short, fat matrix, what can be said about the singular values of any submatrix formed by collecting some of its columns?

Now, a lot can actually be said along this theme using probabilistic techniques, and I will discuss this later. I’m particularly interested in three open problems: Kadison-Singer, deterministic matrices with the restricted isometry property (RIP), and numerically erasure-robust frames. In my opinion, these problems are much more important than the implications they have in quantum mechanics, compressed sensing, and communication. Indeed, each of these problems requires more than what state-of-the-art probabilistic techniques have to offer, and as such, they outline a fundamental gap in our mathematical understanding.

The Kadison-Singer Problem (reformulated). There exist universal constants $B\geq 4$ (rational) and $r\geq1$ (integer) such that for every $M\times BM$ unit norm tight frame $\Phi$, you can partition the columns into $r$ subcollections $\bigsqcup_{i=1}^r\mathcal{K}_i=\{1,\ldots,BM\}$ such that $\|\Phi_{\mathcal{K}_i}\|_2^2\leq B-1$ for each $i=1,\ldots,r$.

For this and many other formulations of Kadison-Singer, check out this paper. Interestingly, some of the formulations seem much more likely to be true than others, and yet they are all equivalent. For the record, Kadison and Singer conjectured it to be false. (Yes, Kadison-Singer is false if Kadison and Singer are right.) Proving the conjecture false would require the construction of frames which produce a “large” matrix from any partition, which sounds rather pigeonholey. On the other hand, it may seem feasible to prove it true using probabilistic techniques. My gut instinct is to partition the columns randomly; I mean, if one partition works, surely most partitions should work. Indeed, my intuition is partially correct: you can expect a random subcollection of a unit norm tight frame to be well-conditioned, but the subcollection needs to be sufficiently large, i.e., of size $\Omega(M\log M)$. But log factors are not acceptable for Kadison-Singer, and so such probabilistic techniques are not good enough. To be fair, it might be the case that Kadison-Singer is false, and so no technique is good enough!
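To make the random-partition intuition concrete, here is a minimal numerical sketch (not a proof of anything): build an $M\times BM$ unit norm tight frame — a harmonic frame, i.e., $M$ rows of the $BM$-point DFT scaled so the columns have unit norm — randomly partition its columns into $r$ blocks, and look at each block's squared operator norm. The sizes $M=8$, $B=4$, $r=2$ and the random seed are illustrative choices of mine, not anything from the problem statement.

```python
import numpy as np

# Illustrative sizes (my choice, not dictated by the problem).
M, B, r = 8, 4, 2
N = B * M

# Harmonic frame: M rows of the N-point DFT, scaled by 1/sqrt(M) so that
# every column has unit norm.  The rows are orthogonal, so Phi Phi^* = B I,
# i.e., Phi is a unit norm tight frame with frame bound B.
n = np.arange(N)
Phi = np.exp(2j * np.pi * np.outer(np.arange(M), n) / N) / np.sqrt(M)
assert np.allclose(np.linalg.norm(Phi, axis=0), 1)         # unit norm columns
assert np.allclose(Phi @ Phi.conj().T, B * np.eye(M))      # tight, bound B

# Randomly partition the columns into r subcollections and inspect each
# block's squared spectral norm.  The full frame has norm^2 = B; the
# question is how much smaller each block's norm^2 can be made.
rng = np.random.default_rng(0)
labels = rng.integers(r, size=N)
for i in range(r):
    block = Phi[:, labels == i]
    sq_norm = np.linalg.norm(block, 2) ** 2    # largest squared singular value
    print(f"block {i}: {block.shape[1]} columns, ||Phi_K||^2 = {sq_norm:.3f}")
```

A random partition of a harmonic frame typically does reasonably well here, but as the text notes, probabilistic guarantees of this flavor come with log factors that Kadison-Singer cannot afford.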

Deterministic matrices with the restricted isometry property (RIP). Pick $\delta<\sqrt{2}-1$ and deterministically construct an infinite family of $M\times N$ matrices $\Phi$ (with $M,N\rightarrow\infty$) such that the following holds for some universal constant $C$:

$(1-\delta)\|x\|^2\leq\|\Phi x\|^2\leq (1+\delta)\|x\|^2$

for all $x$ with support of size $\leq CM/\log N$.
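As a quick illustration of why the probabilistic route is easy, here is a sketch that samples a Gaussian matrix (entries $N(0,1/M)$, so that $\mathbb{E}\|\Phi x\|^2=\|x\|^2$) and empirically estimates the restricted isometry constant over a handful of random supports. The sizes $M=64$, $N=256$, $k=8$ are illustrative stand-ins for $CM/\log N$; note that this only probes the supports we happen to draw — certifying the bound for all supports simultaneously is the hard part.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, k = 64, 256, 8     # illustrative sizes; k stands in for CM/log N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # E||Phi x||^2 = ||x||^2

worst_delta = 0.0
for _ in range(200):
    K = rng.choice(N, size=k, replace=False)
    s = np.linalg.svd(Phi[:, K], compute_uv=False)
    # On support K, (1-delta)||x||^2 <= ||Phi x||^2 <= (1+delta)||x||^2
    # holds exactly when s_min^2 >= 1-delta and s_max^2 <= 1+delta.
    worst_delta = max(worst_delta, s[0] ** 2 - 1, 1 - s[-1] ** 2)

print(f"empirical delta over 200 sampled supports: {worst_delta:.3f}")
```

Sampling supports can only ever give evidence, never the universal quantifier that RIP demands; that gap is exactly what the deterministic problem is about.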

I’ve thought quite a bit about this problem; I surveyed the literature and documented my progress in this paper. The main point is that it is very easy to construct these matrices using probabilistic techniques; for example, pick the matrix entries independently from a Gaussian distribution. Constructing them deterministically, on the other hand, has proven much harder. As incremental progress, when $\Phi$ is a unit norm tight frame with small worst-case coherence, we can say quite a bit. First, Gershgorin’s circle theorem quickly gives that the condition we want is satisfied whenever $x$ has support of size $\leq C\sqrt{M}$. Also, as Tropp established using probabilistic techniques, for almost every subset $\mathcal{K}\subseteq\{1,\ldots,N\}$ of size $\leq CM/\log N$, the condition is satisfied by all vectors $x$ of support $\mathcal{K}$. This is the best we could have hoped for, as the conditions we used (unit norm tight with low coherence) were too weak to guarantee RIP; to see this, consider the identity-Fourier concatenation of perfect-square order or Steiner equiangular tight frames. But regardless of what conditions we impose on the matrix, probabilistic techniques will only say something about almost all vectors, whereas RIP demands a guarantee for all vectors. This means we need better deterministic techniques to rival Gershgorin’s circle theorem along with solid deterministic constructions. Along these lines, Bourgain and others made some progress, devising a new technique called flat RIP and constructing matrices using additive combinatorics. Again, check out my paper on the subject for more information.
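The Gershgorin argument above is simple enough to sketch in a few lines. The demo below runs it on an identity-Fourier concatenation $[I\;F]$ (one of the examples mentioned in the text): its columns have unit norm and its worst-case coherence is $\mu=1/\sqrt{M}$, so every $k$-column Gram matrix has unit diagonal and off-diagonal entries of modulus at most $\mu$, placing its eigenvalues in $[1-(k-1)\mu,\,1+(k-1)\mu]$ and hence $\delta_k\leq(k-1)\mu$. The size $M=64$ is an illustrative choice.

```python
import numpy as np

M = 64   # illustrative size
F = np.exp(2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M) / np.sqrt(M)
Phi = np.hstack([np.eye(M), F])   # identity-Fourier concatenation, unit norm columns
N = Phi.shape[1]

# Worst-case coherence: largest off-diagonal entry of the Gram matrix.
G = Phi.conj().T @ Phi
mu = np.max(np.abs(G - np.eye(N)))           # equals 1/sqrt(M) here

# Gershgorin: delta_k <= (k-1)*mu, so RIP with delta < sqrt(2)-1 is
# certified for supports of size up to 1 + (sqrt(2)-1)/mu, i.e. O(sqrt(M)).
delta_target = np.sqrt(2) - 1
k_max = int(1 + delta_target / mu)
print(f"mu = {mu:.3f}; Gershgorin certifies supports of size <= {k_max}")

# Spot check on one random support: the certified bound dominates delta.
rng = np.random.default_rng(3)
K = rng.choice(N, size=k_max, replace=False)
s = np.linalg.svd(Phi[:, K], compute_uv=False)
delta_K = max(s[0] ** 2 - 1, 1 - s[-1] ** 2)
assert delta_K <= (k_max - 1) * mu + 1e-9
```

This makes the gap in the text visible: Gershgorin certifies supports of size $O(\sqrt{M})$, while random constructions reach $O(M/\log N)$ — for almost all supports only.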

Numerically erasure-robust frames (NERFs). Pick some constant $C$ and find a family of $M\times N$ matrices $\Phi$ (with $M,N\rightarrow\infty$) such that every submatrix made up of half of $\Phi$’s columns has condition number $\leq C$.

This is a new problem which came out of a recent paper of mine with Matt Fickus. Actually, the paper considers a more general setup in which an adversary removes a proportion $p$ of the columns, and you want to keep the condition number small. Using the best known results on extreme singular values of random matrices, we needed $p<0.15\ll0.5$ to get well-conditioning with Gaussian entries. On the other hand, we were able to deterministically analyze a certain equiangular tight frame to get well-conditioning whenever $p<0.5$. The analysis here is very delicate, as it doesn’t work at all after a slight change in the construction (to mutually unbiased bases). However, it does show that there is room for non-probabilistic techniques in solving this class of problems. I’d like to conclude by explaining why the $p=0.5$ threshold seems significant. Suppose $\Phi$ has all $\pm1$ entries. Then an adversary can remove half of the columns in such a way that the first two rows of what remains are either identical or opposite; in this case, the columns don’t even span, let alone have good conditioning. As such, matrices with $\pm1$ entries will not solve this problem. Furthermore, random matrix results always seem to work for $\pm1$-Bernoulli matrices without significant loss in effectiveness, and so probabilistic techniques may inherently fail to be good enough by virtue of this association.
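The $\pm1$ obstruction above is easy to demonstrate numerically. In this sketch (sizes $M=16$, $N=64$ are my illustrative choices), the adversary splits the columns by the sign of the product of their first two entries; one class contains at least half the columns, and keeping only that class makes the second row equal to $\pm$(first row) on the surviving columns, so the remaining columns fail to span $\mathbb{R}^M$ and the condition number blows up.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 16, 64                                # illustrative sizes
Phi = rng.choice([-1.0, 1.0], size=(M, N))   # any +-1 matrix works here

# Adversary: classify columns by whether their first two entries agree.
signs = Phi[0] * Phi[1]                      # +1: agree, -1: disagree
majority = 1.0 if (signs == 1.0).sum() >= N // 2 else -1.0
keep = np.flatnonzero(signs == majority)[: N // 2]   # keep N/2 columns
sub = Phi[:, keep]

# On the surviving columns, row 2 is identical or opposite to row 1,
# so the submatrix is rank-deficient: its columns cannot span R^M.
assert np.array_equal(sub[1], majority * sub[0])
print("rank of surviving submatrix:", np.linalg.matrix_rank(sub), "out of", M)
```

This is only the $p=0.5$ obstruction for $\pm1$ matrices, of course — it says nothing about whether other constructions can reach the threshold, which is exactly what the equiangular tight frame analysis in the paper addresses.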
