Tight Frames and Approximation 2018

I just returned from an amazing workshop in New Zealand organized by Shayne Waldron. The talks and activities were both phenomenal! Here’s a photo by Emily King that accurately conveys the juxtaposition:

[Photo by Emily King]

A few of the talks gave me a lot to think about, and I wanted to take a moment to record some of these ideas.

Gary Greaves – Some restrictions on the characteristic polynomial of a Seidel matrix

How many lines can you pack through the origin of \mathbb{R}^d so that the angle between every pair of lines is the same? This is known as the equiangular lines problem, and it dates back to the work of Lemmens and Seidel. Recently, Zauner’s conjecture rejuvenated interest in this problem, and the last couple of years have seen a multitude of strides in obtaining new bounds on the maximum number N(d) of equiangular lines in \mathbb{R}^d. The best known bounds are recorded in Brouwer and Haemers’ book, though this record needs to be updated to reflect recent advances (perhaps a Sloane-esque online table would be appropriate for easy updating). In his talk, Gary described some of the techniques behind these recent advances (see this paper). Here, the overall meta-trick is to leverage the spectrum of Seidel adjacency matrices, along with basic results from algebraic number theory, to conclude that Seidel matrices with certain properties can’t exist. This meta-trick strikes me as a generalization of how we find integrality conditions on the existence of real ETFs: When n\neq 2d, the Seidel adjacency matrix of an ETF must have integer eigenvalues, and so the coherence must be the reciprocal of an integer. It also reminds me of the computational disproofs of 19-by-76 and 20-by-96 real ETFs. Compared to ETFs, it seems that the equiangular lines problem allows one to ask many more questions in low dimensions, thereby giving this style of argument more legs. I’m interested to see what more will come from this line of inquiry.
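To make this integrality trick concrete, here is a toy sketch in Python (my own illustration, not Gary’s argument): for a putative real ETF of n unit vectors in \mathbb{R}^d with n\neq 2d, the Welch bound gives \mu=\sqrt{(n-d)/(d(n-1))}, and the two Seidel eigenvalues -1/\mu and (n/d-1)/\mu must both be integers, so parameter pairs failing this test are ruled out immediately.

import numpy as np

def passes_integrality(d, n):
    # Necessary condition for a real ETF of n unit vectors in R^d when n != 2d:
    # the Seidel eigenvalues -1/mu and (n/d - 1)/mu must both be integers,
    # where mu = sqrt((n - d) / (d * (n - 1))) is the Welch bound.
    mu = np.sqrt((n - d) / (d * (n - 1)))
    eigs = [-1 / mu, (n / d - 1) / mu]
    return all(abs(x - round(x)) < 1e-8 for x in eigs)

for d in range(3, 8):
    for n in range(d + 1, d * (d + 1) // 2 + 1):    # up to the Gerzon bound
        if n != 2 * d and passes_integrality(d, n):
            print(d, n)    # survivors include (3,4), (6,16), (7,28); the test is only necessary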

Question: How easily can we prove that the Fickus Conjecture holds for ETFs with Gram matrices whose entries all amount to prime roots of unity?

This would represent an interesting approach to the Fickus Conjecture that resembles the existing partial progress on the HRT Conjecture: prove that the conjecture holds for natural subclasses of its hypothesis. In the case of prime roots of unity, there is a known relationship to distance-regular antipodal covers of the complete graph (DRACKNs), so presumably this discrete object would in turn yield integrality conditions that corroborate the Fickus Conjecture.

Ferenc Szollosi – Enumeration of Seidel matrices

Seidel adjacency matrices are equivalent to equiangular lines in the real case. Indeed, given a set of equiangular lines, select a unit vector spanning each line; then the off-diagonal sign pattern of the Gram matrix is a Seidel adjacency matrix. (You can also go the other way.) Conjugating by a signed permutation matrix corresponds to permuting and negating the vectors, which does not change the set of lines. How can you find all Seidel adjacency matrices up to this so-called switching equivalence? In 1975, Mallows and Sloane proved that the number of equivalence classes equals the number of Euler graphs. To generate all possibilities, Ferenc applies the canonical augmentation method. As part of this, he discovered a new way to represent equivalence classes: instead of two-graphs, he passes to a “cherry graph” on 3n vertices, where each line corresponds to a triple of vertices (two red, one green; a “cherry”); then switching equivalence corresponds to color-preserving graph isomorphism. With this, Ferenc was able to construct all Seidel adjacency matrices of order at most 13 (see this paper). He also posed a few interesting questions. My favorite:
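To make the dictionary explicit, here is a small Python sketch of my own using the six equiangular lines in \mathbb{R}^3 spanned by the diagonals of the icosahedron: the Seidel matrix is the off-diagonal sign pattern of the Gram matrix, and negating a vector conjugates it by a diagonal sign matrix, i.e., a switching.

import numpy as np

# Six equiangular lines in R^3: the diagonals of the icosahedron (coherence 1/sqrt(5)).
phi = (1 + np.sqrt(5)) / 2
V = np.array([[0, 1, phi], [0, 1, -phi], [1, phi, 0],
              [1, -phi, 0], [phi, 0, 1], [phi, 0, -1]], dtype=float).T
V /= np.linalg.norm(V, axis=0)     # one unit vector per line

G = V.T @ V                        # Gram matrix: unit diagonal, off-diagonal entries +-1/sqrt(5)
S = np.sign(G)
np.fill_diagonal(S, 0)             # Seidel adjacency matrix: off-diagonal sign pattern, zero diagonal

# Negating a vector conjugates S by a diagonal sign matrix (a switching),
# which leaves the underlying set of lines untouched.
D = np.diag([1, -1, 1, 1, 1, 1])
W = V @ D
S_switched = np.sign(W.T @ W)
np.fill_diagonal(S_switched, 0)
assert np.allclose(S_switched, D @ S @ D)

print(np.round(np.linalg.eigvalsh(S), 3))    # eigenvalues +-sqrt(5); irrational is allowed here since n = 2d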

Question: How large is the largest system of biangular lines in \mathbb{R}^d?

Here, upper bounds are provided by Delsarte, Goethals and Seidel; you cannot have more than \binom{d+3}{4} biangular lines in \mathbb{R}^d. What sort of lower bounds are within reach?
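For concreteness, this bound reads \binom{6}{4}=15 in \mathbb{R}^3 and \binom{7}{4}=35 in \mathbb{R}^4.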

Emily King – Negative Cliques in Sets of Equiangular Lines

There has been a bunch of work to bound the maximum possible number of equiangular lines using semidefinite programming. In her talk, Emily found new bounds by exploiting substructures. In particular, take the largest subcollection of lines for which there exist representative vectors with all pairwise inner products negative. Then project the remaining vectors onto the orthogonal complement of the span of this subcollection to obtain a spherical 2-distance set in a lower-dimensional sphere. One may then apply SDP bounds for such sets to infer bounds on the original number of equiangular lines.
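As a toy illustration of the negative clique step (my own sketch, in a dimension too small for the projection step to be interesting), take the six icosahedral lines in \mathbb{R}^3 with coherence \mu=1/\sqrt{5}: a negative clique of size k has Gram matrix (1+\mu)I-\mu J, which is positive semidefinite only if k\leq 1+1/\mu, and a brute-force search finds a clique meeting this bound.

import numpy as np
from itertools import combinations, product

# The same six icosahedral lines as above, with coherence mu = 1/sqrt(5).
phi = (1 + np.sqrt(5)) / 2
V = np.array([[0, 1, phi], [0, 1, -phi], [1, phi, 0],
              [1, -phi, 0], [phi, 0, 1], [phi, 0, -1]], dtype=float).T
V /= np.linalg.norm(V, axis=0)

def largest_negative_clique(V):
    # Largest subset of lines admitting signed representatives whose pairwise
    # inner products are all negative (hence all equal to -coherence here).
    n = V.shape[1]
    for k in range(n, 1, -1):
        for idx in combinations(range(n), k):
            for signs in product([1, -1], repeat=k):
                W = V[:, idx] * np.array(signs)
                G = W.T @ W
                if np.all(G[np.triu_indices(k, 1)] < 0):
                    return list(idx)
    return []

clique = largest_negative_clique(V)
mu = 1 / np.sqrt(5)
print(len(clique), 1 + 1 / mu)    # 3 versus 3.236...: the clique meets floor(1 + 1/mu)

# In Emily's setting one would now project the remaining vectors onto the orthogonal
# complement of the clique's span; here that span is already all of R^3, so a larger
# ambient dimension is needed before the projection says anything.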

Question: How can one fuse SDP bounds of this sort to produce a single “best” SDP bound?

Emily also discussed our recent work on binders, which are emergent combinatorial structures that we observe in various ETFs. Before describing binders, recall that the spark of an ensemble of vectors is the size of the smallest linearly dependent subcollection. The spark bound (first proved by Donoho and Elad) provides a lower bound on spark in terms of coherence \mu:

\displaystyle{\mathrm{spark}\geq\frac{1}{\mu}+1.}

Strangely, we find that many ensembles that minimize coherence simultaneously achieve equality in the spark bound. In this case, the binder is the set of linearly dependent subcollections of size 1/\mu+1. When the ETF exhibits symmetry, the binder inherits this symmetry. In many cases, the binder is a balanced incomplete block design.
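As a sanity check on these definitions (a quick sketch of my own), the simplex ETF of d+1 vectors in \mathbb{R}^d has coherence 1/d, any d of its vectors are linearly independent, and all d+1 of them are dependent, so it meets the spark bound with equality; brute force confirms this for d=3.

import numpy as np
from itertools import combinations

def coherence(V):
    G = np.abs(V.T @ V)
    np.fill_diagonal(G, 0)
    return G.max()

def spark(V, tol=1e-10):
    # Size of the smallest linearly dependent subcollection (brute force).
    n = V.shape[1]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.matrix_rank(V[:, idx], tol=tol) < k:
                return k
    return np.inf

# Simplex ETF: 4 unit vectors in R^3 with pairwise inner products -1/3.
simplex = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float).T / np.sqrt(3)
mu = coherence(simplex)
print(spark(simplex), 1 / mu + 1)    # 4 and (approximately) 4.0: equality in the spark bound

Here the binder is just the single block consisting of all four vectors; the interesting binders show up in larger ETFs, where 1/\mu+1 is an integer much smaller than the number of vectors.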

Question: What applications do spark-bound equality ETFs have to fast erasure recovery?

Presumably, one can exploit these dependencies for fast erasure recovery, but at the expense of noise robustness. It would be interesting to see the extent of this tradeoff.

Kasso Okoudjou – Grassmannians as universal minimizers of frame potentials

The pth frame potential of a frame \Phi=[\varphi_1\cdots\varphi_n] is the entrywise p-norm of the Gram matrix \Phi^*\Phi. What are the minimizers of the pth frame potential? In the case where p=2, the minimizers are the unit norm tight frames. For p large, the minimizers are the Grassmannian frames. Intuitively, one expects these potentials to encourage a sparse Gram matrix for small p and a flat Gram matrix for large p. In previous work, Kasso and Martin Ehler proved that the case of 3 vectors in \mathbb{R}^2 exhibits a threshold at p=\log_2(3): beyond this threshold, the simplex ETF is the minimizer, while before it, the minimizers are orthonormal bases with one vector repeated. This work leveraged techniques from Cohn and Kumar’s paper. In pursuit of more results along these lines, Kasso recently had a couple of undergrads collect a bunch of data for additional cases; he distilled a few conjectures from this data and managed to prove some of them. Before mentioning my favorite of these conjectures, define a (d,k)-configuration to be d+1 unit vectors in \mathbb{R}^d such that k+1 of them form a simplex ETF in a k-dimensional subspace and the remaining d-k form an orthonormal basis for the orthogonal complement.
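Here is a quick numerical check of that threshold (a sketch of my own): for 3 unit vectors in \mathbb{R}^2, the pth power of the entrywise p-norm equals 5 for an orthonormal basis with a repeated vector and 3+6(1/2)^p for the Mercedes-Benz simplex ETF, and these cross exactly at p=\log_2(3).

import numpy as np

def frame_potential(V, p):
    # pth power of the entrywise p-norm of the Gram matrix V^T V
    # (taking the 1/p root does not change which configuration wins).
    return np.sum(np.abs(V.T @ V) ** p)

onb_repeat = np.array([[1, 0], [1, 0], [0, 1]], dtype=float).T    # e1, e1, e2
angles = 2 * np.pi * np.arange(3) / 3
simplex = np.vstack([np.cos(angles), np.sin(angles)])             # Mercedes-Benz ETF

for p in [1.2, np.log2(3), 2.0]:
    print(p, frame_potential(onb_repeat, p), frame_potential(simplex, p))
# The repeated basis wins below p = log2(3) ~ 1.585, the simplex wins above it,
# and both potentials equal 5 at the threshold.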

Question: Is every minimizer of the pth frame potential a (d,k)-configuration for some k=k(p)?

All evidence suggests that the answer is “yes,” and in particular, the thresholds on p that determine k=k(p) appear to be given by

\displaystyle{p_j = \frac{\log(j+2)-\log(j)}{\log(j+1)-\log(j)}, \qquad j\in\{1,\ldots,d-1\}.}

In hindsight, (d,k)-configurations seem to be rather natural candidates for interpolating between sparse and flat Gram matrices. I’m interested to see what techniques will ultimately crack this problem.

Markus Grassl – Symmetries of Weyl-Heisenberg SIC-POVMs

It seems that an infinite family of SICs might be within striking distance. In particular, the so-called Fibonacci-Lucas SICs enjoy many more symmetries than usual, and according to this paper, it appears that you can embed versions of d-dimensional SICs into certain d(d-2)-dimensional SICs. Indeed, these observations were used to produce an exact SIC in dimension 323 and a numerical SIC in dimension 844.

Question: Do iterated embeddings produce an infinite family of SICs?

I also wonder about the apparent computational bottleneck in computing numerical SICs of large dimension. In particular, it took a supercomputer to compute SICs of dimension 122 through 151 (see this paper). For this problem, it seems that the main obstruction is dealing with spurious local minimizers. Talking with Markus, I learned that it can take tens of thousands of random initializations to land in a SIC’s basin of attraction. The objective function is given by

\displaystyle{p(\varphi):=\sum_{j,k\in\mathbb{Z}/d\mathbb{Z}}\bigg|\sum_{l\in\mathbb{Z}/d\mathbb{Z}}\varphi(l)\overline{\varphi(j+l)\varphi(k+l)}\varphi(j+k+l)\bigg|^2}

for \varphi\colon\mathbb{Z}/d\mathbb{Z}\rightarrow\mathbb{C} with unit 2-norm. Here, p(\varphi)\geq 2/(d+1), with equality precisely when \varphi is a fiducial vector of a SIC. One can interpret this as a function of the outer product \varphi\varphi^*, and so one could conceivably relax the problem in order to avoid spurious local minima.
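For what it’s worth, this objective is easy to evaluate directly from the formula above; here is a Python sketch of my own that checks the bound on a random unit vector and on the familiar qubit fiducial with Bloch vector (1,1,1)/\sqrt{3}.

import numpy as np

def sic_potential(phi):
    # p(phi) = sum over j, k of |sum_l phi(l) conj(phi(j+l) phi(k+l)) phi(j+k+l)|^2,
    # with all indices taken mod d.
    d = len(phi)
    l = np.arange(d)
    total = 0.0
    for j in range(d):
        for k in range(d):
            inner = np.sum(phi[l] * np.conj(phi[(j + l) % d] * phi[(k + l) % d]) * phi[(j + k + l) % d])
            total += abs(inner) ** 2
    return total

d = 5
rng = np.random.default_rng(0)
phi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
phi /= np.linalg.norm(phi)
print(sic_potential(phi), 2 / (d + 1))    # a random unit vector sits strictly above the bound

theta = np.arccos(1 / np.sqrt(3))         # qubit fiducial: Bloch vector (1,1,1)/sqrt(3)
fiducial = np.array([np.cos(theta / 2), np.exp(1j * np.pi / 4) * np.sin(theta / 2)])
print(sic_potential(fiducial), 2 / 3)     # both equal 2/(d+1) = 2/3 in dimension 2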

Question: Is there a relaxation of the SIC fiducial potential that allows one to compute larger SIC fiducials?

We already have hundreds of numerical SICs, but it would be interesting if SICs provided yet another application of Burer-Monteiro relaxation (say).
