Vlad Voroninski recently posted an arXiv preprint with Paul Hand that provides compressed sensing guarantees using a neural net-based generative signal model. This offers some theoretical justification for the shocking empirical results presented in the “Compressed sensing using generative models” paper, which demonstrates signal recovery from 10 times fewer measurements than conventional compressed sensing requires. I was especially excited to see this paper, having recently read Michael Elad’s editorial on deep learning. To learn more, I interviewed Vlad (see below); I’ve lightly edited his responses for formatting and hyperlinks:
DGM: What is the origin story of this project? Were you and Paul inspired by the “Compressed sensing using generative models” paper?
VV: I have been working extensively with applied deep learning for the last year or so, and have been inspired by recent applications of deep generative image priors to classical inverse problems, such as the super-resolution work by Fei-Fei Li et al. Moreover, recent work on regularizing with deep generative priors for synthesizing the preferred inputs to neural activations, by Yosinski et al., made me optimistic that GAN-based generative priors are capturing sophisticated natural image structure (the synthetic images obtained in this paper look incredibly realistic).
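To get a feel for the empirical risk formulation, here is a minimal numpy sketch of my own (a toy, not code from the paper): a random two-layer ReLU net stands in for a trained generator, we take Gaussian measurements of a generated signal, and we run plain gradient descent on the measurement misfit over the latent code.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 50, 25                            # latent dim, signal dim, measurements

# a fixed random two-layer ReLU "generator" (stand-in for a trained net)
W1 = rng.standard_normal((100, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, 100)) / np.sqrt(100)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = A @ G(z_true)                              # compressive measurements of G(z_true)

# empirical risk minimization over the latent code z
loss = lambda z: 0.5 * np.sum((A @ G(z) - y) ** 2)
z = rng.standard_normal(k)
loss0 = loss(z)
for _ in range(5000):
    h = W1 @ z
    r = A @ (W2 @ np.maximum(h, 0.0)) - y
    z -= 1e-3 * (W1.T @ ((h > 0) * (W2.T @ (A.T @ r))))   # chain-rule gradient

print(loss0, loss(z))   # the misfit drops as z descends toward a minimizer
```

The random, roughly expansive weights above loosely mimic the assumptions under which Hand and Voroninski analyze this non-convex objective; a trained GAN generator would replace the random net in practice.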
Continue reading Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk
Last week, I visited Joey Iverson at the University of Maryland, and we spent a lot of time working through different approaches to Zauner’s conjecture. In general, my relationship with this problem is very similar to Steve Flammia’s description, as paraphrased by smerkel on Physics Stack Exchange:
[Flammia described] the SIC-POVM problem as a “heartbreaker” because every approach you take seems super promising but then inevitably fizzles out without really giving you a great insight as to why.
Case in point, Joey and I identified a promising approach involving ideas from our association schemes paper. We were fairly optimistic, and Joey even bet me $5 that our approach would work. Needless to say, I now have this keepsake from Joey:
While our failure didn’t offer any great insights (as Flammia predicted), the experience forced me to review the literature on Zauner’s conjecture a lot more carefully. A few things caught my eye, and I’ll discuss them here. Throughout, SIC denotes “symmetric informationally complete line set” and WH denotes “the Weyl-Heisenberg group.”
Continue reading Notes on Zauner’s conjecture
Two years ago, I blogged about Tuan-Yow Chien’s PhD thesis, which proved Zauner’s conjecture in dimension 17. The idea was to exploit certain conjectures on the field structure of SIC-POVM fiducial vectors so as to round numerical solutions to exact solutions. This week, the arXiv announced Chien’s latest paper (coauthored with Appleby, Flammia and Waldron), which extends this work to find exact solutions in 8 new dimensions.
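The rounding step can be illustrated with a toy computation of my own (a brute-force stand-in for the lattice-based integer relation methods, such as LLL or PSLQ, that one would actually use): given a number produced by a numerical solver, search for a small integer polynomial relation it satisfies, thereby recovering an exact algebraic description.

```python
from itertools import product
from math import sqrt

# pretend this came out of a numerical solver to high precision
x = 1 + sqrt(2)

# brute-force search for small integers with c0 + c1*x + c2*x^2 = 0
best = None
for c in product(range(-5, 6), repeat=3):
    if c == (0, 0, 0):
        continue
    if abs(c[0] + c[1] * x + c[2] * x * x) < 1e-9:
        best = c
        break
print(best)   # an integer relation certifying x is a root of a quadratic
```

Real fiducial vectors live in far larger number fields, which is why the exact print-outs get so long and why serious lattice reduction is needed instead of brute force.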
The following line from the introduction caught my eye:
For instance the print-out for exact fiducial 48a occupies almost a thousand A4 pages (font size 9 and narrow margins).
As my previous blog entry illustrated, the description length of SIC-POVM fiducial vectors appears to grow rapidly with the dimension. However, it seems that the rate of growth is much better than I originally thought. Here’s a plot of the description lengths of the known fiducial vectors (the new ones due to ACFW17, available here, appear in red):
Continue reading Zauner’s conjecture is true in dimensions 18, 20, 21, 30, 31, 37, 39 and 43
Soledad Villar recently posted her latest paper on the arXiv (joint work with Afonso Bandeira, Andrew Blumberg and Rachel Ward). This paper reduces an instance of cutting-edge data science (specifically, shape matching and point-cloud comparison) to a semidefinite program, and then investigates fast solvers using non-convex local methods. (Check out her blog for an interactive illustration of the results.) Soledad is on the job market this year, and I read about this paper in her research statement. I wanted to learn more, so I decided to interview her. I’ve lightly edited her responses for formatting and hyperlinks:
Continue reading A polynomial-time relaxation of the Gromov-Hausdorff distance
Joey Iverson recently posted our paper with John Jasper on the arXiv. As the title suggests, this paper explains how to construct optimal line packings (specifically, equiangular tight frames) using association schemes. In particular, we identify many schemes whose adjacency algebra contains the Gram matrix of an ETF. This unifies several existing constructions, and also enabled us to construct the first known infinite family of ETFs that arise from nonabelian groups.
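As a classical warm-up in the same spirit (my choice of example, not one of the new constructions from the paper), the quadratic-residue structure of F_5 assembles into a symmetric conference matrix of order 6, and perturbing the identity by it yields the Gram matrix of an ETF of 6 vectors in R^3: a two-eigenvalue matrix with unit diagonal and constant-modulus off-diagonal entries.

```python
import numpy as np

q = 5
# quadratic residue character on F_q (0 on 0, +1 on squares, -1 otherwise)
chi = lambda a: 0 if a % q == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

# symmetric Paley conference matrix of order q + 1 = 6
C = np.zeros((q + 1, q + 1))
C[0, 1:] = C[1:, 0] = 1
for i in range(q):
    for j in range(q):
        C[1 + i, 1 + j] = chi(j - i)
assert np.allclose(C @ C, q * np.eye(q + 1))   # conference matrix identity

# Gram matrix of an ETF of 6 vectors in R^3: unit diagonal, off-diagonal
# entries of modulus 1/sqrt(5), and exactly two eigenvalues (0 and 2)
G = np.eye(q + 1) + C / np.sqrt(q)
print(np.round(np.linalg.eigvalsh(G), 6))   # eigenvalues 0 and 2, each thrice
```

The two-eigenvalue check is the key point: a positive semidefinite matrix with unit diagonal, constant off-diagonal modulus, and two eigenvalues is precisely the Gram matrix of an ETF.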
John is on the job market this year, and when reading his research statement, I was struck by his discussion of our paper, so I asked him to expand his treatment to a full-blown blog entry. Without further ado, here is John’s guest blog post (which I’ve lightly edited for hyperlinks and formatting):
Continue reading Optimal line packings from association schemes
The polynomial method has made some waves recently (see this and that, for instance), and last week, Boris Alexeev gave a very nice talk on various applications of this method. This post is loosely based on his talk. All errors are my own.
It’s hard to pin down what exactly the polynomial method is. It’s a technique in algebraic extremal combinatorics, where the goal is to provide bounds on the sizes of objects with certain properties. The main idea is to identify the desired cardinality with some complexity measure of an algebraic object (e.g., the dimension of a vector space, the degree of a polynomial, or the rank of a tensor), and then use algebraic techniques to estimate that complexity measure. If at some point you use polynomials, then you might say you applied the polynomial method.
What follows is a series of instances of this meta-method.
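The counting step behind many of these arguments is easy to demonstrate numerically: if a set S has fewer points than the dimension of the space of polynomials of degree at most d, then some nonzero such polynomial vanishes on all of S. Here is a toy illustration of my own (not from Boris’s talk), finding the vanishing polynomial as a null vector of a linear system.

```python
import numpy as np
from itertools import combinations_with_replacement

# polynomials of degree <= d in n variables, indexed by monomials
n, d = 2, 3
monos = [m for total in range(d + 1)
         for m in combinations_with_replacement(range(n), total)]
dim = len(monos)                          # = binom(n + d, d) = 10 here

rng = np.random.default_rng(1)
S = rng.standard_normal((dim - 1, n))     # 9 points: one fewer than dim

# evaluation matrix: rows = points, columns = monomials
V = np.array([[np.prod(p[list(m)]) for m in monos] for p in S])

# a 9x10 system always has a nonzero null vector: the polynomial's coefficients
coeffs = np.linalg.svd(V)[2][-1]
print(np.abs(V @ coeffs).max())           # ~ 0: the polynomial vanishes on S
```

In actual applications the field is usually finite and the punchline is a cardinality bound, but the linear-algebra engine (more points than monomials, or vice versa) is the same.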
Continue reading Introduction to the polynomial method (and other similar things)
Equiangular tight frames (ETFs) are optimal packings of lines through the origin. At the moment, they are the subject of a rapidly growing literature. In fact, there have been quite a few updates since my last post on this subject (less than five months ago), and I’ve revamped the table of ETFs accordingly. What follows is a brief discussion of the various developments:
1. There is a complex ETF of 76 vectors in dimension 19
See this paper. Last time, I mentioned a recent proof that there is no real ETF of 76 vectors in dimension 19. It turns out that a complex ETF of this size does exist. To prove this, it actually seems more natural to view the vectors as columns of a matrix whose row vectors sum to zero. As a lower-dimensional example, consider the following matrix:
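To convey the flavor with the smallest example I know (my choice, not necessarily the matrix from the post), take the Mercedes-Benz frame: three unit vectors in R^2 whose synthesis matrix has rows summing to zero, and which form an equiangular tight frame.

```python
import numpy as np

# Mercedes-Benz frame: 3 unit vectors in R^2 at mutual angle 120 degrees
F = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
assert np.allclose(F.sum(axis=1), 0)           # row vectors sum to zero

G = F.T @ F                                    # Gram matrix of the 3 vectors
print(np.round(G, 3))      # unit diagonal, off-diagonal entries all -1/2
assert np.allclose(F @ F.T, 1.5 * np.eye(2))   # tight: F F^T = (3/2) I
```

Equiangularity is visible in the Gram matrix (all off-diagonal entries have the same modulus), and tightness in the scaled-identity frame operator.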
Continue reading Recent developments in equiangular tight frames, II