Last summer, I launched an online seminar with Joey Iverson, John Jasper, and Emily King on the theory and applications of harmonic analysis, combinatorics, and algebra. We meet on Tuesdays at 1pm (Eastern time).
At the end of his recent CodEx talk, Gene Kopp posed a problem with a prize attached to it. I was excited to learn about this, so I enlisted both Gene Kopp and Mark Magsino to help me write this blog entry to provide additional details.
First, let $\mathrm{ETF}(d,n)$ denote the set of matrices $\Phi\in\mathbb{C}^{d\times n}$ such that

$$\Phi\Phi^*=\frac{n}{d}\,I_d,\qquad |\Phi^*\Phi|^2=I_n+\frac{n-d}{d(n-1)}\,(J_n-I_n).$$

Here, $*$ denotes conjugate transpose, $|\cdot|^2$ denotes entrywise squared modulus, $I_k$ denotes the $k\times k$ identity matrix, and $J_n$ denotes the $n\times n$ all-ones matrix. In words, the columns of $\Phi$ form an equiangular tight frame (ETF) for $\mathbb{C}^d$ of size $n$.
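Both ETF conditions are easy to check numerically. Here is a minimal NumPy sketch (the helper `is_etf` and the Mercedes–Benz example are my own illustration, not part of the prize problem):

```python
import numpy as np

def is_etf(Phi, tol=1e-10):
    """Check the two ETF conditions for the columns of a d x n matrix Phi."""
    d, n = Phi.shape
    G = Phi.conj().T @ Phi                       # Gram matrix of the columns
    tight = np.allclose(Phi @ Phi.conj().T, (n / d) * np.eye(d), atol=tol)
    mu2 = (n - d) / (d * (n - 1))                # squared Welch bound
    target = np.eye(n) + mu2 * (np.ones((n, n)) - np.eye(n))
    equiangular = np.allclose(np.abs(G) ** 2, target, atol=tol)
    return tight and equiangular

# The "Mercedes-Benz" frame: three unit vectors at 120 degrees in R^2.
theta = 2 * np.pi * np.arange(3) / 3
Phi = np.vstack([np.cos(theta), np.sin(theta)])
print(is_etf(Phi))  # True
```

Here the off-diagonal entries of $|\Phi^*\Phi|^2$ all equal $1/4$, matching $\frac{n-d}{d(n-1)}=\frac{1}{4}$ for $(d,n)=(2,3)$.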
Later this month, Hans Parshall will participate in a summer school on “Sphere Packings and Optimal Configurations.” In preparation for this event, Hans was assigned the task of writing lecture notes that summarize the main results of the following paper:
P. Delsarte, J. M. Goethals, J. J. Seidel,
Spherical codes and designs,
Geometriae Dedicata 6 (1977) 363–388.
I found Hans’ notes to be particularly helpful, so I’m posting them here with his permission. I’ve lightly edited his notes for formatting and hyperlinks.
Without further ado:
Emily King recently launched an online competition to find the best packings of points in complex projective space. The so-called Game of Sloanes is concerned with packing $n$ points in $\mathbb{CP}^{d-1}$ for a range of small dimensions $d$ and numbers $n$ of points. John Jasper, Emily King, and I collaborated to make the baseline for this competition by curating various packings from the literature and then numerically optimizing sub-optimal packings. See our paper for more information:
If you have a packing that improves upon the current leader board, you can submit your packing to the following email address:
asongofvectorsandangles [at] gmail [dot] com
In this competition, you can win money if you find a new packing that achieves equality in the Welch bound; see this paper for a survey of these so-called equiangular tight frames (ETFs).
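To make the bound concrete, here is a short NumPy sketch (the function names are mine) that computes the coherence of a packing alongside the Welch bound it is measured against; the regular simplex of $n=d+1$ points meets the bound with equality:

```python
import numpy as np

def coherence(Phi):
    """Largest |<u_j, u_k>| over distinct (normalized) columns of Phi."""
    U = Phi / np.linalg.norm(Phi, axis=0)        # normalize the columns
    G = np.abs(U.conj().T @ U)
    np.fill_diagonal(G, 0)                       # ignore the diagonal
    return G.max()

def welch_bound(d, n):
    """Welch lower bound on the coherence of n unit vectors in C^d (n > d)."""
    return np.sqrt((n - d) / (d * (n - 1)))

# Regular simplex: columns of I - J/(d+1) span a d-dimensional hyperplane.
d = 3
Phi = np.eye(d + 1) - np.ones((d + 1, d + 1)) / (d + 1)
print(coherence(Phi), welch_bound(d, d + 1))  # both equal 1/d
```

Any packing whose coherence equals `welch_bound(d, n)` is an ETF.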
Let $\mathbb{F}_p$ denote the field with $p$ elements, and let $Q\leq\mathbb{F}_p^\times$ denote the multiplicative subgroup of quadratic residues. For each prime $p\equiv 1\pmod{4}$, we consider the Paley graph $G_p$ with vertex set $\mathbb{F}_p$, where two vertices are adjacent whenever their difference resides in $Q$. For example, the following illustration from Wikipedia depicts one such graph:
The purpose of this blog entry is to discuss recent observations regarding the Paley graph.
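For concreteness, the adjacency matrix of a Paley graph is quick to generate, and one can verify numerically that it is strongly regular with parameters $(p,\frac{p-1}{2},\frac{p-5}{4},\frac{p-1}{4})$. A sketch (the helper name is my own):

```python
import numpy as np

def paley_adjacency(p):
    """Adjacency matrix of the Paley graph on F_p (p prime, p = 1 mod 4)."""
    assert p % 4 == 1
    residues = {(x * x) % p for x in range(1, p)}  # nonzero quadratic residues
    A = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if (i - j) % p in residues:
                A[i, j] = 1
    return A

p = 13
A = paley_adjacency(p)
I, J = np.eye(p, dtype=int), np.ones((p, p), dtype=int)
k, lam, mu = (p - 1) // 2, (p - 5) // 4, (p - 1) // 4
# Strong regularity: A^2 = k*I + lam*A + mu*(J - I - A).
print((A @ A == k * I + lam * A + mu * (J - I - A)).all())  # True
```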
Last week, the SIAM Conference on Applied Algebraic Geometry hosted a session on “Algebra, geometry, and combinatorics of subspace packings,” organized by Emily King and myself. Sadly, I wasn’t able to attend, but thankfully, most of the speakers gave me permission to post their slides on my blog. Here’s the lineup:
Last week, I co-organized (with Joey Iverson and John Jasper) a special session on “Recent Advances in Packing” for the AMS Spring Central Sectional Meeting at the Ohio State University. All told, our session had 13 talks that covered various aspects of packing, such as sphere packing, packing points in projective space, applications to quantum physics, and connections with combinatorics. It was a great time! And after the talks, we learned how to throw axes!
What follows is the list of speakers and links to their slides. (I anticipate referencing these slides quite a bit in the near future.) Thanks to all who participated!
I just returned from an amazing workshop in New Zealand organized by Shayne Waldron. The talks and activities were both phenomenal! Here’s a photo by Emily King that accurately conveys the juxtaposition:
A few of the talks gave me a lot to think about, and I wanted to take a moment to record some of these ideas.
Joey Iverson recently posted our latest paper with John Jasper on the arXiv. This paper can be viewed as a sequel of sorts to our previous paper, in which we introduced the idea of hunting for Gram matrices of equiangular tight frames (ETFs) in the adjacency algebras of association schemes, specifically group schemes. In this new paper, we focus on the so-called Schurian schemes. This proved to be a particularly fruitful restriction: we found an alternate construction of Hoggar’s lines, we found an explicit representation of the “elusive” packing from the real packings paper (based on a private tip from Henry Cohn), we found a packing involving the Mathieu group (this one beating the corresponding packing in Sloane’s database), we found some low-dimensional mutually unbiased bases, and we recovered nearly all small-sized ETFs. In addition, we constructed the first known infinite family of ETFs with Heisenberg symmetry; while these aren’t SIC-POVMs, we suspect they are related to the objects of interest in Zauner’s conjecture (as in this paper, for example). This blog entry briefly describes the main ideas in the paper.
This summer, I participated in several interesting conferences. This entry documents my slides and describes a few of my favorite talks from the summer. Here are links to my talks:
- Packings in real projective spaces, FoCM and SPIE
- Explicit restricted isometries, ILAS
- Probably certifiably correct k-means clustering, ILAS
- Equiangular tight frames from association schemes, SIAM AG17
- Open problems in finite frame theory, SIAM AG17
UPDATE: SIAM AG17 just posted a video of my talk.
Now for my favorite talks from FoCM, ILAS, SIAM AG17 and SPIE:
In machine learning, you fit a model in the hope that it will be good at prediction. To do this, you fit the model to a training set and then evaluate it on a test set. In general, if a simple model fits a large training set pretty well, you can expect the fit to generalize, meaning it will also fit the test set. By conventional wisdom, if the model happens to fit the training set exactly, then your model is probably not simple enough, meaning it will not fit the test set very well. According to Ben, this conventional wisdom is wrong. He demonstrated this with observations he made while training neural nets: he allowed the number of parameters to far exceed the size of the training set, thereby fitting the training set exactly, and yet he still managed to fit the test set well. He suggested that generalization succeeds here because stochastic gradient descent implicitly regularizes.

For reference, in the linear case, stochastic gradient descent (aka the randomized Kaczmarz method) finds the solution of minimal 2-norm, and it converges faster when the optimal solution has a smaller 2-norm. Along these lines, Ben has some work demonstrating that, even in the nonlinear case, fast convergence implies generalization.
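The linear-case claim is easy to check numerically: randomized Kaczmarz started at the origin converges to the minimum 2-norm solution of an underdetermined system, i.e., the pseudoinverse solution. A quick sketch (the dimensions and iteration count are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 100                       # underdetermined: many exact solutions
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Randomized Kaczmarz: repeatedly project the iterate onto one constraint,
# sampling row i with probability proportional to its squared norm.
x = np.zeros(n)                      # start at the origin (row space of A)
probs = np.linalg.norm(A, axis=1) ** 2
probs /= probs.sum()
for _ in range(20000):
    i = rng.choice(m, p=probs)
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a

x_min_norm = np.linalg.pinv(A) @ b   # minimum 2-norm solution
print(np.linalg.norm(x - x_min_norm))  # essentially zero
```

Since the iterate never leaves the row space of $A$, it can only converge to the minimum-norm solution among the infinitely many exact fits.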