Algebra, geometry, and combinatorics of subspace packings

Last week, the SIAM Conference on Applied Algebraic Geometry hosted a session on “Algebra, geometry, and combinatorics of subspace packings,” organized by Emily King and me. Sadly, I wasn’t able to attend, but thankfully, most of the speakers gave me permission to post their slides on my blog. Here’s the lineup:

Emily King — Algebra, Geometry, and Combinatorics of Subspace Packings

Romanos Malikiosis — Group frames, full spark, and other topics

John Jasper — Equiangular tight frames from nonabelian groups

Gene Kopp — SIC-POVM existence and the Stark conjectures

Continue reading Algebra, geometry, and combinatorics of subspace packings

Foundations of Data Science Boot Camp, V

This is the fifth (and final) entry to summarize talks in the “boot camp” week of the program on Foundations of Data Science at the Simons Institute for the Theory of Computing, continuing this post. On Friday, we heard talks from Ilya Razenshteyn and Michael Kapralov. Below, I link videos and provide brief summaries of their talks.

Ilya Razenshteyn — Nearest Neighbor Methods

Continue reading Foundations of Data Science Boot Camp, V

Foundations of Data Science Boot Camp, IV

This is the fourth entry to summarize talks in the “boot camp” week of the program on Foundations of Data Science at the Simons Institute for the Theory of Computing, continuing this post. On Thursday, we heard talks from Santosh Vempala and Ilias Diakonikolas. Below, I link videos and provide brief summaries of their talks.

Santosh Vempala — High Dimensional Geometry and Concentration

Continue reading Foundations of Data Science Boot Camp, IV

Foundations of Data Science Boot Camp, III

This is the third entry to summarize talks in the “boot camp” week of the program on Foundations of Data Science at the Simons Institute for the Theory of Computing, continuing this post. On Wednesday, we heard talks from Fred Roosta and Will Fithian. Below, I link videos and provide brief summaries of their talks.

Fred Roosta — Stochastic Second-Order Optimization Methods

Continue reading Foundations of Data Science Boot Camp, III

Foundations of Data Science Boot Camp, II

This is the second entry to summarize talks in the “boot camp” week of the program on Foundations of Data Science at the Simons Institute for the Theory of Computing, continuing this post. On Tuesday, we heard talks from Ken Clarkson, Rachel Ward, and Michael Mahoney. Below, I link videos and provide brief summaries of their talks.

Ken Clarkson — Sketching for Linear Algebra: Randomized Hadamard, Kernel Methods

Continue reading Foundations of Data Science Boot Camp, II

Foundations of Data Science Boot Camp

I’m spending the semester at the Simons Institute for the Theory of Computing as part of the program on Foundations of Data Science. This was the first day of the “boot camp” week, which was organized to acquaint program participants with the key themes of the program. Today, we heard talks from Ravi Kannan and David Woodruff. Below, I link videos and provide brief summaries of their talks.

Ravi Kannan — Foundations of Data Science

Continue reading Foundations of Data Science Boot Camp

Recent Advances in Packing

Last week, I co-organized (with Joey Iverson and John Jasper) a special session on “Recent Advances in Packing” for the AMS Spring Central Sectional Meeting at the Ohio State University. All told, our session had 13 talks that covered various aspects of packing, such as sphere packing, packing points in projective space, applications to quantum physics, and connections with combinatorics. It was a great time! And after the talks, we learned how to throw axes!

What follows is the list of speakers and links to their slides. (I anticipate referencing these slides quite a bit in the near future.) Thanks to all who participated!

Continue reading Recent Advances in Packing

Tight Frames and Approximation 2018

I just returned from an amazing workshop in New Zealand organized by Shayne Waldron. The talks and activities were both phenomenal! Here’s a photo by Emily King that accurately conveys the juxtaposition:

[Photo by Emily King]

A few of the talks gave me a lot to think about, and I wanted to take a moment to record some of these ideas.

Continue reading Tight Frames and Approximation 2018

Talks from the Summer of ’17

This summer, I participated in several interesting conferences. This entry documents my slides and describes a few of my favorite talks from the summer. Here are links to my talks:

UPDATE: SIAM AG17 just posted a video of my talk.

Now for my favorite talks from FoCM, ILAS, SIAM AG17, and SPIE:

Ben Recht — Understanding deep learning requires rethinking generalization

In machine learning, you hope to fit a model that predicts well. To do this, you fit the model to a training set and then evaluate it on a test set. In general, if a simple model fits a large training set pretty well, you can expect the fit to generalize, meaning it will also fit the test set. Conventional wisdom says that if the model fits the training set exactly, then it is probably not simple enough, and so it will not fit the test set very well. According to Ben, this conventional wisdom is wrong. He demonstrated this with observations he made while training neural nets: he allowed the number of parameters to far exceed the size of the training set, fit the training set exactly, and still managed to fit the test set well. He suggested that generalization succeeds here because stochastic gradient descent implicitly regularizes. For reference, in the linear case, stochastic gradient descent (a.k.a. the randomized Kaczmarz method) finds the solution of minimal 2-norm, and it converges faster when that optimal solution has smaller 2-norm. Along these lines, Ben has some work demonstrating that, even in the nonlinear case, fast convergence implies generalization.
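
To make the linear-case remark concrete, here is a minimal numerical sketch (my own illustration, not from Ben's talk): starting from zero, the randomized Kaczmarz iteration on a consistent underdetermined system converges to the minimum 2-norm solution, which we can check against the pseudoinverse solution.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 100                       # far more unknowns than equations
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                       # consistent system with infinitely many solutions

x = np.zeros(n)                      # start at zero, so iterates stay in the row space of A
for _ in range(20000):
    i = rng.integers(m)                      # pick an equation at random
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a        # project onto the hyperplane <a, x> = b[i]

x_min_norm = np.linalg.pinv(A) @ b           # minimum 2-norm solution of Ax = b
print(np.linalg.norm(x - x_min_norm))        # essentially zero
print(np.linalg.norm(x), np.linalg.norm(x_true))   # x has noticeably smaller 2-norm than x_true
```

In other words, among the many interpolating solutions, the iteration implicitly selects the smallest one in 2-norm, which is the sense in which it "regularizes" without any explicit penalty.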

Continue reading Talks from the Summer of ’17

SOFT 2016: Summer of Frame Theory

This month, several experts in frame theory will be visiting my department, and so Matt Fickus and I decided to organize a workshop in the style of AIM. Considering the recent progress we’ve made on equiangular tight frames (ETFs) — namely, one, two, three, and four — we are hoping this workshop will spur further progress in this area. To kick off the month, I asked a few people to prepare hour-long chalk talks, and what follows are the extended abstracts:

1. Introduction to ETFs (Dustin G. Mixon)

Given a d-dimensional Hilbert space \mathbb{H}_d and a positive integer n, we are interested in packing n lines through the origin so that the interior angle between any two is as large as possible. It is convenient to represent each line by a unit vector that spans it, and in doing so, the problem amounts to finding unit vectors \{\varphi_i\}_{i=1}^n that minimize coherence:

\displaystyle{\mu\Big(\{\varphi_i\}_{i=1}^n\Big)=\max_{i\neq j}|\langle \varphi_i,\varphi_j\rangle|.}
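
For concreteness, here is a minimal numerical sketch (my own illustration, not part of the original abstract) that computes the coherence of a candidate packing; the helper coherence() is a name I made up, and the comparison is against the Welch bound, a well-known lower bound on \mu when n \geq d.

```python
import numpy as np

def coherence(Phi):
    """Phi is d-by-n with unit-norm columns; return max over i != j of |<phi_i, phi_j>|."""
    G = np.abs(Phi.conj().T @ Phi)   # absolute inner products between columns
    np.fill_diagonal(G, 0)           # ignore the diagonal entries |<phi_i, phi_i>| = 1
    return G.max()

rng = np.random.default_rng(0)
d, n = 3, 6
Phi = rng.standard_normal((d, n))
Phi /= np.linalg.norm(Phi, axis=0)   # normalize each column to a unit vector

welch = np.sqrt((n - d) / (d * (n - 1)))   # Welch bound on the coherence
print(coherence(Phi), welch)               # a random packing sits well above the bound
```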

This minimization is a nonconvex optimization problem. To construct provably optimal packings, one must prove a lower bound on \mu for a given n and spatial dimension d, and then construct an ensemble that achieves equality in that bound. To date, we know of three bounds that are sharp:

Continue reading SOFT 2016: Summer of Frame Theory