Recent advances in mathematical data science

Part of the experience of giving a talk at Oberwolfach is documentation. First, they ask you to handwrite the abstract of your talk into a notebook of sorts for safekeeping. Later, they ask you to tex up an extended abstract for further documentation. This time, I gave a longer version of my SPIE talk (here are the slides). Since I posted my extended abstract on my blog last time, I figured I’d do it again:

This talk describes recent work on three different problems of interest in mathematical data science, namely, compressive classification, k-means clustering, and deep learning. (Based on three papers: one, two, three.)

First, compressive classification is a problem that comes on the heels of compressive sensing. In compressive sensing, one exploits the underlying structure of a signal class in order to exactly reconstruct any signal from the class given very few linear measurements of that signal. However, many applications do not require an exact reconstruction of the signal, but rather a classification of it (for example, is this image a picture of a cat, or of a dog?). As such, it makes intuitive sense that the classification task might succeed given far fewer measurements than are necessary for compressive sensing.
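
To make this intuition concrete, here is a toy sketch (my own stand-in, not the classifier from the paper): two hypothetical signal classes in \mathbb{R}^{1000} are told apart by a nearest-centroid rule after only a handful of random linear measurements. The templates, noise level, and measurement count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 5                    # ambient dimension, number of measurements

# Two hypothetical signal classes: noisy copies of two fixed templates.
mu_cat = rng.standard_normal(n)
mu_dog = rng.standard_normal(n)
sigma = 0.3                       # within-class noise level

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix

def sample(mu):
    return mu + sigma * rng.standard_normal(n)

# Estimate the class centroids in measurement space from training data.
c_cat = np.mean([Phi @ sample(mu_cat) for _ in range(50)], axis=0)
c_dog = np.mean([Phi @ sample(mu_dog) for _ in range(50)], axis=0)

# Classify fresh signals by the nearer compressed centroid.
errors, trials = 0, 200
for _ in range(trials):
    label = rng.integers(2)                       # 0 = cat, 1 = dog
    y = Phi @ sample(mu_dog if label else mu_cat)
    guess = int(np.linalg.norm(y - c_dog) < np.linalg.norm(y - c_cat))
    errors += (guess != label)

print(f"test error with m = {m} measurements: {errors / trials:.1%}")
```

None of this is the paper's actual setup; it only illustrates why a handful of measurements can suffice for a decision even when reconstructing a 1000-dimensional signal from them is hopeless.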

Continue reading Recent advances in mathematical data science

Applied Harmonic Analysis and Sparse Approximation

Last week, I attended this workshop at Oberwolfach. The weather was great, and I was rather impressed by the quality of the talks. Throughout the workshop, I paid special attention to the various problems that different people are thinking about. In what follows, I briefly discuss several of these problems.

Continue reading Applied Harmonic Analysis and Sparse Approximation

Frames and Combinatorial & Algebraic Geometry

This last week, I attended a workshop at Universität Bremen on Frames and Combinatorial & Algebraic Geometry. The organizers (Emily King and Eva Maria Feichtner) did a phenomenal job, and I was impressed by the quality of the talks (see this page for the slides). Below are a few of the things that we discussed throughout the workshop:

— Strohmer’s Problem —

At this year’s SampTA conference, Thomas Strohmer posed an interesting frame theory problem during his plenary talk. On the first day of the workshop, Ole Christensen suggested that we study the problem, and it ended up being one of our main focuses. Here’s a technical statement of the problem sans frame-theoretic jargon:

Strohmer’s Problem. Let A be a known n\times d matrix, let D be an unknown n\times n diagonal matrix, and let X be an unknown d\times m matrix. Under what conditions does Y=DAX uniquely determine D and/or X?

This problem is motivated by the real-world problem of self-calibration (see for example this blog entry). In particular, the columns of X represent unknown test signals, whereas D represents unknown parameters of a linear sensor DA. The goal of self-calibration is to calibrate your sensor without requiring knowledge of the input (this is particularly important for applications like deep-space imaging).
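
One thing is clear before imposing any conditions: uniqueness can only hold up to a scalar, since rescaling D and X in opposite directions leaves the data unchanged. Here is a minimal numerical sketch of this ambiguity (the dimensions and the scale c below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 6, 4, 3

A = rng.standard_normal((n, d))          # known sensing matrix
D = np.diag(rng.standard_normal(n))      # unknown calibration parameters
X = rng.standard_normal((d, m))          # unknown test signals

Y = D @ A @ X                            # observed data

c = 2.7
Y_rescaled = (c * D) @ A @ (X / c)       # different (D, X), same observation

print(np.allclose(Y, Y_rescaled))        # True
```

Any uniqueness result for this problem must therefore be understood modulo this scaling.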

Continue reading Frames and Combinatorial & Algebraic Geometry

A relaxation of deep learning?

Jesse Peterson and I recently arxiv’d our paper for Wavelets and Sparsity XVI at SPIE this year. This paper focuses on learning functions f\colon\{\pm1\}^n\rightarrow\{\pm1\} of the form

\displaystyle{f(x_1,\ldots,x_n) = \mathrm{sign}\bigg(\sum_{i=1}^ka_i\prod_{j\in S_i}x_j\bigg)}, \qquad (*)

where k is small, a_1,\ldots,a_k\in\mathbb{R}, and S_1,\ldots,S_k\subseteq\{1,\ldots,n\}. Notice that any such Boolean function can be viewed as a labeling function of strings of n bits, and so learning the function from labeled instances amounts to a binary classification problem.
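
For concreteness, here is a toy instance of (*) used to label random bit strings; the particular k, a_i's, and S_i's are my own picks, not an example from the paper:

```python
import numpy as np

n = 8                                    # number of bits
a = [1.5, -2.0, 0.7]                     # coefficients a_1, ..., a_k
S = [{0, 3}, {1, 4, 6}, {2}]             # index sets S_1, ..., S_k (0-indexed)

def f(x):
    """Label a string x in {-1, +1}^n via the form (*)."""
    total = sum(a_i * np.prod([x[j] for j in S_i])
                for a_i, S_i in zip(a, S))
    return int(np.sign(total))           # these a_i's never sum to zero

rng = np.random.default_rng(2)
X = rng.choice([-1, 1], size=(5, n))     # five random bit strings
print([f(x) for x in X])                 # their labels in {-1, +1}
```

The learning problem is then to recover a function of this form (equivalently, enough of the pairs (a_i, S_i) to reproduce the labels) from a batch of labeled strings.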

If we identify \{\pm1\}^n with \mathbb{Z}_2^n, then the a_i's are essentially the big entries of the Walsh–Hadamard transform of f, and these entries are indexed by the S_i's. As such, functions of the form (*) are essentially the Boolean functions with concentrated spectra. Such functions have been shown to well approximate the Boolean functions that admit sufficiently simple circuit implementations (e.g., see one, two, three), and given the strong resemblance between Boolean circuits and neural networks, the following hypothesis seems plausible:

Continue reading A relaxation of deep learning?

Conjectures from SampTA

Back in May, I attended this year’s SampTA at American University. I spoke in a special session on phase retrieval, and as luck would have it, Cynthia Vinzant spoke in the same session about her recent solution of the 4M-4 conjecture. As you might expect, I took a moment during my talk to present the award I promised for the solution:

[photo of the award presentation]

Recall that Cynthia (and coauthors) first proved part (a) of the conjecture, and then recently disproved part (b). During her talk, she also provided a refinement of part (b). Before stating the refined conjecture, note that injectivity of the mapping x\bmod\mathbb{T}\mapsto |Ax|^2 is a property of the column space \mathrm{im}(A).
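
As a quick sanity check on the "mod \mathbb{T}" in that statement, here is a numerical verification (mine, not from the talk) that intensity measurements are blind to a global phase:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 4                             # signal dimension (as in the 4M-4 conjecture)
N = 4 * M - 4                     # number of measurements

A = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)

theta = 0.8                       # an arbitrary global phase
x_rot = np.exp(1j * theta) * x

print(np.allclose(np.abs(A @ x) ** 2, np.abs(A @ x_rot) ** 2))   # True
```

This invariance is exactly why injectivity is posed on equivalence classes x\bmod\mathbb{T} rather than on vectors.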

Continue reading Conjectures from SampTA

Three Paper Announcements

I’ve been pretty busy lately, writing papers and doing research with visitors. These announcements serve as a quick summary of what I’ve been up to:

1. Tables of the existence of equiangular tight frames (with Matthew Fickus). Today, there’s quite a bit known about equiangular tight frames (ETFs), but what is known seems to be scattered across different papers. This paper surveys everything that is known, and tabulates all of the known real and complex ETFs with sufficiently few vectors in sufficiently small dimension. The tables were generated by coding up existence theorems in MATLAB so as to minimize errors. This serves as a “solution” to problem 21 in this documentation of the open problems discussed at the AIM workshop Frame theory intersects geometry. Recently, Matt and I have made a few ETF discoveries with John and Jesse, so you can expect these tables to be updated after we announce those discoveries in the coming months.
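
The tables certify existence through theorems rather than numerics, but as a quick illustration of what a table entry asserts, here is a checker (my own sketch, not the MATLAB behind the paper) for the two defining properties, equiangularity and tightness, applied to the three-vector "Mercedes-Benz" frame in \mathbb{R}^2:

```python
import numpy as np

def is_etf(F, tol=1e-10):
    """Check whether the columns of F form an equiangular tight frame."""
    N, M = F.shape
    G = F.conj().T @ F                          # Gram matrix of the columns
    if not np.allclose(np.diag(G), 1, atol=tol):
        return False                            # columns must be unit norm
    off = np.abs(G[~np.eye(M, dtype=bool)])     # off-diagonal inner products
    equiangular = np.allclose(off, off[0], atol=tol)
    tight = np.allclose(F @ F.conj().T, (M / N) * np.eye(N), atol=tol)
    return equiangular and tight

# Three unit vectors at 120-degree angles in the plane.
F = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
print(is_etf(F))                                # True
```

Equivalently, a unit norm tight frame is an ETF precisely when its worst-case coherence meets the Welch bound with equality.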

Continue reading Three Paper Announcements

The Signal and the Noise

I recently finished Nate Silver‘s famous book. Some parts were more fun to read than others, but overall, it was worth the read. I was impressed by the apparent breadth of Nate’s perspective, and he did a good job of pointing out how bad we are at predicting certain things (and of explaining some of the bottlenecks).

Based on the reading, here’s a brief list of stars that need to align in order to succeed at prediction:

Continue reading The Signal and the Noise