Jesse Peterson and I recently arxiv'd our paper for Wavelets and Sparsity XVI at SPIE this year. This paper focuses on learning functions of the form

$f\colon\mathbb{Z}_2^n\to\{\pm 1\},\qquad f(x)=\mathrm{sgn}\Big(\sum_{j=1}^k a_j(-1)^{\langle b_j,x\rangle}\Big),$

where $k$ is small, $a_j\in\mathbb{R}$, and $b_j\in\mathbb{Z}_2^n$. Notice that any such Boolean function can be viewed as a labeling function on strings of $n$ bits, and so learning the function from labeled instances amounts to a binary classification problem.

If we identify $f$ with the vector of its $2^n$ values, then the $a_j$'s are essentially the big entries of the Walsh–Hadamard transform of $f$, and these entries are indexed by the $b_j$'s. As such, functions of this form are essentially the Boolean functions with concentrated spectra. These functions have been shown to well approximate the Boolean functions with sufficiently simple circuit implementations (e.g., see one, two, three), and given the strong resemblance between Boolean circuits and neural networks, the following hypothesis seems plausible:
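To make the spectral-concentration picture concrete, here is a small sketch (my own toy example, not code from the paper) that builds a function of this form with $k=3$ terms on $n=8$ bits and computes its Walsh–Hadamard transform; the spectrum ends up supported on only a handful of the $2^n$ frequencies, with large entries at the $b_j$'s:

```python
import numpy as np

# Toy illustration: f(x) = sgn(sum_j a_j * (-1)^{<b_j, x>}) with k = 3
# terms on n = 8 bits. The a_j's and b_j's below are arbitrary choices.
n, k = 8, 3
N = 2 ** n
a = np.array([1.0, 0.8, 0.5])
b = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0, 0, 0],
              [1, 0, 1, 1, 0, 0, 0, 0]])

# Enumerate all x in Z_2^n as rows of a (2^n, n) bit array,
# where row i holds the binary digits of i.
x = (np.arange(N)[:, None] >> np.arange(n)) & 1

# f as a vector of +/-1 values indexed by x.
f = np.sign(((-1.0) ** (x @ b.T)) @ a)

# Walsh-Hadamard transform: fhat[y] = (1/2^n) * sum_x f(x) * (-1)^{<y, x>}.
H = (-1.0) ** (x @ x.T)
fhat = H @ f / N

# Since f depends on x only through the k parities <b_j, x>, its spectrum
# lives on the span of the b_j's: at most 2^k of the 2^n entries are nonzero.
print(np.count_nonzero(np.abs(fhat) > 1e-9))  # → 4

# The entries indexed by the b_j's are among the big ones.
idx = b @ (1 << np.arange(n))
print(np.abs(fhat[idx]))
```

With these particular coefficients the transform is supported on just four frequencies: the three $b_j$'s and their XOR (the only odd-degree characters of the three parities), each with coefficient of magnitude $1/2$. This is exactly the "concentrated spectrum" regime the paper targets.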