This is the fifth (and final) entry to summarize talks in the “boot camp” week of the program on Foundations of Data Science at the Simons Institute for the Theory of Computing, continuing this post. On Friday, we heard talks from Ilya Razenshteyn and Michael Kapralov. Below, I link videos and provide brief summaries of their talks.
Ilya Razenshteyn — Nearest Neighbor Methods
The nearest neighbor search problem first preprocesses n points P in some metric space with a distance scale r>0; then, when queried with a new point q in the metric space, the output should be a member of P that is within r of q. The best known solutions to this exact problem lean on substantial preprocessing that requires exponential space. However, many settings don’t require the output to be within r of q, but only within cr of q for some approximation parameter c>1. This motivates approximate nearest neighbor (ANN) search. In his talk, Ilya discussed data-oblivious methods and various data-aware methods. The talk moved from the Hamming metric space, to $\ell_1$, to $\ell_2$, to $\ell_\infty$, and finally to general metric spaces.
To solve ANN for the Hamming metric space, randomly sample k coordinates, and consider all data points that exactly match q in these coordinates. We can scale k so that O(1) points match this specification in expectation, and then we determine which of these points is closest to q. We may repeat this process roughly $n^{1/c}$ times to succeed with probability 0.99. In general, one might consider a locality-sensitive hash (LSH), which is a random partition of the metric space such that nearby points collide with high probability and far-away points are separated with high probability. The above coordinate-sampling hash is an example of LSH, and it can be extended to $\ell_1$. For the unit sphere in $\mathbb{R}^d$, one may partition according to random hyperplanes (producing intuitive, but sub-optimal, results), or draw points at random from the unit sphere and then carve out space according to their Voronoi regions.
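To make the coordinate-sampling scheme concrete, here is a minimal toy implementation for Hamming space. The function names, the brute-force distance check within each bucket, and the parameter choices are my own illustration, not from the talk; a real implementation would tune k and the number of repetitions to the scale r and approximation factor c.

```python
import random

def build_tables(points, k, num_tables, seed=0):
    """Hash each point by k randomly chosen coordinates; repeat num_tables times."""
    rng = random.Random(seed)
    d = len(points[0])
    tables = []
    for _ in range(num_tables):
        coords = rng.sample(range(d), k)      # one random coordinate subset per table
        table = {}
        for p in points:
            key = tuple(p[i] for i in coords)
            table.setdefault(key, []).append(p)
        tables.append((coords, table))
    return tables

def query(tables, q):
    """Among points that exactly match q on some table's coordinates,
    return the one closest to q in Hamming distance."""
    best, best_dist = None, float("inf")
    for coords, table in tables:
        key = tuple(q[i] for i in coords)
        for p in table.get(key, []):
            dist = sum(a != b for a, b in zip(p, q))
            if dist < best_dist:
                best, best_dist = p, dist
    return best
```

Note that each table only retrieves points agreeing with q on its sampled coordinates, so a near neighbor is found only with some probability per table; repeating over independent tables boosts the success probability.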
For a data-dependent alternative, we first observe that Voronoi LSH works best when the data points are well spread over the sphere. If instead the data contains dense clusters, we may remove those clusters until the remaining points are well spread over the sphere and apply Voronoi LSH. Then we can recurse on the removed clusters. See this paper for more information.
For more general metrics, one is inclined to leverage a bi-Lipschitz embedding into $\ell_1$ or $\ell_2$, but this is not feasible for many metric spaces. Sometimes it’s easier to embed into $\ell_\infty$, and for this space, there is a data-dependent ANN algorithm (see this paper). This is based on a fundamental dichotomy: for every n-point dataset, either there is a relatively small cube containing a constant fraction of the points, or there exists a coordinate that splits the dataset into balanced parts. This dichotomy suggests a recursive ANN algorithm.
The above dichotomy can be replicated for general metric spaces. Define the cutting modulus to be the smallest number $\Xi$ such that, for every n-vertex graph embedded into the space with edges of length at most $K$, either there is a ball of radius $\Xi K$ containing a constant fraction of the vertices, or the graph has an $\epsilon$-sparse cut. While the cutting modulus is difficult to compute for general metric spaces, it brings a nice intuition: ANN is “easy” for metric spaces that don’t contain large expanders. See this paper for more information. Unfortunately, this result uses the cell-probe model of computation, which can substantially underestimate runtime. In order to move beyond this model, one would need to somehow ensure that there exist “nice cuts” at each iteration, e.g., the coordinate-based cuts in the $\ell_\infty$ case.
Michael Kapralov — Data streams
In many applications, the dataset is so large that we can only look at the data once, and we are stuck with much smaller local memory. What sort of statistics about the data can be approximated with such constraints?
The first problem of this sort that Michael discussed was the distinct elements problem. Here, we are told that every member of the data lies in $\{1,\dots,n\}$, and given a single pass of the data with only polylogarithmic storage, we are expected to approximate the number of distinct elements seen (i.e., the size of the support of the histogram of the data) within a factor of $1+\epsilon$ and with success probability 0.99, say. To solve this, first pass to a decision version of the problem: can you tell whether the number of distinct elements is bigger than $T$ or smaller than, say, $T/2$, for some threshold $T$? To solve this, pick $S \subseteq \{1,\dots,n\}$ by including each element independently with probability $1/T$. Then we can maintain a count of how much of the data lies in S. (A positive count is highly unlikely if the number of distinct elements is much less than T.) We can boost this signal by taking independent choices of S.
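The sampling test above can be sketched in a few lines. This toy version builds each set S explicitly over the universe, which of course defeats the small-space goal; a real streaming algorithm would define S implicitly by a hash function and run all trials during a single pass. The threshold 0.4 and trial count are illustrative choices of mine.

```python
import random

def decide_distinct(stream, universe, T, trials=40, seed=0):
    """Boosted decision test: is the number of distinct elements well above T,
    or well below it?"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Include each universe element in S independently with probability 1/T.
        S = {x for x in universe if rng.random() < 1.0 / T}
        # A trial is "positive" if the stream hits S at all.
        if any(x in S for x in stream):
            hits += 1
    # With d distinct elements, each trial is positive with probability
    # 1 - (1 - 1/T)^d: near 1 when d >> T, and roughly d/T when d << T.
    return hits / trials > 0.4
```

Running the test with d much larger than T returns True with overwhelming probability, and with d much smaller than T it returns False, which is exactly the separation the boosting buys.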
Now consider a problem in which we stream the edges of a graph, and after seeing all of the edges, we are asked to approximate the size of some cut in the graph. (The cut query comes after seeing the edges!) Consider the matrix B with rows indexed by pairs of vertices and columns indexed by vertices. Say the row of B indexed by $\{u,v\}$ is $e_u - e_v$ if $\{u,v\}$ is an edge in the graph, and zero otherwise (the overall sign doesn’t matter here). Then for every subcollection $C$ of vertices, the number of edges in the graph between $C$ and its complement equals the squared 2-norm of $B\mathbf{1}_C$, which can be maintained with the help of a JL projection S with random $\pm 1/\sqrt{k}$ entries (known as the AMS sketch): $\|SBx\|_2^2 \approx \|Bx\|_2^2$. As such, we may maintain the matrix SB through one pass of the data, and then apply $\mathbf{1}_C$ to the result after receiving a cut query C.
The second half of the talk covered sketching methods for quantitative versions of these problems. For example, suppose you wanted to keep track of the k “heavy hitters” in the data’s histogram, i.e., the k largest members of the support of the histogram (under the assumption that they account for the bulk of the histogram’s energy). Then one may sketch using the Count Sketch algorithm. Going back to graphs, suppose you wanted to compute a graph sparsifier. Michael discussed how one can leverage graph sketches to iteratively improve crude sparsifiers by estimating “heavy hitter” edges in terms of effective resistance.
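To close, here is a compact toy Count Sketch: each of d rows hashes an item to one of w buckets with a random sign, and a frequency is estimated as the median over rows of the signed bucket value. Deriving the bucket and sign from `blake2b` is my own implementation convenience, not part of the algorithm as presented.

```python
import hashlib
import statistics

class CountSketch:
    """Toy Count Sketch: d (bucket, sign) hash pairs over w buckets each."""

    def __init__(self, d=5, w=128, seed=0):
        self.d, self.w, self.seed = d, w, seed
        self.table = [[0] * w for _ in range(d)]

    def _hash(self, x, row):
        # Derive a bucket index and a +/-1 sign for item x in the given row.
        dig = hashlib.blake2b(f"{self.seed}:{row}:{x}".encode(),
                              digest_size=8).digest()
        v = int.from_bytes(dig, "big")
        return v % self.w, (1 if v & (1 << 40) else -1)

    def update(self, x, count=1):
        for row in range(self.d):
            b, s = self._hash(x, row)
            self.table[row][b] += s * count

    def estimate(self, x):
        # Median over rows cancels the noise from colliding items.
        return statistics.median(s * self.table[row][b]
                                 for row in range(self.d)
                                 for b, s in [self._hash(x, row)])
```

A heavy item’s estimate is its true count plus a small signed contribution from colliding light items, so sorting estimates recovers the heavy hitters.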