# Phase retrieval with polarization

It’s been far too long since my last blog post, but now that the linear algebra class I’m teaching is coming to an end, I hope to update a bit more.

Today, I present my latest paper on phase retrieval. My coauthors are Boris Alexeev, Afonso Bandeira and Matt Fickus.

This one is a long time coming, considering I blogged about the main idea almost 6 months ago. But that blog entry only described our approach in the noiseless case, whereas the bulk of the paper actually tackles stability in the noisy case. This is an important stride for phase retrieval: the only provably stable alternatives are PhaseLift/PhaseCut, and our approach is completely different.

To review, here’s the main idea: We have a collection of vectors that we use to measure our signal. If we could determine the phases of these measurements (up to a global phase factor), then we could reconstruct the signal by least-squares estimation. To get these phases, we exploit certain carefully chosen auxiliary vectors. Specifically, we view each of the original vectors as a vertex in a graph. If two vertices (say $u$ and $v$) are adjacent in the graph, we will also measure the signal using polarized combinations of the corresponding vectors. With the help of the polarization identity, these will determine the relative phase between the measurements corresponding to $u$ and $v$. Since every edge produces relative phase information about its endpoints, a connected graph lets us determine all of the phases up to a global phase (just pick a spanning tree and propagate).
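To make the polarization step concrete, here is a small numerical sketch of my own (the variable names and the particular four-term version of the identity are mine; the paper's exact polarized combinations may differ). With numpy's convention `np.vdot(w, x)` $= \langle w, x\rangle$ (conjugate on the first argument), the four edge intensities $|\langle u + i^k v, x\rangle|^2$ recover the product $\overline{\langle u,x\rangle}\,\langle v,x\rangle$, whose argument is exactly the relative phase we want:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # unknown signal
u = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # vertex vector u
v = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # vertex vector v

def intensity(w):
    """Intensity-only measurement |<w, x>|^2 -- the phase is lost."""
    return abs(np.vdot(w, x)) ** 2

# Polarization identity: combining the four edge intensities
# |<u + i^k v, x>|^2 recovers conj(<u,x>) * <v,x>.
prod = 0.25 * sum((1j ** k) * intensity(u + (1j ** k) * v) for k in range(4))
rel_phase = prod / abs(prod)   # the relative phase between the two vertices

# Sanity check against the (normally unknown) true measurements:
true = np.conj(np.vdot(u, x)) * np.vdot(v, x)
assert np.allclose(prod, true)
```

Note that the vertex intensities themselves never enter this computation; the polarized edge measurements alone pin down the relative phase.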

The difficulty here is that if the signal is orthogonal to a vertex vector, then relative phase doesn’t make sense. As such, orthogonality has the effect of deleting vertices from the graph. Sure, if you pick your vertex vectors to be full spark, then no $M$-dimensional signal can be orthogonal to $M$ of them, thereby limiting the damage dealt to the graph. But we still need to ensure that the graph will have a large connected component after deleting $M-1$ vertices. Indeed, our measurement process is not adaptive, so our graph must have this robustness property. Luckily, expander graphs provably have this robustness, even with the number of edges scaling linearly with the number of vertices.
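As a toy illustration of why this robustness property is nontrivial (my own sketch, not the paper's construction): a cycle also has linearly many edges, but deleting just two well-placed vertices splits it, so the largest surviving component collapses. Expanders are exactly the graphs that avoid this kind of fragility.

```python
from collections import deque

def largest_component(n, edges, deleted):
    """Size of the largest connected component after deleting vertices."""
    alive = set(range(n)) - set(deleted)
    adj = {v: [] for v in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].append(b)
            adj[b].append(a)
    best, seen = 0, set()
    for s in alive:
        if s in seen:
            continue
        comp, queue = 0, deque([s])
        seen.add(s)
        while queue:  # BFS over the surviving subgraph
            w = queue.popleft()
            comp += 1
            for t in adj[w]:
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        best = max(best, comp)
    return best

# A 12-cycle: connected, linearly many edges, but poor expansion.
n = 12
cycle = [(k, (k + 1) % n) for k in range(n)]
print(largest_component(n, cycle, deleted=[0, 6]))  # two arcs of 5 remain
```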

Now if there’s noise in the measurements, we have even more to worry about. First of all, we need to think about more than just orthogonality between the signal and vertex vectors. Indeed, if the signal is even nearly orthogonal to a vertex vector, then the corresponding vertex measurement is unreliable, being particularly susceptible to noise. As such, we are inclined to remove such vertices from the graph. This introduces an interesting design problem for the vertex vectors: We don’t want too many to be nearly orthogonal to any given signal. To answer this, our paper introduces a new concept called projective uniformity, which says that for every unit signal $x$, there exists a large collection of vertex vectors whose inner products with $x$ are sufficiently large. I need to think more about how to deterministically get projective uniformity, but in this paper, we were able to get sufficient projective uniformity with $\mathcal{O}(M\log M)$ measurement vectors.
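Here is a quick numerical illustration of projective uniformity for random measurement vectors (my own sketch; the threshold, constants, and number of trials are arbitrary and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 32
N = int(4 * M * np.log(M))           # O(M log M) vertex vectors
Phi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # unit-norm rows

# For a batch of random unit signals, count how many vertex vectors
# have |<phi, x>| above an (arbitrary) threshold on the 1/sqrt(M) scale.
worst = N
for _ in range(200):
    x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    x /= np.linalg.norm(x)
    big = int(np.sum(np.abs(Phi @ np.conj(x)) > 0.1 / np.sqrt(M)))
    worst = min(worst, big)
print(worst / N)   # worst-case fraction of "reliable" vertices observed
```

Of course, checking 200 random signals is not a proof over *every* unit signal, which is precisely what makes projective uniformity a real design problem.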

To remove the unreliable vertices, we actually hunt for small (i.e., unreliable) edge measurements. In particular, we look for the smallest edge measurement, remove the corresponding vertices, and repeat as necessary. This process ensures that the relative phases that correspond to the surviving edges are sufficiently reliable pieces of information.

The next thing we do is best motivated by considering a couple of examples. After we prune for reliability, our graph is damaged a bit. It could be the case that the graph is no longer connected, meaning the vertices can’t agree up to a global phase factor – this is a problem, but we were already aware of it from the noiseless case. As a more subtle example, we might have a cut vertex in the graph, meaning both halves of the graph depend on this vertex to find agreement. This is not a stable scenario, since we require this vertex to be much more reliable than it is guaranteed to be. Rather, for stability, we will need to prune this graph in a way that makes it particularly well connected, and we accomplish this by iteratively applying spectral clustering.
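One step of standard spectral clustering can be sketched as follows (my sketch; the paper's iteration and stopping rule are more careful). The second-smallest Laplacian eigenvalue (the algebraic connectivity) flags poorly connected graphs, and the sign pattern of its eigenvector (the Fiedler vector) proposes the split. The demo graph below is exactly the cut-vertex scenario from above: two triangles glued at one vertex.

```python
import numpy as np

def fiedler_split(A):
    """One spectral-clustering step on a symmetric adjacency matrix A.

    Returns (lambda2, side0, side1): the algebraic connectivity of the
    combinatorial Laplacian and the two vertex sets given by the sign
    pattern of the corresponding eigenvector (the Fiedler vector).
    """
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    lam2, fiedler = vals[1], vecs[:, 1]
    side0 = np.flatnonzero(fiedler >= 0)
    side1 = np.flatnonzero(fiedler < 0)
    return lam2, side0, side1

# Two triangles {0,1,2} and {2,3,4} glued at the cut vertex 2.
A = np.zeros((5, 5))
for a, b in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]:
    A[a, b] = A[b, a] = 1
lam2, s0, s1 = fiedler_split(A)
```

Here the Fiedler vector puts $\{0,1\}$ and $\{3,4\}$ on opposite sides, isolating the two halves that the cut vertex was holding together.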

At this point in the noiseless case, we propagated relative phase information along a spanning tree. In the noisy case, however, such propagation would accumulate significant errors. To avoid this, we use a process that integrates our pieces of relative phase information in a much more democratic fashion, namely angular synchronization. This is a spectral method championed by Amit Singer for cryo-EM applications, and its performance is a function of the connectivity of the graph (which we should expect). For more information about angular synchronization, check out these two papers:
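Angular synchronization itself has a compact spectral sketch (my own, in the usual formulation: build a Hermitian matrix of relative phases and read the phases off its leading eigenvector). For simplicity this toy version uses the complete, noiseless relative-phase matrix; in practice only the measured edges are filled in, they are noisy, and that is where the graph's connectivity enters.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
theta = rng.uniform(0, 2 * np.pi, n)       # true (unknown) phases
z = np.exp(1j * theta)

# Hermitian matrix of relative phases: H[u, v] = z[u] * conj(z[v]).
H = np.outer(z, np.conj(z))

# The leading eigenvector of H estimates all phases at once, rather
# than propagating (and accumulating) errors along a spanning tree.
vals, vecs = np.linalg.eigh(H)
est = vecs[:, -1]
est = est / np.abs(est)                    # project entries to the unit circle

# The estimate agrees with the truth up to one global phase.
align = est * np.conj(z)
assert np.allclose(align, align[0])
```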

After applying angular synchronization, the vertices have reached a consensus on what each of their phases should be (up to global phase). However, some of the vertex measurements could be particularly large, meaning they are very susceptible to noise in the phase (think $\Delta s=r\Delta\theta$). As such, we remove these vertices after the fact – we waited to do this because they helped us find a consensus in the angular synchronization phase (we can think of them as “mediators”).

Finally, we use the surviving vertex measurements (now with phases) to determine the original signal by least-squares estimation. But this is only as stable as the conditioning of the matrix whose columns are the remaining vertex vectors. For this portion of the analysis, we exploit a recent theory of numerically erasure-robust frames.
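The final step is ordinary least squares once the phases have been attached to the magnitudes (a noiseless sketch of my own; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 6, 20
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)       # true signal
Phi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# Magnitudes from the intensity measurements plus the synchronized
# phases give genuinely linear measurements b = Phi x (noiseless here).
b = Phi @ x

# Least-squares estimate; with noise, its stability is governed by the
# conditioning of the surviving rows of Phi (hence erasure-robust frames).
x_hat, *_ = np.linalg.lstsq(Phi, b, rcond=None)
assert np.allclose(x_hat, x)
```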

In the end, we report a performance guarantee that describes relative reconstruction error in terms of signal-to-noise ratio. This is comparable to other stability results in phase retrieval, but the difference is our use of spectral methods instead of semi-definite programming.