Numerically erasure-robust frames

DON’T WORRY: I will talk about short, fat matrices in a bit (specifically, about a paper I recently posted on the arXiv), but first I want to give some cool historical context.

In 1948, Claude E. Shannon authored the founding document of modern information theory: a journal article entitled “A mathematical theory of communication.” I find it to be a very interesting paper that everyone should read, and I’d like to summarize the first few sections:

Given a class of messages and a communication channel, the main idea is to exploit their “statistical structure.” Specifically, each message is modeled as a discrete random variable X with distribution p_X. From here, entropy H(X):=-\mathbb{E}[\log_2p_X(X)] is proposed as a measure of uncertainty in the message X. For example, if X is uniformly distributed over n outcomes, then H(X)=\log_2n. This illustrates that entropy also quantifies the amount of information in a realization of X, where information is measured in bits (binary digits).

The channel is then viewed as a machine that outputs a random variable Y after receiving the message X, and this machine is completely characterized by the conditional distribution p_{Y|X}. To quantify the channel’s noise, conditional entropy is proposed: H(X|Y):=-\mathbb{E}[\log_2p_{X|Y}(X|Y)]; this measures how much uncertainty tends to remain in the original message X after receiving the noisy message Y. Certainly, a noiseless channel would have H(X|Y)=0, and so the amount of information received in one realization of Y (aka X) would be H(X). Intuitively, any remaining uncertainty in X after receiving Y would decrease this amount, and indeed, the average transmission rate is given by the mutual information I(X;Y):=H(X)-H(X|Y). You could tweak the message distribution (i.e., use code words) in order to increase the average transmission rate, but the best you can do is the channel capacity C:=\sup_{p_X}I(X;Y).

The paper’s main result, now called the noisy-channel coding theorem, says that for every transmission rate smaller than C, there exist codes which enable arbitrarily small error probabilities at the receiver.
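To make these quantities concrete, here’s a small numerical sketch in Python. The binary symmetric channel, its flip probability, and the input distribution are my own illustrative choices, not anything from Shannon’s paper:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))

# Uniform X over n outcomes gives H(X) = log2(n).
print(entropy(np.ones(8) / 8))                    # 3.0 bits

def mutual_information_bsc(p1, eps):
    """I(X;Y) = H(X) - H(X|Y) for a binary symmetric channel with flip prob eps."""
    p_x = np.array([1 - p1, p1])                  # input distribution
    p_y_given_x = np.array([[1 - eps, eps],       # rows indexed by x, columns by y
                            [eps, 1 - eps]])
    p_xy = p_x[:, None] * p_y_given_x             # joint distribution of (X, Y)
    p_y = p_xy.sum(axis=0)
    H_X = entropy(p_x)
    H_X_given_Y = entropy(p_xy.ravel()) - entropy(p_y)   # H(X|Y) = H(X,Y) - H(Y)
    return H_X - H_X_given_Y

# The capacity of this channel is attained by the uniform input: C = 1 - h(eps).
print(mutual_information_bsc(0.5, 0.11))          # roughly 0.5 bits per channel use
```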

The insights in this paper laid the foundation for technology-based communication. To date, a whole host of codes have been tailored for various applications, including cell phones, ethernet, wireless communication, digital satellite television, and deep space communication. In these and all other applications I can think of, one can establish some sort of statistical distribution on the set of plausible messages. For example, the letter “e” probably makes up about 13% of the alphabetical characters in this blog entry. Also, the difference between neighboring pixels in a natural image is typically small, and an empirical distribution of these differences can be easily determined from a training set. However, unlike the message, it is not always appropriate to model the communication channel as a random process.

Suppose, for example, that the communication channel is a post office, and suppose that post office employs my arch nemesis, who plots to destroy my long-distance relationship with his sister. I write letters to her, and he intends to discard or even re-write some of my words and phrases to make me look bad. I could go in cahoots with his sister by coming up with some sort of cipher (perhaps we’d make one over the phone), but he’s so evil, he’d likely eavesdrop on the phone call! For such an adversarial channel, I have to assume that my message will arrive with worst-case (as opposed to average-case) noise. Thus, a random model for the channel would be inappropriate, and so Shannon’s theory is not directly applicable.

In that scenario, assuming my enemy will leave more than 75% of the characters untouched, we could defeat him using a Hadamard code. In general, error-correcting codes are the way to go here. Recently, I posted a paper on the arXiv that discusses how frames might be used in the case where a transmitted signal faces an adversary along with additive noise:

http://arxiv.org/abs/1202.4525
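Before getting to the paper, here’s a toy illustration of the Hadamard-code remark above (my own made-up example, not from the paper): the codewords are the rows of a 16-by-16 Sylvester Hadamard matrix together with their negations, so the minimum distance is 8 and nearest-codeword decoding corrects any pattern touching fewer than a quarter of the entries.

```python
import numpy as np

def hadamard_matrix(k):
    """Sylvester construction: a 2^k x 2^k matrix with +/-1 entries."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard_matrix(4)                        # 16 x 16
codewords = np.vstack([H, -H])                # 32 codewords of length 16, min distance 8

msg = 7                                       # encode the message with index 7
sent = codewords[msg]
received = sent.copy()
received[[1, 5, 11]] *= -1                    # adversary alters 3 of 16 entries (< 25%)

decoded = int(np.argmax(codewords @ received))   # nearest codeword via correlation
print(decoded == msg)                            # True: the message survives
```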

Here’s the problem: I’d like to convey a vector x\in\mathbb{C}^M to my friend, and I plan to do this by transmitting some vector y\in\mathbb{C}^N of larger dimension, but there’s an adversary between us who can remove any of the entries of y. This noise process is sometimes called active jamming. Of course, we have to impose some constraint on the adversary, since otherwise he could remove all of the entries and succeed in thwarting my communication. As such, we assume he is only capable of removing a proportion p of the entries. In addition to adversarial erasures, we can expect additive noise due to round-off error, atmospheric effects, etc. To make life easier, let’s decide up front to build y from x according to a linear process: y=F^*x, where F is a short, fat matrix (see, I told you it would come up eventually). To defeat both the adversary and the additive noise, we seek short, fat matrices with the following property (note that p and C here are different from Shannon’s):

Definition. Given p\in[0,1] and C\geq 1, an M\times N matrix F is a (p,C)-numerically erasure-robust frame (NERF) if for every \mathcal{K}\subseteq\{1,\ldots,N\} of size K:=(1-p)N, the corresponding M\times K submatrix F_\mathcal{K} has condition number \leq C.

The idea is that the adversary will erase at most pN of the entries of y=F^*x, and so my friend at the receiver will only consider the other (1-p)N entries, indexed by \mathcal{K}. Having \mathcal{K}, my friend will estimate x by applying the Moore-Penrose pseudoinverse to the retained entries y_\mathcal{K}: \hat{x}=(F_\mathcal{K}F_\mathcal{K}^*)^{-1}F_\mathcal{K}y_\mathcal{K}. Since F is a NERF, we can rest assured that F_\mathcal{K} will be well-conditioned regardless of which entries the adversary erases, and so \hat{x} will be a decent estimate of x despite the additive noise.
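Here’s a toy end-to-end sketch of this pipeline. The random Gaussian frame, the dimensions, the noise level, and the use of random (rather than adversarial) erasures are all illustrative choices, just to show the reconstruction step:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, p = 8, 32, 0.25                 # made-up dimensions and erasure rate

# A random complex Gaussian matrix stands in for F (illustration only).
F = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

x = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # message in C^M
y = F.conj().T @ x                                          # transmit y = F* x

# The adversary erases pN entries (random here, worst-case in the real problem),
# and the surviving entries pick up additive noise.
erased = rng.choice(N, size=int(p * N), replace=False)
kept = np.setdiff1d(np.arange(N), erased)                   # the index set K
noise = 0.01 * (rng.standard_normal(kept.size) + 1j * rng.standard_normal(kept.size))
y_K = y[kept] + noise

# Least-squares reconstruction: x_hat = (F_K F_K*)^{-1} F_K y_K.
F_K = F[:, kept]
x_hat = np.linalg.solve(F_K @ F_K.conj().T, F_K @ y_K)

print(np.linalg.cond(F_K))                                  # conditioning of F_K
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))        # relative recovery error
```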

In this paper, we consider an array of methods to construct NERFs. Random matrices are NERFs with N\sim M, and asymptotically maximal equiangular tight frames (ETFs) are NERFs with N\sim M^2. Other constructions, such as group frames, have N\sim M^\alpha for integers \alpha\geq2, but they aren’t as successful. Overall, our constructions achieve erasure rates of the form p<\frac{1}{2}. Interestingly, p can approach \frac{1}{2} in the ETF construction, but not in our random construction.
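Guarantees like these are valuable because verifying the NERF property directly means examining every size-K column subset, which is hopeless at scale. For a tiny example, though, a brute-force check is possible; here’s a sketch with a made-up random F and illustrative parameters:

```python
import numpy as np
from itertools import combinations

def worst_condition_number(F, p):
    """Worst condition number of F_K over all column subsets of size K = (1-p)N.

    F is a (p, C)-NERF precisely when this value is at most C; the search is
    exhaustive, so it is only practical for very small N.
    """
    M, N = F.shape
    K = int(round((1 - p) * N))
    return max(np.linalg.cond(F[:, list(subset)])
               for subset in combinations(range(N), K))

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))
print(worst_condition_number(F, p=0.25))   # worst case over all 28 six-column submatrices
```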

At the moment, we aren’t convinced that this “one-half barrier” is a mere artifact of our constructions.

Here’s the reasoning: No matrix with \pm1 entries can be a NERF with p\geq\frac{1}{2}. Why? Because the adversary can always delete at most half of the columns and leave a submatrix that isn’t even full rank (let alone well-conditioned): observe the first two rows, in which corresponding entries are either equal or opposite, and simply delete the columns corresponding to the less popular relationship! What’s more, \pm1 matrices are particularly representative of all possible matrices. For instance, random matrix methods tend to apply to \pm1 matrices without loss of effectiveness. It remains to be seen whether the one-half barrier is a true fundamental limit of NERFs. In our paper, we prove some (weaker) fundamental limits, but the one-half barrier remains elusive.
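Here’s a quick numerical illustration of that first-two-rows argument; any \pm1 matrix works, and the random one below is just for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 24
F = rng.choice([-1, 1], size=(M, N))   # an arbitrary +/-1 matrix

agree = F[0] == F[1]                   # columns where the first two rows agree
# Erase the less popular class of columns (at most half of them).
kept = agree if agree.sum() >= N - agree.sum() else ~agree
F_K = F[:, kept]

# Rows 1 and 2 of F_K are now equal (or opposite), so F_K has rank < M,
# which means its condition number is infinite.
print(kept.sum(), np.linalg.matrix_rank(F_K))
```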
