Solving underdetermined linear systems is impossible unless you’re given additional information about the solution. The goal of compressed sensing is to solve these systems by assuming the desired solution is sparse. In this vein, one common approach is L1 minimization: Find the minimizer of $\|z\|_1$ subject to $\Phi z=\Phi x$, where $x$ is the true solution (and so $\Phi x$ is the available data). Here is the standard picture to illustrate why L1 minimization works:

In this case, we applied a sensing matrix $\Phi$ whose null space is a line through the origin (the scalar multiples of a fixed vector). The vector we are sensing is the sparse signal $x$, and the red line denotes the null space of $\Phi$ shifted by $x$ (i.e., the set of all $z$ such that $\Phi z=\Phi x$). The blue diamond illustrates the smallest L1-ball which intersects this shifted subspace, and we note that this intersection occurs at our signal $x$. As such, L1 minimization encouraged sparsity so well that it recovered the desired signal exactly.
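To see the optimization itself in action, here is a minimal sketch of L1 minimization recast as a linear program; the 1-by-2 matrix and 1-sparse signal below are made-up stand-ins, not necessarily the ones pictured above:

```python
# A minimal sketch (not the blog's code): solve  min ||x||_1  s.t.  Phi x = y
# via the standard LP reformulation with auxiliary variables t >= |x|.
import numpy as np
from scipy.optimize import linprog

def l1_minimize(Phi, y):
    M, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])      # minimize sum(t)
    A_eq = np.hstack([Phi, np.zeros((M, N))])           # Phi x = y
    A_ub = np.block([[ np.eye(N), -np.eye(N)],          #  x - t <= 0
                     [-np.eye(N), -np.eye(N)]])         # -x - t <= 0
    bounds = [(None, None)] * N + [(0, None)] * N       # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                  A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:N]

Phi = np.array([[1.0, 2.0]])        # made-up 1x2 sensing matrix
x_true = np.array([0.0, 1.0])       # 1-sparse signal
print(np.round(l1_minimize(Phi, Phi @ x_true), 6))   # expect [0. 1.]: exact recovery
```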

The purpose of this blog entry is to gain a deeper intuition for why L1 minimization works so well. The following (rather obvious) lemma lies at the heart of this intuition:

**Lemma.** Fix some sensing matrix $\Phi$, and for every signal $x$, consider the corresponding feasible set

$F(x):=\{z:\Phi z=\Phi x\}$

and ball

$B(x):=\{z:\|z\|_1\leq\|x\|_1\}$.

Then $x$ uniquely minimizes $\|z\|_1$ subject to $z\in F(x)$ if and only if $F(x)\cap B(x)=\{x\}$.

The beauty of this lemma is that it recasts exact recovery from L1 minimization purely in terms of convex geometry: A good null space for $\Phi$ will be oriented in such a way that its shift by $x$ (namely, $F(x)$) will kiss $B(x)$ uniquely at $x$. We will apply this understanding to relate the following two concepts:

**Definition.**

(i) An $M\times N$ sensing matrix $\Phi$ is said to satisfy the *exact recovery property* for $S\subseteq\{1,\ldots,N\}$ (or $S$-ERP) if every $x$ supported on $S$ uniquely minimizes $\|z\|_1$ subject to $z\in F(x)$.

(ii) An $M\times N$ sensing matrix $\Phi$ is said to satisfy the *null space property* for $S\subseteq\{1,\ldots,N\}$ (or $S$-NSP) if every nonzero $h\in\mathcal{N}(\Phi)$ satisfies $\|h_S\|_1<\|h_{S^{\mathrm{c}}}\|_1$.

The reader who is familiar with compressed sensing is probably aware that a matrix is $S$-ERP for every $S$ of size $K$ if and only if it’s $S$-NSP for every $S$ of size $K$. The following theorem is a more specific version of this result:

**Theorem.** A matrix is $S$-ERP if and only if it’s $S$-NSP.

Before proving this theorem, we note that in our example above, $\Phi$ is evidently $S$-ERP for $S$ equal to the support of our signal, and indeed, $\Phi$ is also $S$-NSP, since every member of the null space is a scalar multiple of a single vector whose entry on $S$ is smaller in magnitude than its entry on $S^{\mathrm{c}}$; as such, $\|h_S\|_1<\|h_{S^{\mathrm{c}}}\|_1$ whenever $h$ is a nonzero member of the null space.
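For a concrete (made-up) instance of this kind of check, here is a quick numerical verification of $S$-NSP for a matrix with a 1-dimensional null space; since the NSP condition is scale-invariant, it suffices to test one basis vector of the null space:

```python
# A sketch of checking S-NSP numerically; the 1x2 matrix and support S below
# are assumptions for illustration, not necessarily the blog's figure.
import numpy as np
from scipy.linalg import null_space

Phi = np.array([[1.0, 2.0]])
S = [1]                                        # support of the signal (0-indexed)
Sc = [i for i in range(Phi.shape[1]) if i not in S]

h = null_space(Phi)[:, 0]                      # basis vector for the null space
print(np.abs(h[S]).sum() < np.abs(h[Sc]).sum())   # True, so S-NSP holds
```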

*Proof:* We will obtain the contrapositive of each direction.

($\Rightarrow$) Suppose $\Phi$ is not $S$-NSP. Then there exists a nonzero $h\in\mathcal{N}(\Phi)$ such that $\|h_S\|_1\geq\|h_{S^{\mathrm{c}}}\|_1$. In other words, setting $x:=h_S$ and $x':=-h_{S^{\mathrm{c}}}$ gives that $\|x'\|_1\leq\|x\|_1$. Also, $\Phi x'=\Phi x$ since $x-x'=h\in\mathcal{N}(\Phi)$, and $x'\neq x$ since $h\neq 0$. The following illustrates this situation:

In this figure, the feasible set $F(x)$ is denoted by a dotted red line. Overall, we have $F(x)\cap B(x)\neq\{x\}$ (since it must also contain $x'$), and so $\Phi$ is not $S$-ERP by the lemma.

($\Leftarrow$) Suppose $\Phi$ is not $S$-ERP. Then by the lemma, there exists a vector $x$ with support in $S$ such that $F(x)\cap B(x)$ contains some $x^*\neq x$. Pick $h:=x^*-x$. Since $\Phi x^*=\Phi x$, we have that $h$ is in the null space of $\Phi$, and furthermore, $h\neq 0$ since $x^*\neq x$. The following illustrates this situation:

Evidently, $\|h_{S^{\mathrm{c}}}\|_1$ is smaller than $\|h_S\|_1$, and this can be established in general by adding a convenient form of zero, and then applying the triangle inequality along with the fact that $\|x^*\|_1\leq\|x\|_1$:

$\|h_{S^{\mathrm{c}}}\|_1=\|x^*_{S^{\mathrm{c}}}\|_1=\|x^*_{S^{\mathrm{c}}}\|_1+\|x^*_S-x^*_S+x\|_1-\|x\|_1$

$\leq\|x^*_{S^{\mathrm{c}}}\|_1+\|x^*_S\|_1+\|x^*_S-x\|_1-\|x\|_1$

$=\|x^*\|_1+\|h_S\|_1-\|x\|_1\leq\|h_S\|_1.$

As such, $\Phi$ is not $S$-NSP.
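The forward direction of the proof is easy to play with numerically. As a sketch (again with a made-up 1-by-2 matrix), the following takes a null vector $h$ violating $S$-NSP and exhibits two distinct vectors with the same measurements and ordered L1 norms, exactly as in the construction above:

```python
# A sketch of the construction: x := h_S and x' := -h_{S^c} for an h in the
# null space with ||h_S||_1 >= ||h_{S^c}||_1 (matrix and support are made up).
import numpy as np

Phi = np.array([[1.0, 2.0]])
S, Sc = [0], [1]                         # S-NSP fails for this support
h = np.array([2.0, -1.0])                # spans the null space of Phi
assert np.allclose(Phi @ h, 0) and np.abs(h[S]).sum() >= np.abs(h[Sc]).sum()

x = np.zeros(2);     x[S] = h[S]         # x  := h_S   (supported on S)
x_alt = np.zeros(2); x_alt[Sc] = -h[Sc]  # x' := -h_{S^c}

print(np.allclose(Phi @ x, Phi @ x_alt))       # True: same measurements
print(np.abs(x_alt).sum() <= np.abs(x).sum())  # True: ||x'||_1 <= ||x||_1
```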

As an aside, let’s consider how to build $S$-NSP matrices. To this end, the restricted isometry property is commonly used as a sufficient condition for the null space property. As you might expect, there’s an analogous condition for $S$-NSP:

**Definition.** An $M\times N$ sensing matrix $\Phi$ is said to satisfy the *restricted isometry property* for $S$ and $\delta$ (or $(S,\delta)$-RIP) if

$(1-\delta)\|x-y\|_2^2\leq\|\Phi(x-y)\|_2^2\leq(1+\delta)\|x-y\|_2^2$

for every $x$ with support in $S$ and every $y$ with support size at most $|S|$.

In words, $(S,\delta)$-RIP matrices approximately preserve distances between vectors supported on $S$ and other sparse vectors. By taking $x=0$, we see that this version of RIP implies the classical version of RIP with sparsity level $|S|$. As such, this is not much of a weakening of RIP. Regardless, the following result follows from the analysis in the classical proof that RIP implies NSP:

**Theorem.** If $\Phi$ is $(S,\delta)$-RIP with $\delta$ sufficiently small (e.g., $\delta<\sqrt{2}-1$ suffices by the classical argument), then $\Phi$ is $S$-NSP.
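As a rough numerical illustration of the definition above, the following sketch probes the $(S,\delta)$-RIP condition for a random Gaussian matrix by sampling random pairs; this only estimates $\delta$ empirically (it is not a certificate), and the dimensions below are made up:

```python
# A sketch of empirically probing (S, delta)-RIP for a random Gaussian matrix:
# sample x supported on S and sparse y, and record how far ||Phi(x - y)||^2
# strays from ||x - y||^2.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 200, 80, 5
S = np.arange(K)                                # a fixed support of size K
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # rows scaled for near-isometry

ratios = []
for _ in range(1000):
    x = np.zeros(N); x[S] = rng.standard_normal(K)
    y = np.zeros(N); y[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    d = x - y
    ratios.append(np.linalg.norm(Phi @ d) ** 2 / np.linalg.norm(d) ** 2)

print(f"empirical delta ~ {max(abs(1 - min(ratios)), abs(max(ratios) - 1)):.3f}")
```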

Set $K:=|S|$. Since $(S,\delta)$-RIP implies the classical $(K,\delta)$-RIP, a random matrix will need on the order of $K\log(N/K)$ rows to satisfy $(S,\delta)$-RIP. As such, we’re not saving much in the number of measurements compared to drawing a $(2K,\delta)$-RIP matrix to get NSP (i.e., $S$-NSP for every $S$ of size $K$ simultaneously). However, we can certainly lose the log factor, for example, by measuring with the identity basis elements indexed by $S$, namely $\{e_i\}_{i\in S}$, which takes only $K$ measurements. This disparity leads to the following problem:

**Problem.** Given a collection $\mathcal{S}$ of subsets of $\{1,\ldots,N\}$, how large must $M$ be for there to exist an $M\times N$ matrix $\Phi$ which is $S$-NSP for every $S\in\mathcal{S}$?

Depending on $\mathcal{S}$, the answer can range from $K$ to the order of $K\log(N/K)$, and even though the log factor might be negligible, some (real-world) choices for $\mathcal{S}$ might lead to a much smaller constant (which could be important for certain applications).
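Returning to the remark about losing the log factor, here is a sketch (with a made-up ambient dimension and support) of measuring with the identity basis elements indexed by $S$; the null space then consists of vectors supported on $S^{\mathrm{c}}$, so $S$-NSP holds with only $K=|S|$ measurements:

```python
# A sketch of the "lose the log factor" example: rows e_i for i in S give a
# K x N matrix whose null space is supported on S^c, so S-NSP is automatic.
import numpy as np

N = 10
S = [2, 5, 7]                                # a fixed support (an assumption)
Phi = np.eye(N)[S, :]                        # rows e_i for i in S

# Any null vector h satisfies h_S = 0, hence ||h_S||_1 = 0 < ||h_{S^c}||_1
# whenever h is nonzero; check on a random null vector:
Sc = [i for i in range(N) if i not in S]
h = np.zeros(N)
h[Sc] = np.random.default_rng(1).standard_normal(len(Sc))
print(np.allclose(Phi @ h, 0), np.abs(h[S]).sum() < np.abs(h[Sc]).sum())
```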

\|h_{S^\mathrm{c}}\|_1=\|x^*_{S^\mathrm{c}}\|_1=\|x^*_{S^\mathrm{c}}\|_1+\|x^*_S-x^*_S+x\|_1-\|x\|_1

\leq\|x^*_{S^\mathrm{c}}\|_1+\|x^*_S\|_1+\|x^*_S-x\|_1+\|x\|_1

=\|x^*\|_1+\|h_S\|_1-\|x\|_1\leq\|h_S\|_1.

I understand this part clearly.

I am completely confused trying to sort out the relation between x_S, h_S, h_{S^c}, x*_S… please help me out.

x and h are vectors, and S and S^c are sets of indices (S^c is the complement of S). x_S denotes the portion of x which is supported on S, i.e., it equals x on S and otherwise equals zero. The definitions of h_S, etc. are similar.

Thank you very much for the reply. I am familiar with the terminology, but I have a couple of questions about the proof of the null space property that I am unable to sort out.

1. Why have you taken ||h_{S^c}|| = ||x*_{S^c}||?

2. Could you explain the inequalities below that?

x is supported on S, so h looks like x^* on S^c. The inequality follows from the fact that x^* has minimal 1-norm, i.e., \|x^*\|_1 \leq \|x\|_1.

Hey, sorry to disturb you again, but believe me, even though I tried hard, I cannot get past that inequality and the step that follows it. If possible, please give a more detailed, step-by-step explanation. How did you apply the triangle inequality |a+b| ≤ |a|+|b|?

How is ||x*_S - x*_S + x|| - ||x|| less than or equal to ||x*_S|| + ||x*_S - x|| + ||x|| ???

I am from a signal processing background, so I am unable to follow steps that skip the details.

It isn’t — you detected a typo, actually. \|x\|_1 should be negated. I just fixed it — thanks!

||x*_S - x*_S + x|| ≤ ||x*_S|| + ||-x*_S + x||,

but as the norm is non-negative, we can write the same thing as

||x*_S - x*_S + x|| ≤ ||x*_S|| + ||x*_S - x||, by considering the fact that ||-x*_S + x|| = ||x*_S - x||.

Am I correct?

Yes, but I would argue this using the absolute homogeneity of the norm instead of its nonnegativity:

http://en.wikipedia.org/wiki/Norm_%28mathematics%29

Using the quasi-triangle inequality |a+b|^T ≤ |a|^T + |b|^T,

how does one prove the following inequality?

|a+b|^T - |a|^T ≥ -|b|^T ??

This inequality is used to prove the NSP in Lemma 1 of Remi Gribonval's paper Sparse Representations in Unions of Bases. I really appreciate your help.

Try the substitution x=a+b and y=-b. Then rearrange

|x+y|^T \leq |x|^T + |y|^T

to get

|x|^T – |x+y|^T \geq -|y|^T.

Substituting back produces your inequality.

0 < T ≤ 1, i.e., the l1 norm or below.

Wowww!!!!!!!!! One last question about the NSP:

Suppose y = Ax and x* is a solution of the l1 minimization problem. x is k-sparse, S is the set of the k largest entries of x, and h = x* - x belongs to the null space of A. Then

||x*|| ≤ ||x||

||x*_S|| + ||x*_Sc|| ≤ ||x_S|| + ||x_Sc||

and, from the triangle inequality,

||x_S|| - ||h_S|| + ||h_Sc|| - ||x_Sc|| ≤ ||x_S|| + ||x_Sc|| ????

Again I am confused about the step that uses the triangle inequality. Please give an explanation if possible.

||x*_Sc|| = ||h_Sc|| ??

||x_Sc|| = 0 ??

||x*_S|| = ||x_S|| - ||h_S|| ?? What is the reason for this last equality?

Hmm, well first observe that h_S = x*_S - x_S and h_Sc = x*_Sc. Next, the reverse triangle inequality gives

||x_S|| - ||h_S||

= ||x_S|| - ||x*_S - x_S||

≤ ||x*_S||.

Thus,

||x_S|| - ||h_S|| + ||h_Sc|| - ||x_Sc||

≤ ||x*_S|| + ||x*_Sc|| - ||x_Sc||

≤ ||x*_S|| + ||x*_Sc||

= ||x*||

≤ ||x||

= ||x_S|| + ||x_Sc||.
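If it helps, here is a quick numerical sanity check of this chain on a made-up example (any x supported on S and any x* with smaller 1-norm will do):

```python
# A quick sanity check of the chain above on a made-up example.
import numpy as np

S, Sc = [0], [1]
x      = np.array([3.0, 0.0])      # sparse signal, supported on S
x_star = np.array([1.0, 1.0])      # plays the role of the L1 minimizer
h = x_star - x

l1 = lambda v: np.abs(v).sum()
lhs = l1(x[S]) - l1(h[S]) + l1(h[Sc]) - l1(x[Sc])
rhs = l1(x[S]) + l1(x[Sc])
print(lhs, rhs, lhs <= rhs)        # 2.0 3.0 True
```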

If possible, please make some arrangement to type equations and to attach PDFs.

In the definition of RIP, shouldn't it be (2S, delta)-RIP instead of (S, delta)-RIP, since the support of (x - y) can have size 2|S|, not just |S|? Right?

If a matrix A is (2K, delta)-RIP, then what does it imply: 2K-NSP or K-NSP?

According to your proof that classical RIP implies NSP: (2K, delta)-RIP implies K-NSP. Right?

How would one prove that (K, delta)-RIP implies K-NSP?

Please do reply….

Yes, the intuition is that (2S,delta)-RIP ensures that the difference of two different S-sparse vectors stays away from the null space, and so it implies S-NSP.

It is possible to conclude S-NSP from (S,delta)-RIP, but the argument is very different. See this paper:

http://www-stat.wharton.upenn.edu/~anrzhang/papers/Optimal%20RIP.pdf

Thanks for the great explanation. I just need some references for this theorem, I mean the S-NSP and S-RIP conditions. Thank you very much.

The NSP condition is based on Corollary 3.3 of this paper:

http://people.math.sc.edu/devore/publications/CDDSensing_6.pdf

The RIP condition is based on this paper:

http://statweb.stanford.edu/~candes/papers/RIP.pdf

Thank you for your prompt response. I have a question: according to the first paper you posted, the sensing matrix satisfies the NSP when equation 3.8 holds for all T in {1,2,…,n}. In your example, it only satisfies 3.8 when T={2} and T^c={1}. What about the other way around, i.e., when T={1} and T^c={2}?

Thank you very much.

Good observation. In my example, the matrix actually doesn’t satisfy the null space property in its entirety. This is an issue with having a 1-dimensional null space in 2 dimensions. However, this issue doesn’t typically occur when the ambient dimension is large (and the null space is appropriately small).

Thanks for the report. However, I am confused about k-NSP versus 2k-NSP. Shouldn't the matrix satisfy 2k-NSP to have k-sparse recovery, since 2k-NSP suggests that the only 2k-sparse x belonging to the null space of A is the zero vector? Why are we suggesting k-NSP here?

Thanks! Please reply:-)

Your intuition is coming from what it takes to ensure that your measurements are different for every k-sparse x. NSP is a condition to further ensure that every k-sparse x can be retrieved by L1 minimization, which certainly requires their measurements to uniquely determine them. Indeed, it is easy to show that Phi doesn’t satisfy k-NSP if there are two different k-sparse vectors x and y with Phi*x=Phi*y.

Dear Dustin,

I am new to CS and I am really struggling with some concepts. I would highly appreciate your help! Thanks first for this great article; it clarified some things, yet I still have some gaps 😦

1 - Why are we using the null space in the first place? What does it have to do with the problem?

2 - Regarding the sensing matrix Phi, should it be known at both the sender and the receiver?

1 – The null space of a matrix A tells you how little you know about the solutions to Ax=b. In the case of compressed sensing, you are told that x is sparse, but if sparse vectors are in the nullspace, you have no hope of recovering them. The nullspace property is a strengthening of this intuition to make L1 minimization work.

2 – Yes, the sensing matrix should be known in order to recover, although there has been some work recently to perform self-calibration in the case where Phi is only known up to certain degrees of freedom. As far as I understand, this is currently the subject of active research. See for example this paper:

https://www.math.ucdavis.edu/~strohmer/papers/2015/SparseLift.pdf

Thanks Dustin for this prompt response 🙂

For question 2, okay, I will read the paper.

As for question 1, the picture is still blurry for me, actually.

In what sense is the nullspace property a strengthening of this intuition that makes L1 minimization work?

It seems that I haven't yet understood the null space property of a matrix.

Would you kindly clarify for me?

And excuse me for the disturbance; I am really struggling with this point.

Well, the nullspace property implies that sparse vectors are not in the nullspace. In particular, if a vector h is only supported on S, then h_{S^c}=0, whereas h_S is nonzero. Compare this to the definition of the nullspace property. Also, the nullspace property makes L1 minimization work because it implies exact recovery (as we prove in this blog entry).

The null space property definition says that the L1 norm of v on the support S is less than the L1 norm of v on S^c,

so because of that, do I guarantee that the L1 norm on S is the minimum?

I don’t understand your question. The S-nullspace property is a statement about all vectors in the nullspace, whereas the S-exact recovery property is a statement about all vectors supported on S.

Excuse me, I have a big mess in my head… What I am still not understanding, even after reading this blog and other videos, is:

1 - Why does L1 guarantee a unique sparsest decomposition?

According to what I have in my mind now, the line of feasible solutions cannot lie in the null space of Phi, or else there would exist more than one solution (two points of intersection with the L1 ball).

So x0 (the solution) cannot be in the null space of Phi… what is the relation of this to the definitions of NSP and RIP?

2 - Why is the transition from the L0 norm to the L1 norm valid? That is, why, if Phi obeys RIP and x is sufficiently sparse, is x0 a common solution to both the L0 and L1 minimization problems?

Many thanks for your cooperation.

❤

1- The set of feasible solutions is a shifted version of the nullspace. If there are two sparse solutions, then their difference lies in the nullspace and violates the nullspace property (as well as the restricted isometry property).

2- We talk about L0 minimization as the thing we’d like to do (but can’t, computationally), and then L1 minimization is a thing we can do by linear programming. Intuitively, L1 minimization works because minimizing the L1 norm promotes sparsity (as the first figure in this blog entry illustrates). In terms of guarantees, the nullspace property and restricted isometry properties are sufficient conditions on Phi that make it work, and thankfully, random choices of Phi tend to satisfy these properties.

Dear Dustin,

Finally the picture is clear for me now; I re-read your comments and blog and watched this video: https://www.youtube.com/watch?v=c6OEZQ3Hhp4

One other question if you don’t mind,

What differs when applying CS to continuous-time signals versus discrete-time signals? And for those that are discrete, are they so by their nature?

The theory of compressed sensing has mostly focused on discrete signals, but the applications that I tend to keep in mind are fundamentally continuous: (1) fast MRI scans:

http://statweb.stanford.edu/~donoho/Reports/2007/CSMRI-20071204.pdf

and (2) the single-pixel camera:

http://dsp.rice.edu/cscamera

In order to match the discrete theory with continuous applications, one must wrestle with additional issues such as resolution. For example, how many pixels do you want to represent a given image? In the end, I plan to store the reconstructed image on a computer, so there’s probably no need to worry about continuous-time versions of L1 minimization. (Though there may be some subtleties or other particular applications that I’m failing to consider at the moment.)

What is the actual meaning of M/N in compressive sensing, where M represents the total number of measurements and N represents the length of the signal?

This is the “compression ratio” in compressed sensing. The closer this is to zero, the more compressive your sensor is.

But is there any relation between the sampling rule and M/N?

I’m not sure what you mean by the sampling rule, but the main point of compressed sensing is that you can take M/N to be small provided the signal model is sparse.

I guess he means the Nyquist rate.

If this is what you mean, then no, there is no relation between CS and the Nyquist rate.

In Nyquist sampling, the constraint on the signal is that it should be band-limited; however, in CS the constraint on the signal is different: it only needs to be sparse, as Dustin said in his reply.