Compressed sensing (CS) has gained considerable recognition recently for its ability to reduce complexity at the signal capturing device. However, this is often offset by an increase in complexity at the reconstruction device, which can be much larger than the initial decrease. This is because CS recovery is accomplished by solving an ℓ1 minimization problem over the received signal. For interesting (non-trivial) signals, the order of the signal can often be 10⁶ or 10⁷. For example, a 1 megapixel image has, by definition, around 1,000,000 pixels. Reconstruction of such an image can take a couple of minutes using the traditional basis pursuit approach. Gradient projection offers an alternative that has the potential to run much faster while still achieving acceptable reconstruction of the signal.

The standard CS reconstruction is accomplished by solving the convex optimization problem

$$\min_x \|x\|_1 \quad \text{s.t.} \quad y = Ax$$

where x is the sparse signal of length M, y is the length-N vector of samples obtained from the CS sampling, and A is the N × M sampling matrix. In most applications there will be some noise in y (from quantization, transmission, etc.), so the problem can be reformulated as

$$\min_x \|x\|_1 \quad \text{s.t.} \quad \|y - Ax\|_2^2 < \epsilon$$

which simply states that the square of the ℓ2 norm of the error is less than some small tolerance value. This problem is closely related to the unconstrained optimization problem

$$\min_x \; \frac{1}{2}\|y - Ax\|_2^2 + \tau\|x\|_1$$

where τ is a nonnegative regularization parameter that balances sparsity against residual error. By splitting x into its positive and negative parts, this problem can be reformulated as a bound-constrained quadratic program (BCQP)

$$\min_z \; c^T z + \frac{1}{2} z^T B z \equiv F(z) \quad \text{s.t.} \quad z \ge 0$$

where

$$z = \begin{bmatrix} u \\ v \end{bmatrix}, \qquad b = A^T y, \qquad c = \tau \mathbf{1}_{2M} + \begin{bmatrix} -b \\ b \end{bmatrix}, \qquad B = \begin{bmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{bmatrix},$$

$$x = u - v, \qquad u \ge 0, \quad v \ge 0.$$

Here u and v carry the positive and negative parts of x (uᵢ = max(xᵢ, 0) and vᵢ = max(−xᵢ, 0)), so that ‖x‖₁ = 1ᵀu + 1ᵀv and the nondifferentiable ℓ1 term becomes linear in z.
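Substituting x = u − v into the unconstrained objective and expanding shows that it equals F(z) + ½‖y‖₂², a constant away from F(z), so the two problems share minimizers. As a concrete check, the following numpy sketch (the dimensions, seed, and sparse test signal are illustrative assumptions, not from the original text) builds z, b, and c for a small random instance and verifies this identity without ever forming B explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative small instance: length-M sparse signal, N < M measurements
M, N, tau = 8, 5, 0.1
A = rng.standard_normal((N, M))
x = np.zeros(M)
x[[1, 4]] = [2.0, -1.5]                 # a 2-sparse test signal
y = A @ x

# Splitting: u and v hold the positive and negative parts of x
u, v = np.maximum(x, 0), np.maximum(-x, 0)
z = np.concatenate([u, v])

b = A.T @ y
c = tau * np.ones(2 * M) + np.concatenate([-b, b])

def B_mul(z):
    # B z computed implicitly: one multiply by A, one by A.T
    w = A.T @ (A @ (z[:M] - z[M:]))
    return np.concatenate([w, -w])

F = c @ z + 0.5 * z @ B_mul(z)
objective = 0.5 * np.linalg.norm(y - A @ x)**2 + tau * np.abs(x).sum()

# F(z) matches the l2-l1 objective up to the constant 0.5*||y||^2
assert np.isclose(F + 0.5 * np.linalg.norm(y)**2, objective)
```

Keeping B implicit in this way is what makes the doubled problem dimension harmless in practice.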

Although the dimension of this problem is twice that of the original, its structure allows it to be solved in a very straightforward manner. Most importantly, the gradient can be calculated as ∇F(z) = c + Bz, and since Bz simply stacks AᵀA(u − v) and its negation, this involves only one multiplication each by A and Aᵀ (assuming c is precomputed only once). This allows us to use any number of gradient descent algorithms, and the most computationally intensive step at any iteration will be a matrix-vector multiplication. This approach has been shown to be a factor of 10 or more faster than other state-of-the-art algorithms, while being comparable (and in many cases better) in terms of reconstruction quality.
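As a minimal sketch of such a gradient projection solver, assuming numpy (the fixed-step rule, iteration count, function name, and test instance below are illustrative choices; published gradient projection methods for this BCQP typically add a line search):

```python
import numpy as np

def gpsr_sketch(A, y, tau, n_iter=1000):
    """Fixed-step gradient projection for the BCQP above:
    z <- max(z - alpha * grad F(z), 0), then recover x = u - v."""
    N, M = A.shape
    b = A.T @ y
    c = tau * np.ones(2 * M) + np.concatenate([-b, b])

    def grad(z):
        # grad F(z) = B z + c, using one multiply each by A and A.T
        w = A.T @ (A @ (z[:M] - z[M:]))
        return np.concatenate([w, -w]) + c

    # Step 1/L with L = 2*||A||_2^2, the largest eigenvalue of B
    alpha = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
    z = np.zeros(2 * M)
    for _ in range(n_iter):
        z = np.maximum(z - alpha * grad(z), 0)   # project onto z >= 0
    return z[:M] - z[M:]

# Illustrative usage on a small random instance
rng = np.random.default_rng(1)
M, N = 64, 32
A = rng.standard_normal((N, M)) / np.sqrt(N)
x_true = np.zeros(M)
x_true[rng.choice(M, size=4, replace=False)] = rng.standard_normal(4)
y = A @ x_true
x_hat = gpsr_sketch(A, y, tau=0.005)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

Because the projection onto {z ≥ 0} is just a componentwise maximum, each iteration costs one multiplication by A, one by Aᵀ, and O(M) vector operations.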