
Image compression using Compressed Sensing (CS) and Particle Swarm Optimization (PSO) offers an alternative approach to image compression. Because images are sparse in certain bases, they are natural candidates for compressed sensing. Unfortunately, the computational complexity of signal recovery via ℓ1 minimization is prohibitive. PSO offers the promise of faster convergence, and it can be accelerated further by exploiting the specific structure of CS.

When designing an application of PSO, it is well understood that defining an appropriate function to be optimized is critical. Just as important, however, is the choice of the space in which the particles live. In CS, we can exploit the fact that the desired solution to the optimization problem is k-sparse.

Let x be our original signal, an 8 × 8 block of an image, and let Tx be its image under a known transformation that makes it k-sparse, such as the Discrete Cosine Transform (DCT). We then apply a matrix Φ, formed from the first m rows of a random orthonormal matrix satisfying the Restricted Isometry Property (RIP), to obtain the compressed version ΦTx of the block. Before examining the function that will be minimized to recover x, we show how to shrink the search space, which reduces the complexity. We begin by finding a particular solution S of the under-determined system ΦTS = ΦTx. Any solution y to ΦTy = ΦTx must then be of the form y = S + r, where r is in the kernel of ΦT. Letting NΦ be the matrix that maps the appropriate d-dimensional space (here d = 64 − m) onto the kernel of ΦT, we minimize the function ƒ(y′) = ∑((TS)i + (TNΦy′)i)², where (TS)i and (TNΦy′)i denote the ith components of TS and TNΦy′, and the summation runs over the smallest 64 − k components of TS + TNΦy′. Thus, ƒ measures how much T(S + NΦy′) fails to be k-sparse.
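As a concrete illustration, the following Python sketch builds the reduced search space and the objective ƒ for a single block. It assumes numpy and scipy, uses an orthonormal 1-D DCT matrix as a stand-in for the 2-D block transform, and all variable names and the choices of m and k are illustrative rather than part of the original method.

    import numpy as np
    from scipy.fft import dct
    from scipy.linalg import lstsq, null_space

    n, m, k = 64, 32, 8                         # block length, measurements, sparsity (illustrative)

    T = dct(np.eye(n), axis=0, norm='ortho')    # orthonormal DCT matrix: T @ x is the DCT of x
    Q, _ = np.linalg.qr(np.random.randn(n, n))  # random orthonormal matrix
    Phi = Q[:m, :]                              # keep the first m rows as the measurement matrix

    x = np.random.randn(n)                      # stand-in for a flattened 8 x 8 block
    b = Phi @ (T @ x)                           # compressed measurements ΦTx

    S, *_ = lstsq(Phi @ T, b)                   # one particular solution of ΦTS = ΦTx
    N_Phi = null_space(Phi @ T)                 # basis for ker(ΦT); shape (n, d) with d = n - m

    def f(y_prime):
        """Sum of squares of the smallest n - k components of T(S + N_Phi y')."""
        v = T @ (S + N_Phi @ y_prime)
        smallest = np.sort(np.abs(v))[: n - k]
        return float(np.sum(smallest ** 2))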

Figure: Lower Dimensions Graph
A modification of the original PSO will also increase the convergence speed of our algorithm. In the original algorithm, the velocity is updated by

V ← V + 2*rand1(personalbest – y’) + 2*rand2(globalbest – y’).
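For reference, a minimal PSO loop over the reduced space using this original update rule might look as follows; it reuses f and N_Phi from the sketch above, and the swarm size, iteration count, and initialization are arbitrary illustrative choices.

    d = N_Phi.shape[1]                          # dimension of the reduced search space (n - m)
    n_particles, n_iters = 30, 200              # illustrative swarm parameters
    rng = np.random.default_rng(0)

    Y = rng.standard_normal((n_particles, d))   # particle positions y'
    V = np.zeros_like(Y)                        # particle velocities
    pbest = Y.copy()                            # personal best positions
    pbest_val = np.array([f(y) for y in Y])
    gbest = pbest[np.argmin(pbest_val)].copy()  # global best position

    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(2)
            V[i] += 2 * r1 * (pbest[i] - Y[i]) + 2 * r2 * (gbest - Y[i])
            Y[i] += V[i]
            val = f(Y[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, Y[i].copy()
        gbest = pbest[np.argmin(pbest_val)].copy()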
We can increase the convergence speed by adding a term that promotes sparsity. Let U be the vector containing the smallest 64 − k components of TS + TNΦy′ (and zeros elsewhere), so that TS + TNΦy′ − U is the best k-sparse approximation of TS + TNΦy′. Our velocity update is then given by

V ← V + 2*rand1(personalbest – y’) + 2*rand2(globalbest – y’) + 2*rand3Z,
where Z is the least squares solution of TNΦZ = −U, so that a step along Z drives the smallest components of T(S + NΦy′) toward zero.
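A minimal sketch of this modified update, reusing T, S, N_Phi, n, k, and rng from the previous snippets; the helper name velocity_update is an assumption for illustration.

    def velocity_update(V_i, y_prime, personal_best, global_best):
        """Original PSO update plus the sparsity-promoting term 2*rand3*Z."""
        # U: the smallest n - k components of T(S + N_Phi y'), zeros elsewhere.
        v = T @ (S + N_Phi @ y_prime)
        idx = np.argsort(np.abs(v))[: n - k]
        U = np.zeros(n)
        U[idx] = v[idx]

        # Z: least-squares solution of T N_Phi Z = -U, so stepping along Z
        # pushes the smallest transform components toward zero.
        Z, *_ = lstsq(T @ N_Phi, -U)

        r1, r2, r3 = rng.random(3)
        return (V_i
                + 2 * r1 * (personal_best - y_prime)
                + 2 * r2 * (global_best - y_prime)
                + 2 * r3 * Z)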
