## What's Compressive Sensing?

Let's say we have something we want to measure and a sensor to measure it. We need some power to measure that quantity for a certain amount of time. We have a finite power source, so **lower power** consumption is desired. We will want to store this information somehow. We may want to store it on the device we use to measure or we may want to transmit the information and store it elsewhere.

Linear algebra helps us describe the observations (measurements) as a set of linear equations.

**A⋅x = b**, where

**b** is an m-by-1 vector of observed values,

**x** is an n-by-1 vector of actual values, and

**A** is an m-by-n sensing matrix.

We have m observations of n variables. The picture above illustrates the matrix math that the equation **Ax=b** represents. A red row of **A** combines with the blue column of **x** to produce a single purple entry in **b**. Likewise, a yellow row combined with the blue column corresponds to the value at a single green entry in **b**.

Usually, we know **A** and **b** and want to find **x**. If m=n and **A** is **not singular** (that is, its rows and columns are not linear combinations of one another), then

**x = A^{-1}⋅b**

But this is not always the case, and sometimes, even when it is, the above equation is hard to compute.
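When **A** is square and non-singular, any linear-algebra library can solve for **x** directly. A minimal sketch with NumPy (the matrix values here are made up for illustration):

```python
import numpy as np

# Hypothetical 3x3 example: A is square and non-singular, so x = A^{-1}·b exists.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x_true = np.array([1.0, -2.0, 3.0])
b = A @ x_true

# Solve A·x = b for x.
x = np.linalg.solve(A, b)
print(np.allclose(x, x_true))  # → True
```

Note that `np.linalg.solve` factors **A** rather than forming **A**^{-1} explicitly, which is both faster and more numerically stable than computing the inverse.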

## Sparsity and Recovering the Data

Candes discovered [1] that if we can be sure that **x** is **sparse** (has a lot of 0's in it), then we can recover **x** much more simply, with a **linear program** (LP). Linear programs find the best solution to an underdetermined system of equations. For compressive sensing, the sensing matrix supplies the coefficients of the system of equations, the measurements supply the constant terms, and we seek the best (sparsest) values of the variables that satisfy the system.

min |**x**|_{1} subject to **A⋅x = b**

This says that the solution to **A⋅x = b** we want is the **x** with the smallest sum of the magnitudes of its components. This is called **L1-norm minimization**, because |**x**|_{1} is the L1-norm, or first norm, of **x**. This **x** is the data we are trying to recover from our measurements. L1-norm minimization is not the only way to recover the data, but it is a common one, and it is also called **basis pursuit**.
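Basis pursuit can be written as a standard LP by pairing each component of **x** with a bound on its magnitude. A sketch using SciPy's `linprog`, with a made-up sparse signal and random sensing matrix (the dimensions are chosen only for illustration):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Made-up sizes: m=15 measurements of an n=30 signal with k=3 nonzeros.
m, n, k = 15, 30, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit as an LP over stacked variables z = [x, u]:
#   minimize sum(u)  subject to  A·x = b  and  -u <= x <= u.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]),    #  x - u <= 0
                  np.hstack([-I, -I])])  # -x - u <= 0
b_ub = np.zeros(2 * n)
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=bounds, method="highs")
x_hat = res.x[:n]  # recovered signal
```

Since `x_true` itself is feasible, the optimal L1 norm can never exceed |x_true|_1; with enough random measurements the LP typically returns the sparse signal exactly.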

## The Sensing Matrix

To recover the data in the way outlined above, **x** has to be **sparse**; that is, it must have lots of 0's in it. If **x** is not sparse, then L1-norm minimization will not do a good job of recovering the data, but if **x** is sparse, it will.

We can make **x** sparse by bringing a sparsifying matrix, **S**, into the equation.

**S⋅x = w**, where

**S** is the sparsifying matrix,

**x** is the dense data vector, and

**w** is the sparse data vector.

So **S** is a matrix, which can be dense, that, when it multiplies **x** on the left, results in a sparse vector, **w**. In the figure above, light grey rows of **S** combine with the blue column vector, **x**, to produce white entries in **w**. Think of these as visually representing 0's in the **w** vector, making it sparse. The red and yellow rows of **S** combine with **x** to produce the purple and green elements of **w**, representing nonzero entries.
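As a concrete (hypothetical) example, the orthonormal DCT can play the role of **S**: a vector built from a few DCT basis functions is dense in the sample domain but sparse in the transform domain:

```python
import numpy as np
from scipy.fft import dct, idct

n = 64
# Build a vector that is sparse in the DCT domain: 3 nonzero coefficients.
w_true = np.zeros(n)
w_true[[4, 11, 30]] = [1.0, -0.7, 0.4]

# x = S^{-1}·w is dense in the sample domain...
x = idct(w_true, norm="ortho")

# ...but applying the sparsifying transform S (orthonormal DCT) recovers
# the sparse representation w = S·x.
w = dct(x, norm="ortho")
print(np.allclose(w, w_true))  # → True
```

Here the DCT stands in for **S** purely for illustration; in practice the sparsifying basis depends on the signal (wavelets for images, Fourier for tones, and so on).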

Now that we have fewer nonzero entries, what happens if one of them gets corrupted? Have we made our measurement system more sensitive to noise? Not if we make sure that the sensing matrix is **random** [2], in addition to making **x** sparse. A good sensing matrix combines a sparsifying matrix, **S**, with a random matrix, which we'll call **T**.

**A = T ⋅ S**

**T ⋅ w = b**

since **A⋅x = T⋅(S⋅x) = T⋅w**.

For many applications, **S** can be the identity matrix and **T** can be a Gaussian-distributed random matrix. This changes the L1-norm minimization data recovery to

min |**x**|_{1} subject to **T⋅x = b**
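A sketch of the measurement step under these assumptions, with made-up dimensions: **S** is the identity, **T** is Gaussian, and we take m compressed measurements of an n-sample sparse signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 25  # 4x fewer measurements than samples (illustrative choice)

# S = identity (signal already sparse), T = Gaussian random matrix.
S = np.eye(n)
T = rng.standard_normal((m, n)) / np.sqrt(m)
A = T @ S  # here A equals T, since S is the identity

# A hypothetical sparse signal with 3 nonzero entries.
x = np.zeros(n)
x[[7, 42, 77]] = [2.0, -1.0, 0.5]

b = A @ x  # m compressed measurements instead of n direct samples
print(b.shape)  # → (25,)
```

The 25 entries of **b** are what the sensor actually acquires and stores; the 100-sample **x** is reconstructed later with the L1 minimization above.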

## Low Power

At Chesney Research, we are interested in compressive sensing as a way of dramatically reducing the power consumed by a sensor. By sampling only the data needed, a compressive sensing system makes much more efficient use of power than a traditional system based on Nyquist sampling. A Nyquist-based system acquires all of the possible bandwidth the signal of interest can occupy and ends up throwing most of it away when the system needs to transmit or interpret the data. With compressive sensing, it is possible to build systems that only acquire the samples that are needed from the signal of interest. When running off of a finite energy supply, like a battery, this allows more power-efficient sensing, enabling applications not previously possible.

## Combinatorial Testing

Igor Carron, who writes a blog dedicated to compressive sensing, has a good introductory description of compressive sensing posted on Quora. In his answer he draws an analogy between compressive sensing and group testing [3]. Group testing is a kind of combinatorial testing. Combinatorial testing, or sampling, means sampling combinations of variables, rather than each variable independently, to get a complete picture of the thing you are studying. This is particularly useful when only a few of your variables actually matter, that is, when you are sampling something sparse. It lets you reconstruct the thing you are sampling with great fidelity using far fewer measurements.
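The group-testing analogy can be made concrete with the classic pooled-testing scheme: a single positive among 8 samples can be located with only 3 pooled tests, one per bit of the sample index (a toy example of my own, not from Carron's answer):

```python
# Classic group testing: find the one positive sample among 8 using only
# 3 pooled tests. Pool j contains every sample whose index has bit j set.
n_samples, n_tests = 8, 3
positive = 5  # hypothetical: sample 5 is the only positive

results = []
for j in range(n_tests):
    pool = [i for i in range(n_samples) if (i >> j) & 1]
    results.append(positive in pool)  # a pool tests positive iff it holds sample 5

# The pattern of pool results spells out the positive index in binary.
decoded = sum(1 << j for j, r in enumerate(results) if r)
print(decoded)  # → 5
```

Three combined measurements pin down one of eight possibilities, just as a few random projections pin down a sparse signal: in both cases the measurement count scales with the information content, not the ambient dimension.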

## References

[1] E. Candes, "Compressive Sampling," *Proceedings of the International Congress of Mathematicians*. Madrid, Spain, 2006.

[2] Baraniuk, R. et al. *An Introduction to Compressive Sensing*. http://cnx.org/content/col11133/latest/, 2011.

[3] Carron, I. "What is compressive sensing in layman's terms?", *Quora*, March 19, 2011.