Phone Vision 16 – Binary Images

9 03 2011

When we are creating a binary image we are defining objects of interest.  Defining these objects is very specific to the task at hand.  In some cases a simple threshold will work while others require a more complex approach such as color segmentation, edge detection, or something even more sinister.

Thresholding

The most common way to create a binary image from a grayscale image is to pick an intensity threshold (also known as gray-level segmentation).  Everything below the threshold becomes a pixel of interest (black).  Everything else is not (white).  Let’s explore a few methods for picking that threshold.

The Image

Interestingly, this picture is a white piece of poster board (not blue) with some dark puzzle pieces on it.  If you look closely you can see the famous HD7 pink tint.  The image on the right is the same picture converted to grayscale using the average of the RGB values.

[Images: the original photo (left) and the grayscale conversion (right)]

Mean Threshold

Almost every time an image processing problem arises and a simple average is a possible answer, my gut tells me to go for it.  In this case, my gut is going to lead me astray, but first some code:

WriteableBitmap ToAverageBinary(WriteableBitmap grayscale)
{
    WriteableBitmap binary =
        new WriteableBitmap(grayscale.PixelWidth,
            grayscale.PixelHeight);

    int[] histogramData = new int[256];
    int maxCount = 0;

    //first we will determine the histogram
    //for the grayscale image
    for (int pixelIndex = 0;
        pixelIndex < grayscale.Pixels.Length;
        pixelIndex++)
    {
        byte intensity = (byte)grayscale.Pixels[pixelIndex];

        //simply add another to the count
        //for that intensity
        histogramData[intensity]++;

        if (histogramData[intensity] > maxCount)
        {
            maxCount = histogramData[intensity];
        }
    }

    //now we need to figure out the average intensity
    long average = 0;
    for (int intensity = 0; intensity < 256; intensity++)
    {
        average += intensity * histogramData[intensity];
    }

    //this is our threshold
    average /= grayscale.Pixels.Length;

    for (int pixelIndex = 0;
        pixelIndex < grayscale.Pixels.Length;
        pixelIndex++)
    {
        byte intensity = (byte)grayscale.Pixels[pixelIndex];

        //now we're going to set the pixels
        //greater than or equal to the average
        //to white and everything else to black
        if (intensity >= average)
        {
            unchecked { binary.Pixels[pixelIndex] = (int)0xFFFFFFFF; }
        }
        else
        {
            unchecked { binary.Pixels[pixelIndex] = (int)0xFF000000; }
        }
    }

    //note that this is the ORIGINAL histogram
    //not the histogram for the binary image
    PlotHistogram(histogramData, maxCount);

    //this is a line to show where the
    //average is relative to the rest of
    //the histogram
    Line averageLine = new Line();
    averageLine.X1 = HistogramPlot.Width * average / 255;
    averageLine.X2 = HistogramPlot.Width * average / 255;
    averageLine.Y1 = 0;
    averageLine.Y2 = HistogramPlot.Height;
    averageLine.Stroke = new SolidColorBrush(Colors.Magenta);
    averageLine.StrokeThickness = 2;
    averageLine.StrokeDashArray = new DoubleCollection() { 5, 2.5 };
    HistogramPlot.Children.Add(averageLine);

    return binary;
}

[Image: the binary result of the mean threshold, shown with the original histogram and a dashed magenta line at the average]

 

While the puzzle pieces came through loud and clear, so did a lot of garbage.  Notice how the average line (magenta) splits the peak?  This is going to lead to a large number of false black pixels.  What we want is an automated technique for shifting the threshold to the left.

 

Two Peak

 

If you look at the original grayscale image you'll notice that the puzzle pieces are all close to one intensity while the background clusters around another.  This shows up in the histogram as a large peak (the background) and an almost imperceptible peak toward the dark end of the spectrum (the puzzle pieces).

 

Finding the large peak is trivial.  It's just the intensity with the highest count.  The small peak, on the other hand, is a little trickier.  It's not the second most frequent intensity; that's right next to the largest peak.  A little trick is to give a higher weight to intensities that are far from the highest peak.  To do this we multiply the count at each intensity by the square of its distance from the peak.  That's a mouthful, but it's pretty easy.  Here's the code:

 

//maxPeak holds the intensity of the tallest peak,
//found earlier by scanning histogramData
int secondPeak = 0;
long secondPeakCount = 0;
for (int intensity = 0; intensity < 256; intensity++)
{
    long adjustedPeakCount =
        (long)(histogramData[intensity] * Math.Pow(intensity - maxPeak, 2));
    if (adjustedPeakCount > secondPeakCount)
    {
        secondPeak = intensity;
        secondPeakCount = adjustedPeakCount;
    }
}

So we calculate an adjusted count for each intensity.  By multiplying it by the square of the distance we give higher counts to those further from the first peak.  Amazingly, this works even in this case where the second peak is so small.
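The snippet stops at locating the second peak; the post doesn't show how the two peaks get turned into a threshold.  Here is a minimal sketch of one common choice (my own, not from the post's download), assuming the histogramData, maxPeak, and secondPeak variables from the surrounding code: use the emptiest histogram bin between the peaks, falling back to the midpoint.

//a sketch only: pick a threshold between the two peaks
int lowPeak = Math.Min(maxPeak, secondPeak);
int highPeak = Math.Max(maxPeak, secondPeak);
int threshold = (lowPeak + highPeak) / 2;   //midpoint as a fallback
int minCount = int.MaxValue;
for (int intensity = lowPeak; intensity <= highPeak; intensity++)
{
    //the emptiest bin between the peaks is the natural valley
    if (histogramData[intensity] < minCount)
    {
        minCount = histogramData[intensity];
        threshold = intensity;
    }
}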

[Image: the binary result of the two-peak threshold, with the two peaks marked in green and the new threshold in magenta]

Wow!  Those results are so much better.  Notice I marked the two peaks with green and the new threshold is in magenta.

Summary

Don't use the mean.  You might get lucky in a few cases, but in general you'll end up in a messy situation.  That said, the two-peak approach works well when your objects are a consistent color and that color is distinct from the background.  I created this photo because I knew it would give good results.  I highly recommend you try these techniques out on your own photos so you can get a feel for when they will work and when they won't.

We will most likely revisit different thresholding techniques for different situations when they arise, but for now we have a binary image so we’re going to see what we can do with it.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2016%20-%20Binary%20Images.zip

Up Next: Sets





Phone Vision 15 – Median Filters

1 03 2011

Filters, filters, filters… Tired of them yet?  Me too.  After this we’ll leave them alone for a while.  I promise.  Until then, we have work to do.

What is a median filter?  If you understand the average filter (or mean filter) then the median filter should give you no troubles at all – assuming you know what the median is.  We’ll start there.

What is the Median?

The median is the middle number of a sample.  To find it, you sort your list and pick the middle item.  (With an even number of items you would average the two middle values, but our filter windows will always have an odd count.)  Yes, it really is that simple.

Here is a random set of 9 numbers:

[Image: a random set of 9 numbers]

Sort them:

[Image: the numbers sorted]

Pick the middle value:

[Image: the middle value selected]

Like I said, it’s simple.

Why use median instead of mean?

The median is particularly effective against certain types of noise – specifically “salt and pepper” noise (black and white specks).  Let’s assume this is the neighborhood we’re dealing with:

[Image: a sample 3×3 neighborhood of intensities]

Performing the mean gives:

[Images: the mean calculation and its result]

While the median returns:

[Images: the median calculation and its result]

If we expect that the 0 values are noise then the median is probably a much closer estimate.
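For a concrete feel, here is a tiny sketch with made-up numbers (not the values from the figures above), where the two zeros play the role of pepper noise:

//a hypothetical 3x3 neighborhood; the two zeros are "pepper" noise
byte[] neighborhood = { 200, 0, 198, 205, 202, 0, 199, 201, 204 };

//mean: 1409 / 9 = 156, dragged well below the true surroundings
int sum = 0;
foreach (byte value in neighborhood) { sum += value; }
int mean = sum / neighborhood.Length;

//median: sort to 0, 0, 198, 199, 200, 201, 202, 204, 205
//and take the middle value -> 200
byte[] sorted = (byte[])neighborhood.Clone();
Array.Sort(sorted);
byte median = sorted[sorted.Length / 2];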

Code

private WriteableBitmap MedianFilter(WriteableBitmap grayscale, int radius)
{
    // we are still going to create a new image
    // because we don't want to modify the
    // old image as we are processing it
    WriteableBitmap filtered =
        new WriteableBitmap(
            grayscale.PixelWidth,
            grayscale.PixelHeight);

    // boiler plate code for our
    // histogram stuff
    int[] histogram = new int[256];
    int maxIntensity = 0;

    // the math is still easier if we create two loops
    // instead of one
    for (int y = 0; y < grayscale.PixelHeight; y++)
    {
        for (int x = 0; x < grayscale.PixelWidth; x++)
        {
            //here's the pixel we're centered on
            int pixel = x + y * grayscale.PixelWidth;
            byte intensity = (byte)grayscale.Pixels[pixel];

            // if we are on an edge we are going to leave it
            // as the original intensity.  you will see the
            // edges increasingly unfiltered as the window
            // size increases.  here we are using the radius
            // to determine our bounds
            if (y <= radius - 1 ||
                x <= radius - 1 ||
                y >= grayscale.PixelHeight - radius ||
                x >= grayscale.PixelWidth - radius)
            {
                //maintain the original / non-filtered intensity
                filtered.Pixels[pixel] = (255 << 24)
                    | intensity << 16
                    | intensity << 8
                    | intensity;

                histogram[intensity]++;
                if (histogram[intensity] > maxIntensity)
                {
                    maxIntensity = histogram[intensity];
                }
                continue;
            }

            /////////////////////////////////////////////////////////
            // IMPORTANT PART ///////////////////////////////////////
            /////////////////////////////////////////////////////////
            // this list is the key
            // it contains all of the neighboring pixels
            List<byte> localIntensities = new List<byte>();
            for (int yoffset = -radius; yoffset <= radius; yoffset++)
            {
                for (int xoffset = -radius;
                    xoffset <= radius;
                    xoffset++)
                {
                    localIntensities.Add(
                        (byte)grayscale.Pixels[(x + xoffset)
                        + (y + yoffset) * grayscale.PixelWidth]);
                }
            }
            //sort the intensities
            localIntensities.Sort();
            //pick the middle value (the count is always odd,
            //so integer division lands on the center element)
            byte medianLocalIntensity =
                localIntensities[localIntensities.Count / 2];
            /////////////////////////////////////////////////////////
            // END IMPORTANT PART ///////////////////////////////////
            /////////////////////////////////////////////////////////

            // and now just set the color
            filtered.Pixels[pixel] = (255 << 24)
                | medianLocalIntensity << 16
                | medianLocalIntensity << 8
                | medianLocalIntensity;

            histogram[medianLocalIntensity]++;
            if (histogram[medianLocalIntensity] > maxIntensity)
            {
                maxIntensity = histogram[medianLocalIntensity];
            }
        }
    }

    PlotHistogram(histogram, maxIntensity);
    return filtered;
}

 

Results

Taking a simple gray image, I added salt and pepper noise and then applied both a median filter and an average filter.  The results are stunning.

[Images: the flat gray image (left) and the same image with 10% salt and pepper noise (right)]

[Images: the noisy image after median filtering (left) and after mean filtering (right)]

There are a few specks left after applying the median filter, but the noise is removed pretty well.  The average filter performed dismally, to say the least.

Summary

If you expect salt and pepper noise then the median filter is a great tool to have in your toolbox.  If you want, you can explore max, min, and mode filters; I don't think we'll cover them here unless we have a specific application for them.
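For the curious, the min and max variants are a one-line change to the "important part" of the code above.  This sketch (my own, not from the download) assumes the same sorted localIntensities list:

//after localIntensities.Sort():
byte minOfNeighborhood = localIntensities[0];                          //min filter: wipes out bright (salt) specks
byte maxOfNeighborhood = localIntensities[localIntensities.Count - 1]; //max filter: wipes out dark (pepper) specks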

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2015%20-%20Median%20Filter.zip

(code includes salt and pepper noise generation)

Up Next: Binary Images

Feel free to leave a comment or follow @azzlsoft on Twitter if you have any questions.





Phone Vision 14 – Sobel Operators

25 02 2011

Last time we explored how we can use the rate of change between adjacent pixels to enhance a boundary between two features in an image.  Today we are going to take that one step further and isolate edges using the Sobel operators.  Hold on tight…

The Sobel Operators

A search for the Sobel operators will quickly turn up these two filters:

[ -1  0  1 ]      [ -1  -2  -1 ]
[ -2  0  2 ]      [  0   0   0 ]
[ -1  0  1 ]      [  1   2   1 ]

I created a rather colorful graphic to demonstrate some of the concepts.

[Image: a colorful test graphic]

[Images: the test graphic and the result of applying each of the two Sobel filters]

What do you notice?  Here’s what I notice:

  • All of the non-edges turn black.
  • Diagonal edges show up in both filters.
  • The first filter misses horizontal edges.
  • The second filter misses vertical edges.

The union of these two images will give you a more complete view of the edges in the system.  If that’s all you wanted to know then you’re done and you can stop here.
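As a rough sketch (my own illustration, not code from the post's download), one common way to combine the two responses is by gradient magnitude.  The sketch below assumes a plain array of grayscale intensities rather than a WriteableBitmap.

private byte[] SobelMagnitude(byte[] intensities, int width, int height)
{
    int[,] gx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    int[,] gy = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    byte[] edges = new byte[intensities.Length];
    for (int y = 1; y < height - 1; y++)
    {
        for (int x = 1; x < width - 1; x++)
        {
            int sumX = 0;
            int sumY = 0;
            for (int yoffset = -1; yoffset <= 1; yoffset++)
            {
                for (int xoffset = -1; xoffset <= 1; xoffset++)
                {
                    byte neighbor =
                        intensities[(x + xoffset) + (y + yoffset) * width];
                    sumX += neighbor * gx[yoffset + 1, xoffset + 1];
                    sumY += neighbor * gy[yoffset + 1, xoffset + 1];
                }
            }
            //combine the two responses by gradient magnitude
            int magnitude = (int)Math.Sqrt(sumX * sumX + sumY * sumY);
            edges[x + y * width] = (byte)Math.Min(magnitude, 255);
        }
    }
    return edges;
}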

I’m curious how they work so I’m going to dig deeper.

Gradient

When you are driving down a mountain you might see a sign warning of a steep grade.  The grade in the context of a mathematical function is no different.  We are trying to find the slope of the function at a given point.  This is called the gradient.

Last time we looked at how fast a function was changing by subtracting adjacent intensities.    If you think about it, it’s easy to imagine areas with a lot of change having a steep slope, right?  So have we already found the gradient?  Yes and no.  Hold on, this is going to get rough.

Subtracting the adjacent intensities, we determined the slope from one pixel to the next, but we didn't determine the slope at the pixel.  Because we are dealing with discrete values (i.e. there are no 'in-between' pixels) the best we can hope for is an approximation of the slope at a pixel.  I am hoping that an example will clear this up a bit.

Imagine the sin function represents some hilly terrain you are driving over.

[Image: the sin function as hilly terrain]

It should be fairly apparent that at the top of the hill our grade is flat (slope = 0).  Let’s imagine we have a GPS recording our position every so often so we get an approximation of the terrain.

[Image: GPS samples taken along the sin function]

Using the same technique as the last lesson, let’s try to approximate the slope at the top of the hill.  Last time we said the change was f(x+1) – f(x).

[Image: the difference f(x+1) – f(x) drawn at the top of the hill; it suggests a downhill slope]

Well, that’s not very good.  It turns out that if we adjust our formula ever so slightly we get much better results.  What if we use the position right before we got to the top of the hill instead of the position at the top?

[Image: the difference f(x+1) – f(x-1) drawn at the top of the hill; it is nearly flat, matching the true slope]

That is a much better approximation, but what now?
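To put rough numbers on that (my own example, not from the post), sample sin at the hilltop x = π/2 with neighbors half a unit away:

double x = Math.PI / 2;   //the top of the hill
double h = 0.5;           //spacing between samples

//forward difference: f(x+1) - f(x)
double forward = Math.Sin(x + h) - Math.Sin(x);      //about -0.12, suggesting a downhill slope

//the adjusted difference: f(x+1) - f(x-1)
double adjusted = Math.Sin(x + h) - Math.Sin(x - h); //0 (to floating point), matching the flat hilltop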

Gradient Edge Detection

Let’s convert our new difference function to our filter representation. 

Remember,

  • f(x+1) = next position (right after the top)
  • f(x) = current position (at the top)
  • f(x-1) = previous position (right before the top)

so our gradient function looks like:

-1*f(x-1) + 0*f(x) + 1*f(x+1)

or

[ -1  0  1 ]

Following the same process in the y direction we end up with

-1*f(y-1) + 0*f(y) + 1*f(y+1)

or

[ -1 ]
[  0 ]
[  1 ]

If we apply these we end up with:

[Images: the test graphic and the result of applying each of the two simple gradient filters]

It’s a much weaker signal, but I can still spot the edges. 

If these filters make sense to you then you should be able to answer the following questions:

  • Why are the horizontal lines still missing in the first image?
  • And the vertical lines still missing in the second image? 
  • Why do the diagonals show up?

Separable Filters

You might be thinking, “um, I guess this is nice, but what does it have to do with the Sobel?”  Everything.

We don't have the tools yet to completely analyze this, but the Sobel operator is a separable filter.  This means that it is a combination of two 1D filters: the Sobel is a smoothing filter multiplied by the gradient filter we just built.

A couple of notes:

  • [1,2,1] is a 1D weighted smoothing filter.
  • This is matrix multiplication.  If you need a refresher, check out the Khan Academy video on the subject.  I love that place.

 

(here [1; 2; 1] is a column vector and [ -1  0  1 ] a row vector)

[1; 2; 1] × [ -1  0  1 ]  =

[ -1  0  1 ]
[ -2  0  2 ]
[ -1  0  1 ]

and

[-1; 0; 1] × [ 1  2  1 ]  =

[ -1  -2  -1 ]
[  0   0   0 ]
[  1   2   1 ]

Whoa!  Mind blown.

Summary

We are starting to add some useful tools to our toolbox, but we are dancing around some really complex concepts.  Over the next few lessons we will continue working with a few more simple blur, sharpen, and edge detection routines before we jump head first into frequency analysis.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2014%20-%20Sobel%20Operator.zip

Up Next: Median Filter

Feel free to leave a comment or follow @azzlsoft on Twitter if you have any questions.





Phone Vision 13 – Sharpening Filters

21 02 2011

Smoothing an image was pretty intuitive.  When we are smoothing an image we are actually trying to suppress details in the image.  When we sharpen an image we are trying to do the opposite.  So, how do we do that?

Rates of Change

When we are sharpening an image we are trying to enhance the differences.  What are the differences?  Well, let’s take a look at a 1D strip of intensities:

[Image: a 1D strip of gradually changing intensities]

If we take a cursory look at these pixels we’ll see that the intensities gradually decrease from left to right before jumping back up dramatically then dropping back down.  The dramatic ‘jumps’ are what will ultimately be enhanced when we sharpen the image.  Here they are highlighted.

[Image: the same strip with the dramatic jumps highlighted]

We want to know how fast the intensities are changing.  If ∆ represents the rate of change between adjacent pixels then our rate of change looks like:

[Image: the differences between adjacent intensities]

We just subtract the value of the next pixel from the value of the current pixel.  A little more formally, this looks like

∆(x) = f(x+1) – f(x)

Under normal conditions, some natural variation will occur pixel by pixel.  We want to ignore that natural variation and only concern ourselves when the change is drastic.  For that we will have to perform the subtraction one more time.

[Image: the differences of the differences]

Now we can see how fast each intensity is changing.  Once again, let’s formalize this.

∆²(x) = ∆(x) – ∆(x-1)

∆²(x) = (f(x+1) – f(x)) – (f(x) – f(x-1))

∆²(x) = f(x-1) – 2*f(x) + f(x+1)

Implementing the 1D Case

Then the filter for our 1D case looks like

[ 1  -2  1 ]

If we apply this to our image we end up with this from above:

[Image: the second differences for the strip; they are large at the jumps and near zero elsewhere]

If we subtract this from our original image something magical happens:

[Image: the strip after subtracting ∆² from the original; the jumps are exaggerated and the strip looks sharpened]

We have sharpened the image!

In order to achieve that result, we subtracted ∆² from the original image f.  Our new formula looks like:

sharp = f – ∆²

so in the x direction,

sharp(x) = f(x) – ∆²(x)

sharp(x) = f(x) – (f(x-1) – 2*f(x) + f(x+1))

sharp(x) = f(x) – f(x-1) + 2*f(x) – f(x+1)

sharp(x) = –f(x-1) + 3*f(x) – f(x+1)

Hence, our final 1D filter:

[ -1  3  -1 ]

Expanding to 2D

If all images were just a single strip of pixels we'd be done.  Unfortunately they aren't, but accounting for this is a pretty simple adjustment.  The first thing to notice is that the 1D case works in any direction.  If we wanted to apply the filter vertically instead of horizontally it would look like this:

[ -1 ]
[  3 ]
[ -1 ]

So, how do we expand this to two dimensions?  It’s pretty easy, actually:

[  0  -1   0 ]
[ -1   5  -1 ]
[  0  -1   0 ]

Now you have a sharpening filter that will work in two dimensions.  If you want to include the diagonals, there is no reason you can’t.  Using the exact same method the final sharpening filter becomes:

[ -1  -1  -1 ]
[ -1   9  -1 ]
[ -1  -1  -1 ]

And we’re done!
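If you want to try these, a quick sketch (my own, not from the post): both of the 2D masks above sum to 1, so they should drop straight into the Filter method from the smoothing-filters lesson (number 12) without changing its normalization step.

// 4-neighbor sharpening mask (weights sum to 1)
int[,] sharpen = new int[,]
{
    {  0, -1,  0 },
    { -1,  5, -1 },
    {  0, -1,  0 }
};

// 8-neighbor sharpening mask including diagonals (also sums to 1)
int[,] sharpenWithDiagonals = new int[,]
{
    { -1, -1, -1 },
    { -1,  9, -1 },
    { -1, -1, -1 }
};

// assuming a grayscale WriteableBitmap, as in the other lessons
WriteableBitmap sharpened = Filter(grayscale, sharpen);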

Summary

With some luck, you now have a solid grasp of how the sharpening filter works.  If you are interested in researching this topic more, what we walked through creating is the Laplacian operator.  If you are comfortable with calculus, the difference functions we discussed are just discrete versions of the first derivative (∆) and the second derivative (∆²).

Since we have discussed the filter code at length, no new code is posted here.  The code from previous lessons will work fantastically.

Up Next: Sobel Operators





Phone Vision 12 – Smoothing Filters

15 02 2011

What does it mean to smooth an image?  When I imagine a smooth surface it doesn’t have a lot of bumps or pits.  If we were to try to smooth out a rough piece of wood we would probably sand down the peaks and fill in holes.  If you apply that analogy to an image it sounds a lot like averaging a neighborhood.

Average Filter

If you remember from last time we described a neighborhood filter using a number of weights as follows:

[Image: a 3×3 grid of filter weights]

A simple average would allow all of the pixels in the neighborhood to contribute equally to the average.  In that case we would represent the filter as:

[ 1/9  1/9  1/9 ]
[ 1/9  1/9  1/9 ]
[ 1/9  1/9  1/9 ]

Using our code from last time, the filter looks like

int[,] filter = new int[,]
{
    { 1, 1, 1 },
    { 1, 1, 1 },
    { 1, 1, 1 }
};

And here are the results when we run it:

[Images: the original image (left) and the smoothed version (right)]

The original is on the left and the smoothed version is on the right.  Notice how the large objects are relatively unchanged, but the small objects have lost a lot of information.  In addition to removing noise, if we are trying to segment large objects, we can use smoothing to blur out the small objects.

Modified Filter Code

If we run this filter enough times we will eventually get every pixel to be some sort of gray.  It would be more efficient just to increase the size of the window.  Let’s modify the filtering code to accept a square filter of arbitrary size.

private WriteableBitmap Filter(WriteableBitmap grayscale, int[,] filter)
{
    // we are still going to create a new image
    // because we don't want to modify the
    // old image as we are processing it
    WriteableBitmap filtered =
        new WriteableBitmap(
            grayscale.PixelWidth,
            grayscale.PixelHeight);

    // boiler plate code for our
    // histogram stuff
    int[] histogram = new int[256];
    int maxIntensity = 0;

    //here's where the magic starts
    //assume the filter is square
    int length = (int)Math.Sqrt(filter.Length);
    int filterMagnitude = 0;
    for (int x = 0; x < length; x++)
    {
        for (int y = 0; y < length; y++)
        {
            filterMagnitude += filter[x, y];
        }
    }

    // this will be used for our loops
    // it's important that our size is odd
    // so that our current pixel can be the
    // center point.
    int radius = length / 2;

    // the math is still easier if we create two loops
    // instead of one
    for (int y = 0; y < grayscale.PixelHeight; y++)
    {
        for (int x = 0; x < grayscale.PixelWidth; x++)
        {
            //here's the pixel we're centered on
            int pixel = x + y * grayscale.PixelWidth;
            byte intensity = (byte)grayscale.Pixels[pixel];

            // if we are on an edge we are going to leave it
            // as the original intensity.  you will see the
            // edges increasingly unsmoothed as the window
            // size increases.  here we are using the radius
            // to determine our bounds
            if (y <= radius - 1 ||
                x <= radius - 1 ||
                y >= grayscale.PixelHeight - radius ||
                x >= grayscale.PixelWidth - radius)
            {
                //maintain the original / non-smoothed
                filtered.Pixels[pixel] = (255 << 24)
                    | intensity << 16
                    | intensity << 8
                    | intensity;

                histogram[intensity]++;
                if (histogram[intensity] > maxIntensity)
                {
                    maxIntensity = histogram[intensity];
                }
                continue;
            }

            int newIntensity = 0;
            //going from -radius to radius makes the
            //math easier here too
            for (int yoffset = -radius; yoffset <= radius; yoffset++)
            {
                for (int xoffset = -radius;
                    xoffset <= radius;
                    xoffset++)
                {
                    // we loop through each pixel in the neighborhood
                    // and apply the filter. by 'apply the filter'
                    // I mean multiply it by the appropriate weight
                    newIntensity +=
                        ((byte)grayscale.Pixels
                            [(x + xoffset)
                            + (y + yoffset) * grayscale.PixelWidth])
                        * filter[(yoffset + radius),
                        (xoffset + radius)];
                }
            }

            // scale the new intensity back
            newIntensity /= filterMagnitude;
            newIntensity =
                Math.Max(Math.Min(newIntensity, 255), 0);

            // and now just set the color
            filtered.Pixels[pixel] = (255 << 24)
                | (byte)newIntensity << 16
                | (byte)newIntensity << 8
                | (byte)newIntensity;

            histogram[(byte)newIntensity]++;
            if (histogram[(byte)newIntensity] > maxIntensity)
            {
                maxIntensity = histogram[(byte)newIntensity];
            }
        }
    }

    PlotHistogram(histogram, maxIntensity);
    return filtered;
}

 

Now we can really smooth the image.

A Large Filter

A 21×21 filter is about as large as I can do without having the mask wrap to the next line.

int[,] filter = new int[,]
{
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
};

 

[Image: the result of the 21×21 average filter]

We coded it so that the edges are left unfiltered.  To work around this you can compute the average using only the pixels that are actually available near the border.  Notice here that the large objects are still relatively unchanged, while the smaller objects have lost a substantial amount of information.
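Here is a sketch of that workaround (my own, not from the post's download), assuming the same variables as the Filter method above.  It clamps the neighborhood to the image and normalizes by the weights that were actually used:

//replacement for the inner neighborhood loop: instead of
//skipping border pixels, ignore neighbors that fall outside
//the image and divide by the weights that actually contributed
int newIntensity = 0;
int usedMagnitude = 0;
for (int yoffset = -radius; yoffset <= radius; yoffset++)
{
    for (int xoffset = -radius; xoffset <= radius; xoffset++)
    {
        int nx = x + xoffset;
        int ny = y + yoffset;
        if (nx < 0 || ny < 0 ||
            nx >= grayscale.PixelWidth ||
            ny >= grayscale.PixelHeight)
        {
            continue;   //this neighbor is outside the image
        }
        int weight = filter[yoffset + radius, xoffset + radius];
        newIntensity +=
            ((byte)grayscale.Pixels[nx + ny * grayscale.PixelWidth]) * weight;
        usedMagnitude += weight;
    }
}
newIntensity /= usedMagnitude;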

Summary

This technique is pretty easy.  More importantly, the masks can be changed easily and the effects can be seen instantly.  Once again I recommend that you play around with the masks on your own.  As we'll see soon, they can be used for things far more powerful than blurring an image.
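As a starting point (my suggestion, not from the post), here is a weighted mask to try: it favors the center pixel, so it blurs more gently than the flat average.

//a Gaussian-like weighted average; the center counts
//four times as much as the corners
int[,] weightedFilter = new int[,]
{
    { 1, 2, 1 },
    { 2, 4, 2 },
    { 1, 2, 1 }
};

WriteableBitmap gentlySmoothed = Filter(grayscale, weightedFilter);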

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2012%20-%20Smoothing%20Filters.zip

Up next: Sharpening Filters