Phone Vision 16 – Binary Images

9 03 2011

When we create a binary image we are defining objects of interest.  Defining those objects is very specific to the task at hand.  In some cases a simple threshold will work, while others require a more complex approach such as color segmentation, edge detection, or something even more sinister.

Thresholding

The most common way to create a binary image from a grayscale image is to pick an intensity threshold (also known as gray-level segmentation).  Everything below the threshold becomes a pixel of interest (black).  Everything else is not (white).  Let’s explore a few methods for picking that threshold.
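Before we worry about picking the threshold, here is roughly what the binarization step itself looks like.  This is just a sketch in the style of the rest of the series (the method name and the hand-picked threshold parameter are mine):

WriteableBitmap ToBinary(WriteableBitmap grayscale, byte threshold)
{
    WriteableBitmap binary =
        new WriteableBitmap(grayscale.PixelWidth,
            grayscale.PixelHeight);

    for (int pixelIndex = 0;
        pixelIndex < grayscale.Pixels.Length;
        pixelIndex++)
    {
        //the low byte of each pixel holds the intensity
        //in our grayscale images
        byte intensity = (byte)grayscale.Pixels[pixelIndex];

        //at or above the threshold -> white (not interesting)
        //below the threshold -> black (object of interest)
        unchecked
        {
            binary.Pixels[pixelIndex] = intensity >= threshold
                ? (int)0xFFFFFFFF
                : (int)0xFF000000;
        }
    }

    return binary;
}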

The Image

Interestingly, this picture is a white piece of poster board (not blue) with some dark puzzle pieces on it.  If you look closely you can see the famous HD7 pink tint.  The image on the right is the same photo converted to grayscale using the average of the RGB values.

[Figure: the original photo (left) and its grayscale conversion (right)]
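For reference, the grayscale conversion is nothing fancy.  A minimal sketch (mine, assuming the usual packed ARGB layout of WriteableBitmap.Pixels) looks something like this:

WriteableBitmap ToGrayscale(WriteableBitmap color)
{
    WriteableBitmap grayscale =
        new WriteableBitmap(color.PixelWidth,
            color.PixelHeight);

    for (int pixelIndex = 0;
        pixelIndex < color.Pixels.Length;
        pixelIndex++)
    {
        int pixel = color.Pixels[pixelIndex];

        //pull the channels out of the packed ARGB integer
        byte r = (byte)((pixel >> 16) & 0xFF);
        byte g = (byte)((pixel >> 8) & 0xFF);
        byte b = (byte)(pixel & 0xFF);

        //average the three channels
        byte intensity = (byte)((r + g + b) / 3);

        //pack the single intensity back into all three channels
        grayscale.Pixels[pixelIndex] = (255 << 24)
            | (intensity << 16)
            | (intensity << 8)
            | intensity;
    }

    return grayscale;
}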

Mean Threshold

Almost every time an image processing problem arises and a simple average is a possible answer, my gut tells me to go for it.  In this case, my gut is going to lead me astray, but first some code:

WriteableBitmap ToAverageBinary(WriteableBitmap grayscale)
{
    WriteableBitmap binary =
        new WriteableBitmap(grayscale.PixelWidth,
            grayscale.PixelHeight);

    int[] histogramData = new int[256];
    int maxCount = 0;

    //first we will determine the histogram
    //for the grayscale image
    for (int pixelIndex = 0;
        pixelIndex < grayscale.Pixels.Length;
        pixelIndex++)
    {
        byte intensity = (byte)grayscale.Pixels[pixelIndex];

        //simply add another to the count
        //for that intensity
        histogramData[intensity]++;

        if (histogramData[intensity] > maxCount)
        {
            maxCount = histogramData[intensity];
        }
    }

    //now we need to figure out the average intensity
    long average = 0;
    for (int intensity = 0; intensity < 256; intensity++)
    {
        average += intensity * histogramData[intensity];
    }

    //this is our threshold
    average /= grayscale.Pixels.Length;

    for (int pixelIndex = 0;
        pixelIndex < grayscale.Pixels.Length;
        pixelIndex++)
    {
        byte intensity = (byte)grayscale.Pixels[pixelIndex];

        //now we're going to set the pixels
        //greater than or equal to the average
        //to white and everything else to black
        if (intensity >= average)
        {
            unchecked { binary.Pixels[pixelIndex] = (int)0xFFFFFFFF; }
        }
        else
        {
            unchecked { binary.Pixels[pixelIndex] = (int)0xFF000000; }
        }
    }

    //note that this is the ORIGINAL histogram
    //not the histogram for the binary image
    PlotHistogram(histogramData, maxCount);

    //this is a line to show where the
    //average is relative to the rest of
    //the histogram
    Line averageLine = new Line();
    averageLine.X1 = HistogramPlot.Width * average / 255;
    averageLine.X2 = HistogramPlot.Width * average / 255;
    averageLine.Y1 = 0;
    averageLine.Y2 = HistogramPlot.Height;
    averageLine.Stroke = new SolidColorBrush(Colors.Magenta);
    averageLine.StrokeThickness = 2;
    averageLine.StrokeDashArray = new DoubleCollection() { 5, 2.5 };
    HistogramPlot.Children.Add(averageLine);

    return binary;
}

[Figure: the binary result from the mean threshold, and the histogram with the average marked by the magenta line]

 

While the puzzle pieces came through loud and clear, so did a lot of garbage.  Notice how the average line (magenta) splits the peak?  This is going to lead to a large number of false black pixels.  What we want is an automated technique for shifting the threshold left.

 

Two Peak

 

If you look at the original grayscale image you’ll notice that the puzzle pieces all sit close to one intensity while the background sits at another.  This shows up in the histogram as a large peak (the background) and an almost imperceptible peak toward the dark end of the spectrum (the puzzle pieces).

 

Finding the large peak is trivial.  It’s just the intensity that occurs most often.  The small peak, on the other hand, is a little trickier.  It’s not the second most frequent intensity – that’s right next to the largest peak.  A little trick is to give a higher weight to intensities that are far from the highest peak.  To do this we multiply the count at each intensity by the square of its distance from the peak.  That’s a mouthful, but it’s pretty easy.  Here’s the code:

 

//maxPeak is the intensity with the highest count in the
//histogram (the big background peak we already found)
int secondPeak = 0;
long secondPeakCount = 0;
for (int intensity = 0; intensity < 256; intensity++)
{
    //weight each count by the square of its distance from the first peak
    long adjustedPeakCount =
        (long)(histogramData[intensity] * Math.Pow(intensity - maxPeak, 2));
    if (adjustedPeakCount > secondPeakCount)
    {
        secondPeak = intensity;
        secondPeakCount = adjustedPeakCount;
    }
}

So we calculate an adjusted count for each intensity.  By multiplying each count by the square of its distance from the first peak, we give a bigger boost to intensities further away from it.  Amazingly, this works even in this case where the second peak is so small.
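I won’t belabor how the threshold itself falls out of the two peaks.  A simple option (this is just a sketch, and the midpoint is my assumption rather than the only choice) is to split the difference between maxPeak and secondPeak and binarize against that:

//assumption: split the difference between the two peaks
int threshold = (maxPeak + secondPeak) / 2;

for (int pixelIndex = 0;
    pixelIndex < grayscale.Pixels.Length;
    pixelIndex++)
{
    byte intensity = (byte)grayscale.Pixels[pixelIndex];

    //same binarization as before, just against the new threshold
    unchecked
    {
        binary.Pixels[pixelIndex] = intensity >= threshold
            ? (int)0xFFFFFFFF   //background
            : (int)0xFF000000;  //object of interest
    }
}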

[Figure: the binary result from the two-peak threshold, and the histogram with the peaks marked in green and the new threshold in magenta]

Wow!  Those results are so much better.  Notice that I marked the two peaks in green and the new threshold in magenta.

Summary

Don’t use the mean.  You might get lucky in a few cases, but in general you’ll end up in a messy situation.  That said, the two-peak solution will work well in cases where your objects are a consistent color and that color is distinct from the background.  I created this photo because I knew it would give good results.  I highly recommend you try these techniques out on your own photos so you can get a feel for when they will work and when they won’t.

We will most likely revisit different thresholding techniques for different situations when they arise, but for now we have a binary image so we’re going to see what we can do with it.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2016%20-%20Binary%20Images.zip

Up Next: Sets





Phone Vision 15 – Median Filters

1 03 2011

Filters, filters, filters… Tired of them yet?  Me too.  After this we’ll leave them alone for a while.  I promise.  Until then, we have work to do.

What is a median filter?  If you understand the average filter (or mean filter) then the median filter should give you no troubles at all – assuming you know what the median is.  We’ll start there.

What is the Median?

The median is the middle number of a sample.  To find it, you sort your list and pick the middle item.  (If the count is even you average the two middle items, but our filter windows will always hold an odd number of pixels.)  Yes, it really is that simple.

Here is a random set of 9 numbers:

[Figure: nine random numbers]

Sort them:

[Figure: the same numbers sorted]

Pick the middle value:

[Figure: the middle value highlighted]

Like I said, it’s simple.
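In code it’s just as short.  A quick throwaway example with made-up numbers (plain LINQ, nothing phone-specific):

using System;
using System.Linq;

class MedianExample
{
    static void Main()
    {
        //nine made-up numbers
        int[] sample = { 9, 3, 41, 12, 7, 25, 18, 5, 30 };

        //sort them
        int[] sorted = sample.OrderBy(v => v).ToArray();

        //pick the middle item (works because the count is odd)
        int median = sorted[sorted.Length / 2];

        Console.WriteLine(median);   //prints 12
    }
}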

Why use median instead of mean?

The median is particularly effective against certain types of noise – specifically “salt and pepper” noise (black and white specks).  Let’s assume this is the neighborhood we’re dealing with:

[Figure: a sample neighborhood containing a few 0-valued (pepper) pixels]

Performing the mean gives:

[Figure: the mean of the neighborhood]

While the median returns:

[Figure: the median of the neighborhood]

If we expect that the 0 values are noise, then the median is probably a much closer estimate.
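To put some made-up numbers on it: suppose the neighborhood were 200, 198, 0, 203, 201, 0, 199, 202, 0, with the zeros being pepper noise.  The mean works out to about 134, a value that matches nothing in the neighborhood, while the median is 199, right in line with the uncorrupted pixels.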

Code

private WriteableBitmap MedianFilter(WriteableBitmap grayscale, int radius)
{
    // we are still going to create a new image
    // because we don't want to modify the
    // old image as we are processing it
    WriteableBitmap filtered =
        new WriteableBitmap(
            grayscale.PixelWidth,
            grayscale.PixelHeight);

    // boilerplate code for our
    // histogram stuff
    int[] histogram = new int[256];
    int maxIntensity = 0;

    // the math is still easier if we create two loops
    // instead of one
    for (int y = 0; y < grayscale.PixelHeight; y++)
    {
        for (int x = 0; x < grayscale.PixelWidth; x++)
        {
            //here's the pixel we're centered on
            int pixel = x + y * grayscale.PixelWidth;
            byte intensity = (byte)grayscale.Pixels[pixel];

            // if we are on an edge we are going to leave it
            // as the original intensity.  you will see the
            // edges increasingly unsmoothed as the window
            // size increases.  here we are using the radius
            // to determine our bounds
            if (y <= radius - 1 ||
                x <= radius - 1 ||
                y >= grayscale.PixelHeight - radius ||
                x >= grayscale.PixelWidth - radius)
            {
                filtered.Pixels[pixel] = grayscale.Pixels[pixel];

                histogram[intensity]++;
                if (histogram[intensity] > maxIntensity)
                {
                    maxIntensity = histogram[intensity];
                }
                continue;
            }

            /////////////////////////////////////////////////////////
            // IMPORTANT PART ///////////////////////////////////////
            /////////////////////////////////////////////////////////
            // this list is the key
            // it contains all of the neighboring pixels
            List<byte> localIntensities = new List<byte>();
            for (int yoffset = -radius; yoffset <= radius; yoffset++)
            {
                for (int xoffset = -radius;
                    xoffset <= radius;
                    xoffset++)
                {
                    localIntensities.Add(
                        (byte)grayscale.Pixels[(x + xoffset)
                        + (y + yoffset) * grayscale.PixelWidth]);
                }
            }
            //sort the intensities
            localIntensities.Sort();
            //pick the middle value (the window always holds an odd
            //number of pixels, so integer division lands on it)
            int medianLocalIntensity =
                localIntensities[localIntensities.Count / 2];
            /////////////////////////////////////////////////////////
            // END IMPORTANT PART ///////////////////////////////////
            /////////////////////////////////////////////////////////

            // and now just set the color
            filtered.Pixels[pixel] = (255 << 24)
                | (byte)medianLocalIntensity << 16
                | (byte)medianLocalIntensity << 8
                | (byte)medianLocalIntensity;

            histogram[(byte)medianLocalIntensity]++;
            if (histogram[(byte)medianLocalIntensity] > maxIntensity)
            {
                maxIntensity = histogram[(byte)medianLocalIntensity];
            }
        }
    }

    PlotHistogram(histogram, maxIntensity);
    return filtered;
}

 

Results

Taking a simple gray image, I added salt and pepper noise and then applied both a median filter and an average filter.  The results are stunning.

[Figure: the flat gray image (left) and the same image with 10% salt and pepper noise (right)]

[Figure: the noisy image after median filtering (left) and after mean filtering (right)]

There are a few specks left after applying the median filter, but the noise is removed pretty well.  The average filter performed dismally, to say the least.

Summary

If you expect salt and pepper noise then the median filter is a great tool to have in your toolbox.  If you want, you can explore max, min, and mode filters.  I don’t think we’ll cover them here unless we have a specific application for them.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2015%20-%20Median%20Filter.zip

(code includes salt and pepper noise generation)

Up Next: Binary Images

Feel free to leave a comment or follow @azzlsoft on Twitter if you have any questions.





Phone Vision 14 – Sobel Operators

25 02 2011

Last time we explored how we can use the rate of change between adjacent pixels to enhance a boundary between two features in an image.  Today we are going to take that one step further and isolate edges using the Sobel operators.  Hold on tight…

The Sobel Operators

A search for the Sobel operators will quickly turn up these two filters:

The first filter:

-1   0  +1
-2   0  +2
-1   0  +1

The second filter:

-1  -2  -1
 0   0   0
+1  +2  +1
I created a rather colorful graphic to demonstrate some of the concepts.

[Figure: the colorful test graphic]

[Figure: the test graphic after the first filter and after the second filter]

What do you notice?  Here’s what I notice:

  • All of the non-edges turn black.
  • Diagonal edges show up in both filters.
  • The first filter misses horizontal edges.
  • The second filter misses vertical edges.

The union of these two images will give you a more complete view of the edges in the system.  If that’s all you wanted to know then you’re done and you can stop here.
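If you want to see that union in code, here is a sketch in the same style as the earlier filters (mine, not the downloadable sample).  It applies both kernels at every interior pixel and merges the two responses into a single edge strength using the usual gradient magnitude, clamped to 255:

WriteableBitmap SobelEdges(WriteableBitmap grayscale)
{
    WriteableBitmap edges =
        new WriteableBitmap(grayscale.PixelWidth,
            grayscale.PixelHeight);

    //the two Sobel kernels
    int[,] first = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    int[,] second = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    //skip the outermost ring of pixels so the
    //3x3 window always fits inside the image
    for (int y = 1; y < grayscale.PixelHeight - 1; y++)
    {
        for (int x = 1; x < grayscale.PixelWidth - 1; x++)
        {
            int sumFirst = 0;
            int sumSecond = 0;

            //apply both kernels to the 3x3 neighborhood
            for (int yoffset = -1; yoffset <= 1; yoffset++)
            {
                for (int xoffset = -1; xoffset <= 1; xoffset++)
                {
                    byte intensity = (byte)grayscale.Pixels[
                        (x + xoffset) + (y + yoffset) * grayscale.PixelWidth];

                    sumFirst += intensity * first[yoffset + 1, xoffset + 1];
                    sumSecond += intensity * second[yoffset + 1, xoffset + 1];
                }
            }

            //combine the two responses into one edge strength
            int magnitude = (int)Math.Sqrt(sumFirst * sumFirst +
                sumSecond * sumSecond);
            if (magnitude > 255) magnitude = 255;

            edges.Pixels[x + y * grayscale.PixelWidth] = (255 << 24)
                | (magnitude << 16)
                | (magnitude << 8)
                | magnitude;
        }
    }

    return edges;
}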

I’m curious how they work so I’m going to dig deeper.

Gradient

When you are driving down a mountain you might see a sign indicating a steep grade.  The grade in the context of a mathematical function is no different.  We are trying to find the slope of the function at a given point.  This is called the gradient.

Last time we looked at how fast a function was changing by subtracting adjacent intensities.    If you think about it, it’s easy to imagine areas with a lot of change having a steep slope, right?  So have we already found the gradient?  Yes and no.  Hold on, this is going to get rough.

Subtracting the adjacent intensities, we determined the slope from one pixel to the next, but we didn’t determine the slope at the pixel.  Because we are dealing with discrete values (i.e. there are no ‘in-between’ pixels), the best we can hope for is an approximation of the slope at a pixel.  I am hoping that an example will clear this up a bit.

Imagine the sin function represents some hilly terrain you are driving over.

[Figure: a sine curve representing the hilly terrain]

It should be fairly apparent that at the top of the hill our grade is flat (slope = 0).  Let’s imagine we have a GPS recording our position every so often so we get an approximation of the terrain.

[Figure: the terrain sampled at regular intervals]

Using the same technique as the last lesson, let’s try to approximate the slope at the top of the hill.  Last time we said the change was f(x+1) - f(x).

[Figure: the slope at the crest estimated with f(x+1) - f(x)]

Well, that’s not very good.  It turns out that if we adjust our formula ever so slightly we get much better results.  What if we use the position right before we got to the top of the hill instead of the position at the top, so the difference becomes f(x+1) - f(x-1)?

[Figure: the slope at the crest estimated with f(x+1) - f(x-1)]

That is a much better approximation, but what now?
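To put a number on it (my own quick check, not from the graphs): if the crest sits at x = π/2 and the samples are a distance h apart, the one-sided difference sin(π/2 + h) - sin(π/2) equals cos(h) - 1, roughly -h²/2, which wrongly reports a downhill slope at the very top.  The symmetric difference sin(π/2 + h) - sin(π/2 - h) equals cos(h) - cos(h) = 0, exactly the flat grade we expect.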

Gradient Edge Detection

Let’s convert our new difference function to our filter representation. 

Remember,

  • f(x+1) = next position (right after the top)
  • f(x) = current position (at the top)
  • f(x-1) = previous position (right before the top)

so our gradient function looks like:

-1*f(x-1) + 0*f(x) + 1*f(x+1)

or

[ -1   0  +1 ]

Following the same process in the y direction we end up with

-1*f(y-1) + 0*f(y) + 1*f(y+1)

or

[ -1 ]
[  0 ]
[ +1 ]

If we apply these we end up with:

[Figure: the test graphic after the horizontal and vertical gradient filters]

It’s a much weaker signal, but I can still spot the edges. 

If these filters make sense to you then you should be able to answer the following questions:

  • Why are the horizontal lines still missing in the first image?
  • And the vertical lines still missing in the second image? 
  • Why do the diagonals show up?

Separable Filters

You might be thinking, “um, I guess this is nice, but what does it have to do with the Sobel?”  Everything.

We don’t have the tools yet to completely analyze this, but the Sobel operator is a separable filter.  This means it is a combination of two 1D filters: a smoothing filter multiplied with the gradient filter we just built.

A couple of notes:

  • [1,2,1] is a 1D weighted smoothing filter
  • This is matrix multiplication.  If you need a refresher, check out the Khan Academy video on the subject.  I love that place.

 

the column [1, 2, 1] times the row [-1, 0, +1]:

-1   0  +1
-2   0  +2
-1   0  +1

and

the column [-1, 0, +1] times the row [1, 2, 1]:

-1  -2  -1
 0   0   0
+1  +2  +1

Whoa!  Mind blown.
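If you’d rather convince yourself with code than with matrix algebra, here is a quick throwaway console check (mine, not part of the sample project).  It smooths each column with [1, 2, 1], differences across the row with [-1, 0, +1], and confirms the result matches the full 3×3 filter on a random test image:

using System;

class SeparabilityCheck
{
    static void Main()
    {
        //a small random "image"
        Random random = new Random(42);
        int size = 8;
        int[,] image = new int[size, size];
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                image[y, x] = random.Next(256);

        //the first Sobel filter
        int[,] sobel = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };

        bool allMatch = true;
        for (int y = 1; y < size - 1; y++)
        {
            for (int x = 1; x < size - 1; x++)
            {
                //apply the full 3x3 filter
                int full = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        full += image[y + j, x + i] * sobel[j + 1, i + 1];

                //two 1D passes: smooth down each column with [1,2,1],
                //then difference across the row with [-1,0,+1]
                int left = image[y - 1, x - 1] + 2 * image[y, x - 1] + image[y + 1, x - 1];
                int right = image[y - 1, x + 1] + 2 * image[y, x + 1] + image[y + 1, x + 1];
                int separable = right - left;

                if (full != separable) allMatch = false;
            }
        }

        Console.WriteLine(allMatch
            ? "separable passes == full 3x3 filter"
            : "mismatch!");
    }
}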

Summary

We are starting to add some useful tools to our toolbox, but we are dancing around some really complex concepts.  Over the next few lessons we will continue working with a few more simple blur, sharpen, and edge detection routines before we jump head first into frequency analysis.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2014%20-%20Sobel%20Operator.zip

Up Next: Median Filter

Feel free to leave a comment or follow @azzlsoft on Twitter if you have any questions.