Phone Vision 06 – RGB Color Intensities

21 01 2011

I enjoyed our last blog’s bit of history, but how can we make something more useful than a spinning top?  It turns out that the fundamental principle Maxwell described for color photography is still in use today.  Though the medium is different, for all intents and purposes we are still taking three pictures – one with a red filter, one with a green filter, and one with a blue filter.  Those images are then combined to form a color photograph. 

Color Intensity

When we take a picture, we are just measuring the intensity of light at a given point.  With a sensor alone we would only be capable of creating a map of intensities.  This is definitely useful in some situations, but it ignores oodles of information humans can decipher naturally – color.

Using a filter that only lets through a specific color, we can measure the intensity of that color.  Remember, we need at least three distinct colors to reproduce the gamut of colors our eyes can distinguish.  To capture three different colors we need three different filters.

In the tender, early years of color photography, three separate pictures had to be taken.  I can’t imagine it was easy to capture a moving target.  Some enterprising people came up with a way to split the light into the three different wavelengths and then aim those beams at three separate light-sensitive plates.  Clever as it may be, it can also be expensive and bulky.  There has to be a better way.

Bayer Filter

Conceptually, this next idea is very simple.  From a manufacturing standpoint, I’m not so certain.   Regardless, most modern digital cameras use a single image sensor covered by a grid of color filters arranged in what’s called a Bayer pattern (named after its inventor Bryce Bayer).

[Figure: the Bayer filter pattern, a mosaic of red, green, and blue filters laid over the image sensor]

The astute observer might notice that there are twice as many green filters as there are red or blue.  Wikipedia says this is because humans have a heck of a lot of rod cells (light-sensing) and they are most sensitive to green light (~498 nm if you care).
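To picture the layout, here is the repeating pattern we will emulate later in this post (the exact arrangement and phase vary between sensors, but every 2×2 cell contains two greens, one red, and one blue):

G B G B
R G R G
G B G B
R G R G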

In its raw form, the output of the camera above could be thought of as 3 separate images with missing information:

[Figures: the red, green, and blue mosaics, each with missing values marked with question marks]

Converting this raw data into a human-friendly image is a process called demosaicing.  There are lots and lots and lots of ways to accomplish this.  They range from the very simple – like treating each 2×2 group as one pixel and averaging its two green values – to a little more complex – like filling in the gaps by averaging neighboring pixels – to way beyond the scope of this blog.
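To make that simplest approach concrete, here is a minimal sketch of the “each 2×2 cell becomes one pixel” idea.  It is my own illustration rather than part of this lesson’s download, the helper name HalfSizeDemosaic is made up, and it assumes the green/blue/red/green cell layout we build in the next section (one color value per raw pixel):

WriteableBitmap HalfSizeDemosaic(WriteableBitmap raw)
{
    // one output pixel per 2x2 Bayer cell
    WriteableBitmap result =
        new WriteableBitmap(raw.PixelWidth / 2, raw.PixelHeight / 2);
    for (int y = 0; y < result.PixelHeight; y++)
    {
        for (int x = 0; x < result.PixelWidth; x++)
        {
            // pull the four raw samples out of the cell
            int g1 = (raw.Pixels[2 * x + 2 * y * raw.PixelWidth] >> 8) & 0xFF;
            int b = raw.Pixels[2 * x + 1 + 2 * y * raw.PixelWidth] & 0xFF;
            int r = (raw.Pixels[2 * x + (2 * y + 1) * raw.PixelWidth] >> 16) & 0xFF;
            int g2 = (raw.Pixels[2 * x + 1 + (2 * y + 1) * raw.PixelWidth] >> 8) & 0xFF;
            // average the two green samples
            int g = (g1 + g2) / 2;
            result.Pixels[x + y * result.PixelWidth] =
                (255 << 24) | (r << 16) | (g << 8) | b;
        }
    }
    return result;
}

The trade-off is obvious: the result is only half the width and half the height of the raw image.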

Artificial Bayer Filter

The images we deal with are already demosaiced.  For this lesson we want to mess around with a raw image, but we don’t have one.  With that in mind, we are just going to have to “de-demosaic” an image.  For the color [255,128,64], the “de-demosaic” transform looks something like this:

[Figure: the pixel [255,128,64] expanded into a 2×2 Bayer cell: green and blue on top, red and green below]

Notice that our width will be twice the original width and our height will be twice the original height.  This will quadruple the size of the image.  Another important piece: the original coordinate (x,y) becomes 4 separate coordinates in the new image:

g1 = (2x, 2y)    b = (2x+1, 2y)
r = (2x, 2y+1)    g2 = (2x+1, 2y+1)

Let’s see what we can do.

//This assumes you already have the image in
//WriteableBitmap form. Refer to earlier lessons
//if you're not sure how to do this.
WriteableBitmap ToBayerRaw(WriteableBitmap bmp)
{
    //we need to double the dimensions of the target bitmap
    WriteableBitmap bayerRaw =
        new WriteableBitmap(bmp.PixelWidth * 2, bmp.PixelHeight * 2);

    //we loop through every column, row by row.
    //the two nested loops are not strictly necessary, but
    //they demonstrate the concept more clearly than a single loop would
    for (int y = 0; y < bmp.PixelHeight; y++)
    {
        for (int x = 0; x < bmp.PixelWidth; x++)
        {
            //first recover the RGB pixel data
            int pixel = bmp.Pixels[x + y * bmp.PixelWidth];

            // remember that (x,y) translates to 4 coordinates
            // g1 = (2x, 2y)
            // b  = (2x+1, 2y)
            // r  = (2x, 2y+1)
            // g2 = (2x+1, 2y+1)
            // also note that we are using the new pixel width
            // as our offset
            // notice the masks in use as well

            //upper left green
            bayerRaw.Pixels[2 * x + 2 * y * bayerRaw.PixelWidth]
                = (int)(pixel & 0xFF00FF00);

            //upper right blue
            bayerRaw.Pixels[2 * x + 1 + 2 * y * bayerRaw.PixelWidth]
                = (int)(pixel & 0xFF0000FF);

            //lower left red
            bayerRaw.Pixels[2 * x + (2 * y + 1) * bayerRaw.PixelWidth]
                = (int)(pixel & 0xFFFF0000);

            //lower right green
            bayerRaw.Pixels[2 * x + 1 + (2 * y + 1) * bayerRaw.PixelWidth]
                = (int)(pixel & 0xFF00FF00);
        }
    }
    return bayerRaw;
}

 

[Figures: the original image and the artificial Bayer raw produced by ToBayerRaw]

Since half of the pixels are green and only a quarter are red and a quarter are blue, we would expect a green tint.  Matrixesque, am I right?

Interpolation

Now that we have a raw file to work with, let’s see if we can fill in some of the “gaps” we’ve created.  There are quite a few established techniques, but we are going to ignore them and try to figure one out on our own.  Let’s start with the red pixels:

[Figure: the red mosaic with missing values marked with question marks]

The first thing I notice is that some of the “?” pixels have immediate neighbors with values, so let’s tackle those first.  My gut says that averaging them would yield a decent approximation:

[Figure: each missing red value that sits between two known neighbors is replaced by their average]

This gives us values for 8 of the missing pixels, leaving 4 behind:

[Figure: the red mosaic after the first pass; four gaps remain]

If we ran through the image one more time, we could fill in the remaining gaps.  On the second pass, the new top, bottom, left, and right values would determine the values of the remaining missing pixels:

[Figure: the remaining gaps filled from the interpolated top, bottom, left, and right values]

Wait a minute… Top, bottom, left, and right are themselves averages of the original pixels.  Can we use that?  Let’s label a set of original pixels w, x, y, and z.

[Figure: a 2×2 block of original red pixels labeled w (top left), x (top right), y (bottom left), and z (bottom right)]

That means that top, bottom, left, and right are defined as:

top = (w + x) / 2

bottom = (y + z) / 2

left = (w + y) / 2

right = (x + z) / 2

This simplifies rather nicely (I left out a couple steps):

center = (top + bottom + left + right) / 4 = (w + x + y + z) / 4

What I like to call the average of the original four pixels.  This is intuitive for a lot of us, but it’s nice to have some math to back it up.
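If you would rather see it numerically than algebraically, here is a tiny check of my own (not part of the download) that fills the center gap both ways with some arbitrary red values:

// numerical check: second-pass average vs. direct four-pixel average
double w = 10, x = 200, y = 35, z = 90;   // arbitrary red values
double top = (w + x) / 2;
double bottom = (y + z) / 2;
double left = (w + y) / 2;
double right = (x + z) / 2;
double twoPass = (top + bottom + left + right) / 4;
double direct = (w + x + y + z) / 4;
// both come out to 83.75
System.Diagnostics.Debug.WriteLine(twoPass + " vs " + direct);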

Putting it all Together

WriteableBitmap Demosaic(WriteableBitmap bmp)
{
    WriteableBitmap interpolatedBmp =
        new WriteableBitmap(bmp.PixelWidth, bmp.PixelHeight);
    // we are going to cheat and ignore the boundaries
    // dealing with the boundaries is not difficult,
    // but the code is long enough as it is
    for (int y = 1; y < bmp.PixelHeight - 1; y++)
    {
        for (int x = 1; x < bmp.PixelWidth - 1; x++)
        {
            //first we are going to recover the pixel neighborhood.
            //the mask at the end simply gets rid of the alpha channel
            //middlecenter is the pixel we are working with (x,y)
            int topleft =
                bmp.Pixels[x - 1 + (y - 1) * bmp.PixelWidth] & 0xFFFFFF;
            int topcenter =
                bmp.Pixels[x + (y - 1) * bmp.PixelWidth] & 0xFFFFFF;
            int topright =
                bmp.Pixels[x + 1 + (y - 1) * bmp.PixelWidth] & 0xFFFFFF;

            int middleleft =
                bmp.Pixels[x - 1 + y * bmp.PixelWidth] & 0xFFFFFF;
            int middlecenter =
                bmp.Pixels[x + y * bmp.PixelWidth] & 0xFFFFFF;
            int middleright =
                bmp.Pixels[x + 1 + y * bmp.PixelWidth] & 0xFFFFFF;

            int bottomleft =
                bmp.Pixels[x - 1 + (y + 1) * bmp.PixelWidth] & 0xFFFFFF;
            int bottomcenter =
                bmp.Pixels[x + (y + 1) * bmp.PixelWidth] & 0xFFFFFF;
            int bottomright =
                bmp.Pixels[x + 1 + (y + 1) * bmp.PixelWidth] & 0xFFFFFF;

            int blue = 0;
            int red = 0;
            int green = 0;

            // if we are on an even row and an even column
            // like (2, 2) then we are on a green pixel:
            // for red we average top center and bottom center,
            // for blue we average middle left and middle right
            if (y % 2 == 0 && x % 2 == 0)
            {
                red = (((topcenter >> 16) +
                    (bottomcenter >> 16)) / 2) << 16;
                blue = (middleleft + middleright) / 2;
                green = middlecenter;
            }
            // if we are on an even row and an odd column
            // like (2, 1) then we are on a blue pixel:
            // red is an average of top left, top right,
            // bottom left, and bottom right;
            // green is an average of top center, bottom center,
            // middle left, and middle right
            else if (y % 2 == 0 && x % 2 == 1)
            {
                red = (((topleft >> 16) +
                    (topright >> 16) +
                    (bottomleft >> 16) +
                    (bottomright >> 16)) / 4) << 16;
                blue = middlecenter;
                green = (((topcenter >> 8) +
                    (bottomcenter >> 8) +
                    (middleleft >> 8) +
                    (middleright >> 8)) / 4) << 8;
            }
            // if we are on an odd row and an even column
            // like (1, 2) then we are on a red pixel:
            // blue is an average of top left, top right,
            // bottom left, and bottom right;
            // green is an average of top center, bottom center,
            // middle left, and middle right
            else if (y % 2 == 1 && x % 2 == 0)
            {
                red = middlecenter;
                blue = (topleft +
                    topright +
                    bottomleft +
                    bottomright) / 4;
                green = (((topcenter >> 8) +
                    (bottomcenter >> 8) +
                    (middleleft >> 8) +
                    (middleright >> 8)) / 4) << 8;
            }
            // otherwise we are on an odd row and odd column
            // like (1, 1). this is a green pixel:
            // red is the average of middle left and middle right,
            // blue is the average of top center and bottom center
            else
            {
                red = (((middleleft >> 16) +
                    (middleright >> 16)) / 2) << 16;
                blue = (topcenter + bottomcenter) / 2;
                green = middlecenter;
            }

            interpolatedBmp.Pixels[x + y * interpolatedBmp.PixelWidth]
                = (255 << 24) | red | green | blue;
        }
    }
    return interpolatedBmp;
}

 

With a little luck, our image should look pretty close to the original.

[Figures: the original image (left), the artificial Bayer raw (middle), and the demosaiced result (right)]

On the left you’ll find the original, followed by the Bayer raw version in the middle and, finally, the demosaiced result on the right.  I think we did alright, eh?  This image is large, much larger than the viewing space.  If, however, you were to perform the same procedure on an image that is smaller than the viewing space, you would most likely see artifacts from the resizing.

Summary

Hopefully you learned a little about how cameras actually work and maybe a little math.  It wasn’t too hard, but trying to keep all those colors straight can make the code messy.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2006%20-%20RGBColorIntensities.zip

Up next: Grayscale

follow @azzlsoft





Phone Vision 05 – A Brief History of Color

18 01 2011

You should now have a pretty solid grasp of how to effectively manipulate the pixels of an image.  I guess now we should talk a little bit about color.  Why, after all, do we pick red, green, and blue as the primary colors from which to build things?

 

History

  • 1666 – Isaac Newton demonstrates white light is a combination of colors.
  • 1802 – Thomas Young postulates the existence of three types of photoreceptors in human eyes.
  • 1849 – James David Forbes experiments with rapidly spinning tops creating the illusion of a single color out of a mixture.
  • 1850 – Hermann von Helmholtz classifies human color receptors as short, medium, and long, corresponding to the wavelengths they are sensitive to.
  • 1861 – James Clerk Maxwell demonstrates a color photographic process using red, green, and blue light.

It turns out that the long, medium, and short photoreceptors (cones) in our eyes roughly correspond to red, green, and blue, respectively.  When emitting light, these three colors can be combined in various ways to produce all of the colors most humans are capable of seeing.  There are several color spaces we can use to generate human-friendly colors, but today we will focus on RGB.

 

The Forbes Top
(I made this name up)

Clearly, techniques for capturing and displaying color have been refined over the last few centuries.   We now enjoy color televisions, LCD screens, web cams, and so-called ‘retina displays’ to name a few.  Today, we are going to take a step back and try to simulate the rapidly spinning tops that James Forbes first experimented with.  The results will be somewhat crude, but they should demonstrate the concepts.

The most important component of this application is actually the design of the colorized circle that will represent our top.  I tried several different approaches before settling on 36 slices (12 per color) with the colors alternating.  Couple this with an adjustable rotation and an illusion of a single blended color can be established.

Here is the final product with red = 0, blue = 255, and green = 128:

[Figure: the simulated top with red = 0, green = 128, and blue = 255]

 

My attempts at a screen capture video failed.  You’ll just have to download the code and try it yourself.  The only tricky component is actually drawing the wheel so I will walk through that piece.  As always, the rest of the code will be downloadable below.

 

Draw a Pie

private void DrawPie()
{
    //drawing board is just a canvas
    //let's start with a clean slate
    DrawingBoard.Children.Clear();

    //red, green, and blue are member
    //variables adjusted with sliders
    double colorsum = red + green + blue;

    // a circle has 360 degrees or 2*Pi radians
    // we need to divide that into the different color
    // components.
    double redAngle = 2 * Math.PI * (red / colorsum);
    double greenAngle = 2 * Math.PI * (green / colorsum);
    double blueAngle = 2 * Math.PI * (blue / colorsum);

    // this part is tricky
    // I am essentially doing a gamma function
    // to try to keep the intensities close
    // I eyeballed it so it's not 100% accurate
    byte intensity =
        (byte)Math.Min(
        Math.Pow((red * .33
            + green * .34
            + blue * .33), 1.2), 255);

    Point start = new Point(0, 100);
    // we are going to divide our top
    // into 36 slices (12 per color)
    const int numSlices = 36;
    for (int i = 0; i < numSlices; i++)
    {
        SolidColorBrush brush = new SolidColorBrush(Colors.Black);
        double angle = 0;
        switch (i % 3)
        {
            case 0: //red
                angle = redAngle / (numSlices / 3);
                brush = new SolidColorBrush(
                    Color.FromArgb(255, intensity, 0, 0));
                break;
            case 1: //green
                angle = greenAngle / (numSlices / 3);
                brush = new SolidColorBrush(
                    Color.FromArgb(255, 0, intensity, 0));
                break;
            case 2: //blue
                angle = blueAngle / (numSlices / 3);
                brush = new SolidColorBrush(
                    Color.FromArgb(255, 0, 0, intensity));
                break;
        }

        // this code was essentially lifted from the Silverlight forums
        // http://forums.silverlight.net/forums/t/9013.aspx
        // Yi-Lun Luo posted it as a method for drawing pie graphs
        // I modified it for the color top
        // I will warn you that if your angle is > 90 degrees
        // you will most likely run into trouble
        Path path = new Path();
        PathGeometry geometry = new PathGeometry();
        PathFigure figure = new PathFigure();
        // The center point is 0,0.
        figure.StartPoint = new Point(0, 0);
        LineSegment line1 = new LineSegment();
        // Draw a line from the center of the circle to the start
        // point of the sector's arc.
        line1.Point = start;
        figure.Segments = new PathSegmentCollection();
        figure.Segments.Add(line1);
        ArcSegment arc = new ArcSegment();
        arc.Size = new Size(100, 100);
        arc.SweepDirection = SweepDirection.Clockwise;
        Point p = start;
        // Perform a rotate on the start point of the arc
        // to compute the end point.
        p.X = start.X * Math.Cos(angle) - start.Y * Math.Sin(angle);
        p.Y = start.X * Math.Sin(angle) + start.Y * Math.Cos(angle);
        // The next sector's arc will begin from this one's end point.
        start = p;
        arc.Point = p;
        figure.Segments.Add(arc);
        LineSegment line2 = new LineSegment();
        // Draw another line from the sector's end point
        // to the center of the circle.
        line2.Point = new Point(0, 0);
        figure.Segments.Add(line2);
        geometry.Figures = new PathFigureCollection();
        geometry.Figures.Add(figure);
        path.Data = geometry;
        // Fill this sector with the color chosen above.
        path.Fill = brush;
        DrawingBoard.Children.Add(path);
        // Since our circle ranges from -100,-100 to 100,100,
        // we'll set the Path's Left and Top to show the whole graph.
        path.SetValue(Canvas.LeftProperty, 100.0);
        path.SetValue(Canvas.TopProperty, 100.0);
    }
}

 

 

Summary

 

The most useful piece from this lesson might be the code for drawing a pie graph, but a bit of history is always fun.  The bottom line is that RGB comes from biology: our eyes are sensitive to those colors, so if we mix them we should be able to create images that mimic real life.

 

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2005%20-%20HistoryOfColor.zip

 

Up Next: RGB Color Intensities

 





Phone Vision 04 – Premultiplied Alpha

13 01 2011

Previously we explored encoding pixels from color components.  This works perfectly if our alpha value is 255, and in most cases it will be.  But, what if we need transparency?

 

Pixel Format (again)

From MSDN:

WriteableBitmap Pixel Format in Silverlight

When assigning colors to pixels in your bitmap, use pre-multiplied colors. The format used by the Silverlight WriteableBitmap is ARGB32 (premultiplied RGB). The format becomes relevant if you are populating the integer values in the Pixels array.

The initial pixel values in the dimensioned array are 0, which will render as black if left unaltered.

I want to clarify one thing.  The documentation says the pixel values are set to 0 by default (which is correct) and that the bitmap will render as black if left unaltered (which is incorrect).  It will render as transparent because the alpha values are also 0.  Don’t believe me?  Try it.
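A minimal way to try it from a page’s code-behind (LayoutRoot and img are just placeholder names for whatever is in your page):

// an untouched WriteableBitmap renders as transparent, not black
WriteableBitmap blank = new WriteableBitmap(100, 100);
// every entry of blank.Pixels is 0, which means alpha = 0
Image img = new Image();
img.Source = blank;
// put it over a colored background and the background shows through
LayoutRoot.Children.Add(img);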

 

Transparency

Why would we ever want transparency when we are processing images?  Perhaps we want to highlight a point of interest.  You could leave the original image intact and just place a semi-transparent image over the top of it and let Silverlight blend them for you.  Regardless of the need, here’s how you do it.

If we were to create an image with a blue background covered with a semi-transparent (say, alpha = 128) white square, this is the output we would expect:

[Figure: the expected result, a blue background under a semi-transparent white square]
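As a quick sanity check, standard source-over blending (which is what Silverlight is doing for us here) predicts the color of the square:

result = foreground * (alpha / 255) + background * (1 - alpha / 255)
red    = 255 * 0.5 + 0 * 0.5   ≈ 128
green  = 255 * 0.5 + 0 * 0.5   ≈ 128
blue   = 255 * 0.5 + 255 * 0.5 = 255

So the square should come out a light blue of roughly (128, 128, 255).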

If you remember, the MSDN documentation referenced in the last lesson noted that the pixel format is premultiplied ARGB32.  This means each RGB value has already been multiplied by the alpha value (scaled to the range 0 to 1) before the pixel is stored.
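In code, premultiplying a single color looks something like the sketch below (the helper name is mine; truncating to byte is one reasonable choice, rounding is another):

// premultiply one ARGB color: scale each color channel by alpha/255
// before packing; the alpha channel itself is stored unscaled
int PremultiplyArgb(byte alpha, byte red, byte green, byte blue)
{
    double scale = alpha / 255.0;
    return (alpha << 24)
        | ((byte)(red * scale) << 16)
        | ((byte)(green * scale) << 8)
        | (byte)(blue * scale);
}

// e.g. white at half opacity, PremultiplyArgb(128, 255, 255, 255),
// stores channels of roughly (128, 128, 128, 128) instead of (128, 255, 255, 255)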

The Code

Now we are going to try to recreate the blue color above using two different techniques.  One will be straight RGB (i.e., not premultiplied) and the other will use premultiplied alpha.

1.) First, let’s set up a couple of blue rectangles to test.

<Grid x:Name="LayoutRoot"
      Background="White">
    <Grid.RowDefinitions>
        <RowDefinition />
        <RowDefinition />
    </Grid.RowDefinitions>
    <Rectangle
        Width="300"
        Height="300"
        Fill="Blue" />
    <Rectangle
        Grid.Row="1"
        Width="300"
        Height="300"
        Fill="Blue" />
</Grid>

 

Running this simple program should produce:

 

[Figure: the two solid blue rectangles, one per row]

 

2.) Now we want to create a swatch for comparing the techniques.  Add a New Item, select Windows Phone User Control, and call it “Swatch”:

[Figure: the Add New Item dialog with Windows Phone User Control selected and the name “Swatch”]

 

3.) Replace Swatch’s default <Grid> with this:

<Canvas Width="100" Height="100">
    <Rectangle
        Fill="Blue"
        Width="100"
        Height="100"
        Stroke="Black"
        StrokeThickness="2" />
    <Rectangle
        Fill="White"
        Opacity=".5"
        Width="100"
        Height="100"
        Stroke="Black"
        StrokeThickness="2" />
</Canvas>

All we are doing is overlaying a blue rectangle with a semi-transparent (Opacity=.5) white rectangle.

4.) Add Swatch to the MainPage.xaml for comparison:

<Grid x:Name="LayoutRoot"
      Background="White">
    <Grid.RowDefinitions>
        <RowDefinition />
        <RowDefinition />
    </Grid.RowDefinitions>
    <Rectangle
        Width="300"
        Height="300"
        Fill="Blue" />
    <Rectangle
        Grid.Row="1"
        Width="300"
        Height="300"
        Fill="Blue" />
    <local:Swatch x:Name="SwatchControl"
        Margin="190,0,190,-50"
        VerticalAlignment="Bottom" />
</Grid>

The output should look something like this:

[Figure: the two blue rectangles with the Swatch control at the bottom]

 

5.) Add the two overlay images:

<Grid x:Name="LayoutRoot"
      Background="White">
    <Grid.RowDefinitions>
        <RowDefinition />
        <RowDefinition />
    </Grid.RowDefinitions>
    <Rectangle
        Width="300"
        Height="300"
        Fill="Blue" />
    <Image x:Name="StraightRGB"
        Grid.Row="0"
        Width="300"
        Height="300" />
    <Rectangle
        Grid.Row="1"
        Width="300"
        Height="300"
        Fill="Blue" />
    <Image x:Name="PremultipliedAlpha"
        Grid.Row="1"
        Width="300"
        Height="300" />
    <local:Swatch x:Name="SwatchControl"
        Margin="190,0,190,-50"
        VerticalAlignment="Bottom" />
</Grid>

The output will be the same as above because we haven’t actually set the source for these images.

6.) Finally, the code-behind:

private void AcquireImageButton_Click(object sender, EventArgs e)
{
    WriteableBitmap straightRGBImage =
        new WriteableBitmap(300, 300);

    WriteableBitmap premultipliedAlphaImage =
        new WriteableBitmap(300, 300);

    for (int pixelIndex = 0; pixelIndex < 90000; pixelIndex++)
    {
        byte alpha = 128;
        byte red = 255;
        byte green = 255;
        byte blue = 255;

        double scaleAlpha = alpha / 255.0;

        // we are not using scaleAlpha here
        straightRGBImage.Pixels[pixelIndex] =
                (alpha << 24)
                | (red << 16)
                | (green << 8)
                | blue;

        // notice the alpha value is NOT scaled
        // it's also very important to scale BEFORE
        // shifting the values
        premultipliedAlphaImage.Pixels[pixelIndex] =
                (alpha << 24)
                | ((byte)(red * scaleAlpha) << 16)
                | ((byte)(green * scaleAlpha) << 8)
                | (byte)(blue * scaleAlpha);
    }

    StraightRGB.Source = straightRGBImage;
    PremultipliedAlpha.Source = premultipliedAlphaImage;
}

 

Output

Launch the program and tap the acquire button.  I’ll let you guess which one is right.

[Figure: the rendered output, with the straight RGB overlay, the premultiplied alpha overlay, and the Swatch for comparison]

 

Summary

Premultiplied alpha is pretty straightforward, but it will cost performance in tight loops.  In most cases we won’t have to worry about this because the images we typically deal with don’t have transparencies.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2004%20-%20PremultipliedAlpha.zip

Up next: A Brief History of Color





Phone Vision – 03 Encoding Color

10 01 2011

Last time we learned how to extract the individual color values from a WriteableBitmap.  While this allows us to analyze the image, we haven’t modified it in any way.  If we want to modify the color of a pixel in a meaningful fashion, we need to revisit the pixel format for a WriteableBitmap.

 

Encoding Color Components

Recall that the format for a pixel is:

[ alpha (bits 31-24) | red (bits 23-16) | green (bits 15-8) | blue (bits 7-0) ]

The assumption here is that we have the separated alpha, red, green, and blue components, but we want to combine them into the ARGB32 pixel format.  So, how do we encode those colors properly?  We can simply left-shift each component value by the correct number of bits and then add them together.  Huh?

What we really want to do is something like this:

[ alpha << 24 ]   AA 00 00 00
+
[ red << 16 ]     00 RR 00 00
+
[ green << 8 ]    00 00 GG 00
+
[ blue ]          00 00 00 BB
=
[ ARGB32 pixel ]  AA RR GG BB

Another Bitwise Operator

Left Shift

How can we shift the bits into their correct positions?  I bet you know where I’m headed with this – the << (LEFT SHIFT) operator:

operator   input   shift   output
<<         1       1       10
<<         1       2       100
<<         1       3       1000
<<         1       4       10000
(the input and output columns are written in binary)

It should be relatively easy to see that the appropriate shifts are:

component   shift
blue        0
green       8
red         16
alpha       24

Using our newfound knowledge, we can write the encoding as:

int pixel = (alpha << 24) + (red << 16) + (green << 8) + blue;

Cake.

 

A Note About Or

Above we used simple addition to combine the shifted color values; however, the traditional technique for combining them is the | (OR) operator.  Here is the table of outputs for |:

operator   bit one   bit two   output
|          0         0         0
|          0         1         1
|          1         0         1
|          1         1         1

How can we use this?  Well, if we | a bit with 0 we get the value of the original bit (i.e. 1|0=1 and 0|0=0).  Since the shifted components never overlap in their set bits, | and + give identical results here, which means we can get the exact same pixel with this line of code:

int pixel = (alpha << 24) | (red << 16) | (green << 8) | blue;

Why would we want to do this?  My gut says: performance.  In my experimentation, however, it turns out to be pretty minor.  For 100,000,000,000 iterations (that’s 100 billion) it took 17 minutes 48 seconds for the + operation and 17 minutes 33 seconds for the | operation (on my dev machine).  So yes, the | operator is slightly more efficient, but barely.

I use the | version in my code, because image processing is intense and even small performance gains add up when you are looping through hundreds of thousands of pixels.
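If you want to try a comparison like that yourself, a Stopwatch loop along these lines is one way to do it (a rough sketch, not the exact code I used; the XOR into sink is just there to keep the compiler from optimizing the loop away, and your numbers will differ):

var sw = System.Diagnostics.Stopwatch.StartNew();
int sink = 0;
byte alpha = 255, red = 12, green = 34, blue = 56;
for (long i = 0; i < 1000000000L; i++)   // one billion iterations
{
    sink ^= (alpha << 24) + (red << 16) + (green << 8) + blue;
}
sw.Stop();
System.Diagnostics.Debug.WriteLine("+ took " + sw.Elapsed);

sw = System.Diagnostics.Stopwatch.StartNew();
for (long i = 0; i < 1000000000L; i++)
{
    sink ^= (alpha << 24) | (red << 16) | (green << 8) | blue;
}
sw.Stop();
System.Diagnostics.Debug.WriteLine("| took " + sw.Elapsed + " (sink = " + sink + ")");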

 

Transparency

It’s important to note that the above code only works correctly for alpha = 255. Since we are working with images this is not typically an issue.  In the next episode we will discuss how to handle transparency.

Summary

Encoding a color from its components is about as easy as we’d expect it to be, assuming you understand a few fundamentals of computer science.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2003%20-%20EncodingColors.zip

Up next: Premultiplied Alpha

 





Phone Vision – 02 Extracting Color

6 01 2011

Last time we set up a very simple project that will allow us to display and manipulate image data using a WriteableBitmap.  Before we can do anything cool though, we need to understand a few things about the WriteableBitmap class.

Pixel Collection

The WriteableBitmap has an integer array called Pixels that represents the 2D texture of the bitmap.  In most cases we will be looping through Pixels (using PixelWidth and PixelHeight) to perform our operations.  You can find the data for a pixel located at (x,y) in the WriteableBitmap wb with:

int color = wb.Pixels[x + y * wb.PixelWidth];

Without knowing how the color is stored we really can’t do much with this data.  Let’s take a peek under the hood.

Pixel Format

The format used by the Silverlight WriteableBitmap is ARGB32 (premultiplied RGB – we’ll cover this later).  This means that the color is represented by a 32-bit integer with the following format:

[ alpha (bits 31-24) | red (bits 23-16) | green (bits 15-8) | blue (bits 7-0) ]

As you can see, each channel (alpha, red, green, and blue) uses 8 bits, giving a possible 2^8, or 256, intensities for each (0-255).

 

Extracting Color Components

Extracting each component intensity can be done by masking the color value with 0xFF and then shifting the number right by 8 bits to line up the next component.  The heck you say?

Bitwise Operators

And

As their name implies, bitwise operators operate on one bit at a time.  The operator we are concerned with for now is & (AND).

operator   bit one   bit two   output
&          0         0         0
&          0         1         0
&          1         0         0
&          1         1         1

Notice that with the & operator, whenever a bit is 0 the output bit is 0.  Wherever the mask has a 1, the result is the value of the other bit.  This trait allows us to create what is called a mask (or bitmask).  We cover the bits we don’t care about with 0.  So the mask 0xFF above in binary is…

0000 0000 0000 0000 0000 0000 1111 1111

& this with the pixel and we get the value of the last 8 bits of the color.

Let’s work with an example.  I randomly picked #FFDB91D6.  #FFDB91D6 in binary:

1111 1111 1101 1011 1001 0001 1101 0110   (0xFFDB91D6)

&

0000 0000 0000 0000 0000 0000 1111 1111   (0x000000FF)

=

0000 0000 0000 0000 0000 0000 1101 0110   (0xD6, the blue channel)

 

Voila.

  

 

Right Shift

 

If we were to use the & alone we could only recover the values for the blue channel.  If only we could shift the color bits to match our mask…   Turns out there is another handy bitwise operator for doing just that: >> (RIGHT SHIFT)

 

operator   input      shift   output
>>         10001000   1       01000100
>>         10001000   2       00100010
>>         10001000   3       00010001
>>         10001000   4       00001000

 

Notice how the bits shift right?  This is equivalent to dividing by powers of 2.  Don’t let the leading zeros confuse you.  If you divide 1120 by 10 (in base 10) you get 0112.  If you divide again you get 0011 (drop the remainder).  See?

 

1111 1111 1101 1011 1001 0001 1101 0110

>> 8

=

0000 0000 1111 1111 1101 1011 1001 0001

(Strictly speaking, C# sign-extends when right-shifting a negative int, so the high bits come in as 1s rather than 0s, but the 0xFF mask discards them anyway.)

 

Now the green bits match our mask.  Hooray!

 

Full Example

 

Let’s run through our example from above (#FFDB91D6) in its entirety:

 

1.) First, let’s get the blue bits:

0xFFDB91D6 & 0x000000FF = 0x000000D6  (blue = 214)

2.) Shift the bits right 8 places:

0xFFDB91D6 >> 8 = 0x00FFDB91  (again ignoring the sign-extension bits, which the masks discard)

3.) Recover the green bits:

0x00FFDB91 & 0x000000FF = 0x00000091  (green = 145)

4.) Shift the bits right another 8 places:

0x00FFDB91 >> 8 = 0x0000FFDB

5.) Extract the red bits:

0x0000FFDB & 0x000000FF = 0x000000DB  (red = 219)

6.) Last shift!!!

0x0000FFDB >> 8 = 0x000000FF

7.) Extract the alpha bits:

0x000000FF  (alpha = 255)

 

 

We don’t have to mask the last value because it’s already exactly what we need.  Let’s see how this plays out in code.

 

int pixel = unchecked((int)0xFFDB91D6);  // unchecked: the constant overflows a signed int
byte blue = (byte)(pixel & 0xFF);
pixel >>= 8;
byte green = (byte)(pixel & 0xFF);
pixel >>= 8;
byte red = (byte)(pixel & 0xFF);
pixel >>= 8;
byte alpha = (byte)(pixel);

Quite elegant and very efficient.

 

Summary

Recovering colors from a WriteableBitmap is pretty straightforward once you understand the pixel structure and how to mask and shift it.

Download Code

http://cid-88e82fb27d609ced.office.live.com/embedicon.aspx/Blog%20Files/PhoneVision/PhoneVision%2002%20-%20ExtractingColors.zip

Note:  This code doesn’t actually modify the image in any way.  We will do that in the next installment.

 

Up next: Encoding Color