Color Vision 1.0 Live!

28 02 2011

Can’t tell if your socks match? Wonder what it’s like to be colorblind? Color Vision lets you do just that. Head to work confident knowing your outfit is coordinated or simply explore life through the eyes of those whose cones have been shortchanged. Use Color Vision to mimic colorblindness, identify colors, or adjust them to reveal the world.

Color Vision went live on the Windows Phone 7 Marketplace this weekend.  This app isn’t just awesome.  It’s free!

Learn more or Download it.

Phone Vision 06 – RGB Color Intensities

21 01 2011

I enjoyed our last blog’s bit of history, but how can we make something more useful than a spinning top?  It turns out that the fundamental principle Maxwell described for color photography is still in use today.  Though the medium is different, for all intents and purposes we are still taking three pictures – one with a red filter, one with a green filter, and one with a blue filter.  Those images are then combined to form a color photograph. 

Color Intensity

When we take a picture, we are just measuring the intensity of light at a given point.  With a sensor alone we would only be capable of creating a map of intensities.  This is definitely useful in some situations, but it ignores oodles of information humans can decipher naturally – color.

Using a filter that only lets through a specific color, we  can measure the intensity of that color.  Remember, we need at least three distinct colors to create the gamut of colors our eyes can distinguish.  To capture three different colors we need three different filters.

In the tender, early years of color photography, three separate pictures had to be taken.  I can’t imagine it was easy to capture a moving target.  Some enterprising people came up with a way to split the light into the three different wavelengths and then aim those beams at three separate light sensitive plates.  Clever as it may be, it can also be expensive and bulky.  There has to be a better way.

Bayer Filter

Conceptually, this next idea is very simple.  From a manufacturing standpoint, I’m not so certain.   Regardless, most modern digital cameras use a single image sensor covered by a grid of color filters arranged in what’s called a Bayer pattern (named after its inventor Bryce Bayer).


The astute observer might notice that there are twice as many green filters as there are red or blue.  Wikipedia says this is because humans have a heck of a lot of rod cells (light sensing) and they are most sensitive to green light (~498 nm if you care).

In its raw form, the output of the camera above could be thought of as 3 separate images, each with missing information:



Converting this raw data into a human friendly image is a process called demosaicing.  There are lots and lots and lots of ways to accomplish this.  They range from the very simple – like treating a 2×2 group as one pixel by averaging the green values – to a little more complex – like filling in the gaps by averaging neighboring pixels – to way beyond the scope of this blog.
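To make the simplest of those approaches concrete, here is a quick sketch (in Python, since that is easy to experiment with outside the phone; the data and function names are mine, not from the app).  It collapses each 2×2 Bayer cell into one RGB pixel by keeping the red and blue samples and averaging the two greens, using the same G B / R G cell layout as the code later in this post:

```python
# A toy 4x4 Bayer mosaic laid out as in this series:
# G B / R G in every 2x2 cell. Each entry is a single
# measured intensity (0-255) from behind one color filter.
mosaic = [
    [10, 200, 30, 220],     # G B G B
    [100, 20, 120, 40],     # R G R G
    [50, 240, 70, 255],     # G B G B
    [140, 60, 160, 80],     # R G R G
]

def downsample_demosaic(mosaic):
    """Collapse each 2x2 Bayer cell into one (r, g, b) pixel,
    averaging the cell's two green samples."""
    out = []
    for y in range(0, len(mosaic), 2):
        row = []
        for x in range(0, len(mosaic[0]), 2):
            g1 = mosaic[y][x]          # upper-left green
            b = mosaic[y][x + 1]       # upper-right blue
            r = mosaic[y + 1][x]       # lower-left red
            g2 = mosaic[y + 1][x + 1]  # lower-right green
            row.append((r, (g1 + g2) // 2, b))
        out.append(row)
    return out

print(downsample_demosaic(mosaic))
# -> a 2x2 image; the first pixel is (100, 15, 200)
```

Note the trade-off: this is trivial to implement, but the result is half the width and half the height of the sensor.  The neighbor-averaging approach later in this post keeps the full resolution.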

Artificial Bayer Filter

The images we deal with are already demosaiced.  For this lesson we want to mess around with a raw image, but we don’t have one.  With that in mind, we are just going to have to “de-demosaic” an image.  For the color [255,128,64], the “de-demosaic” transform looks something like this:



Notice that our width will be twice the original width and our height will be twice the original height.  This will quadruple the size of the image.  Another important piece is that the original coordinate (x,y) becomes 4 separate coordinates in the new image: green at (2x, 2y), blue at (2x+1, 2y), red at (2x, 2y+1), and a second green at (2x+1, 2y+1).
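The masking half of the transform is easy to prototype outside of Silverlight.  Here is a small Python sketch (an illustration only; the function name is mine) showing what happens to a single packed 0xAARRGGBB pixel, using the same masks as the C# code below:

```python
def to_bayer_cell(pixel):
    """Expand one packed 0xAARRGGBB pixel into the four packed
    values written at (2x,2y), (2x+1,2y), (2x,2y+1), (2x+1,2y+1).
    Each mask keeps alpha plus exactly one color channel."""
    g1 = pixel & 0xFF00FF00   # upper-left green
    b = pixel & 0xFF0000FF    # upper-right blue
    r = pixel & 0xFFFF0000    # lower-left red
    g2 = pixel & 0xFF00FF00   # lower-right green
    return g1, b, r, g2

# The article's example color [255, 128, 64] with full alpha:
pixel = (0xFF << 24) | (255 << 16) | (128 << 8) | 64   # 0xFFFF8040
print([hex(v) for v in to_bayer_cell(pixel)])
# -> ['0xff008000', '0xff000040', '0xffff0000', '0xff008000']
```

Each of the four outputs carries only one of the original color intensities, which is exactly what a real sensor site behind a single color filter would record.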


Let’s see what we can do.

//This assumes you already have the image in
//WriteableBitmap form. Refer to earlier lessons
//if you're not sure how to do this.
WriteableBitmap ToBayerRaw(WriteableBitmap bmp)
{
    //we need to double the size of the target bitmap
    WriteableBitmap bayerRaw =
        new WriteableBitmap(bmp.PixelWidth * 2, bmp.PixelHeight * 2);

    //we loop through every column, row by row.
    //the two loops are not necessary, but
    //they demonstrate the concept more clearly
    //than a single loop
    for (int y = 0; y < bmp.PixelHeight; y++)
    {
        for (int x = 0; x < bmp.PixelWidth; x++)
        {
            //first recover the RGB pixel data
            int pixel = bmp.Pixels[x + y * bmp.PixelWidth];

            // remember that (x,y) translates to 4 coordinates
            // g1 = (2x, 2y)
            // b  = (2x+1, 2y)
            // r  = (2x, 2y+1)
            // g2 = (2x+1, 2y+1)
            // also note that we are using the new pixel width
            // as our offset
            // notice the masks in use as well

            //upper left green
            bayerRaw.Pixels
                [2 * x + 2 * y * (bayerRaw.PixelWidth)]
                = (int)(pixel & 0xFF00FF00);

            //upper right blue
            bayerRaw.Pixels
                [2 * x + 1 + 2 * y * (bayerRaw.PixelWidth)]
                = (int)(pixel & 0xFF0000FF);

            //lower left red
            bayerRaw.Pixels
                [2 * x + (2 * y + 1) * (bayerRaw.PixelWidth)]
                = (int)(pixel & 0xFFFF0000);

            //lower right green
            bayerRaw.Pixels
                [2 * x + 1 + (2 * y + 1) * (bayerRaw.PixelWidth)]
                = (int)(pixel & 0xFF00FF00);
        }
    }

    return bayerRaw;
}





Since half of the pixels are green and only a quarter are red and a quarter are blue, we would expect a green tint.  Matrixesque, am I right?


Now that we have a raw file to work with, let’s see if we can fill in some of the “gaps” we’ve created.  There are quite a few established techniques to accomplish this, but we are going to ignore them and try to figure it out on our own.  Let’s start with the red pixels:


The first thing I notice is that some of the missing pixels have immediate neighbors with values, so let’s tackle those first.  My gut says that averaging them would yield a decent approximation:


This gives us values for 8 of the missing pixels, leaving 4 behind:


If we ran through the image one more time, we could fill in the remaining gaps.  On the second run the new top, bottom, left, and right values would determine the new value of the missing pixels:


Wait a minute… Top, bottom, left and right were also the average of the original pixels.  Can we use that?  Let’s label a set of original pixels w, x, y, and z.


That means that top, bottom, left, and right are defined as:

    top = (w + x) / 2
    bottom = (y + z) / 2
    left = (w + y) / 2
    right = (x + z) / 2

So the value we compute on the second pass is:

    (top + bottom + left + right) / 4

This simplifies rather nicely (I left out a couple steps):

    (w + x + y + z) / 4

What I like to call the average of the original four pixels.  This is intuitive for a lot of us, but it’s nice to have some math to back it up.
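If you would rather let a computer do the algebra, a few lines of Python (purely an illustration, not part of the phone app) confirm that averaging the four interpolated neighbors always equals averaging the four original pixels:

```python
import random

def interpolated_average(w, x, y, z):
    # top, bottom, left, and right are themselves averages
    # of the original corner pixels w, x, y, z
    top = (w + x) / 2
    bottom = (y + z) / 2
    left = (w + y) / 2
    right = (x + z) / 2
    return (top + bottom + left + right) / 4

def direct_average(w, x, y, z):
    return (w + x + y + z) / 4

# Try a thousand random pixel neighborhoods; the two
# formulas should never disagree.
for _ in range(1000):
    samples = [random.randint(0, 255) for _ in range(4)]
    assert interpolated_average(*samples) == direct_average(*samples)
print("the two averages always agree")
```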

Putting it all Together

WriteableBitmap Demosaic(WriteableBitmap bmp)
{
    WriteableBitmap interpolatedBmp =
        new WriteableBitmap(bmp.PixelWidth, bmp.PixelHeight);

    // we are going to cheat and ignore the boundaries
    // dealing with the boundaries is not difficult,
    // but the code is long enough as it is
    for (int y = 1; y < bmp.PixelHeight - 1; y++)
    {
        for (int x = 1; x < bmp.PixelWidth - 1; x++)
        {
            //first we are going to recover the pixel neighborhood.
            //the mask at the end is simply to get rid of the alpha

            //middlecenter is the pixel we are working with (x,y)
            int topleft =
                bmp.Pixels[x - 1 + (y - 1) * bmp.PixelWidth] & 0xFFFFFF;
            int topcenter =
                bmp.Pixels[x + (y - 1) * bmp.PixelWidth] & 0xFFFFFF;
            int topright =
                bmp.Pixels[x + 1 + (y - 1) * bmp.PixelWidth] & 0xFFFFFF;

            int middleleft =
                bmp.Pixels[x - 1 + y * bmp.PixelWidth] & 0xFFFFFF;
            int middlecenter =
                bmp.Pixels[x + y * bmp.PixelWidth] & 0xFFFFFF;
            int middleright =
                bmp.Pixels[x + 1 + y * bmp.PixelWidth] & 0xFFFFFF;

            int bottomleft =
                bmp.Pixels[x - 1 + (y + 1) * bmp.PixelWidth] & 0xFFFFFF;
            int bottomcenter =
                bmp.Pixels[x + (y + 1) * bmp.PixelWidth] & 0xFFFFFF;
            int bottomright =
                bmp.Pixels[x + 1 + (y + 1) * bmp.PixelWidth] & 0xFFFFFF;

            int blue = 0;
            int red = 0;
            int green = 0;

            // if we are on an even row and an even column
            // like (2, 2) then we are on a green pixel
            // for red we average top center and bottom center
            // for blue we average left middle and right middle
            if (y % 2 == 0 && x % 2 == 0)
            {
                red = (((topcenter >> 16) +
                    (bottomcenter >> 16)) / 2) << 16;
                blue = (middleleft + middleright) / 2;
                green = middlecenter;
            }
            // if we are on an even row and an odd column
            // like (2, 1) then we are on a blue pixel
            // red is an average of top left, top right,
            // bottom left, and bottom right
            // green is top center, bottom center,
            // left middle, and right middle
            else if (y % 2 == 0 && x % 2 == 1)
            {
                red = (((topleft >> 16) +
                    (topright >> 16) +
                    (bottomleft >> 16) +
                    (bottomright >> 16)) / 4) << 16;
                blue = middlecenter;
                green = (((topcenter >> 8) +
                    (bottomcenter >> 8) +
                    (middleleft >> 8) +
                    (middleright >> 8)) / 4) << 8;
            }
            // if we are on an odd row and an even column
            // like (1, 2) then we are on a red pixel
            // blue is an average of top left, top right,
            // bottom left, and bottom right
            // green is top center, bottom center,
            // left middle, and right middle
            else if (y % 2 == 1 && x % 2 == 0)
            {
                red = middlecenter;
                blue = (topleft +
                    topright +
                    bottomleft +
                    bottomright) / 4;
                green = (((topcenter >> 8) +
                    (bottomcenter >> 8) +
                    (middleleft >> 8) +
                    (middleright >> 8)) / 4) << 8;
            }
            // otherwise we are on an odd row and odd column
            // like (1,1). this is a green pixel
            // red is left middle + right middle
            // blue is top center + bottom center
            else
            {
                red = (((middleleft >> 16) +
                    (middleright >> 16)) / 2) << 16;
                blue = (topcenter + bottomcenter) / 2;
                green = middlecenter;
            }

            interpolatedBmp.Pixels[x + y * interpolatedBmp.PixelWidth]
                = (255 << 24) | red | green | blue;
        }
    }

    return interpolatedBmp;
}



With a little luck, our image should look pretty close to the original.




On the left you’ll find the original, followed by the Bayer-filtered image in the middle and, finally, the demosaiced result on the right.  I think we did alright, eh?  This image is large, much larger than the viewing space.  If, however, you were to perform the same procedure on an image that is smaller than the viewing space, you would most likely see artifacts from the resizing.


Hopefully you learned a little about how cameras actually work and maybe a little math.  It wasn’t too hard, but trying to keep all those colors straight can make the code messy.

Download Code

Up next: Grayscale

follow @azzlsoft

Phone Vision 05 – A Brief History of Color

18 01 2011

You should have a pretty solid grasp of how to effectively manipulate the pixels in an image.  I guess now we should talk a little bit about color.  Why, after all, do we pick red, green, and blue as the primary colors from which to build things?



  • 1666 – Isaac Newton demonstrates white light is a combination of colors.
  • 1802 – Thomas Young postulates the existence of three types of photoreceptors in human eyes.
  • 1849 – James David Forbes experiments with rapidly spinning tops creating the illusion of a single color out of a mixture.
  • 1850 – Hermann von Helmholtz classifies human color receptors as short, medium, and long, corresponding to the wavelengths they are sensitive to.
  • 1861 – James Clerk Maxwell demonstrates a color photographic process using red, green, and blue light.

It turns out that the long, medium, and short photoreceptors (cones) in our eyes roughly correspond to red, green, and blue, respectively.  When emitting light, these three colors can be combined in various ways to produce all of the colors most humans are capable of seeing.  There are several color spaces we can use to generate human-friendly colors, but today we will focus on RGB.


The Forbes Top
(I made this name up)

Clearly, techniques for capturing and displaying color have been refined over the last few centuries.   We now enjoy color televisions, LCD screens, web cams, and so-called ‘retina displays’ to name a few.  Today, we are going to take a step back and try to simulate the rapidly spinning tops that James Forbes first experimented with.  The results will be somewhat crude, but they should demonstrate the concepts.

The most important component of this application is actually the design of the colorized circle that will represent our top.  I tried several different approaches before settling on 36 different slices (12 per color) with the colors alternating.  Coupling this with an adjustable rotation, an illusion of color can be established.
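The arithmetic behind those slices is simple enough to sketch in a few lines of Python (the names here are my own, for illustration; the app itself does this in C#): each color gets a share of the full circle proportional to its slider value, spread evenly across its 12 interleaved slices.

```python
import math

def slice_angles(red, green, blue, slices_per_color=12):
    """Return the sweep angle (radians) of one slice of each color.
    Each color's slices together cover a share of 2*pi proportional
    to that color's slider value."""
    total = red + green + blue
    return {
        "red": 2 * math.pi * (red / total) / slices_per_color,
        "green": 2 * math.pi * (green / total) / slices_per_color,
        "blue": 2 * math.pi * (blue / total) / slices_per_color,
    }

# The example below: red = 0, blue = 255, green = 128.
angles = slice_angles(0, 128, 255)

# All 36 slices together should sweep exactly one full circle.
full_sweep = 12 * (angles["red"] + angles["green"] + angles["blue"])
print(round(full_sweep / (2 * math.pi), 6))  # -> 1.0
```

With red at 0, its slices collapse to zero width and the disc is drawn entirely from alternating green and blue wedges, which is what makes the spinning illusion work.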

Here is the final product with red = 0, blue = 255, and green = 128:



My attempts at a screen capture video failed.  You’ll just have to download the code and try it yourself.  The only tricky component is actually drawing the wheel so I will walk through that piece.  As always, the rest of the code will be downloadable below.


Draw a Pie

private void DrawPie()
{
    //drawing board is just a canvas
    //let's start with a clean slate
    DrawingBoard.Children.Clear();

    //red, green, and blue are member
    //variables adjusted with sliders
    double colorsum = red + green + blue;

    // a circle has 360 degrees or 2*Pi radians
    // we need to divide that into the different color
    // components.
    double redAngle = 2 * Math.PI * (red / colorsum);
    double greenAngle = 2 * Math.PI * (green / colorsum);
    double blueAngle = 2 * Math.PI * (blue / colorsum);

    // this part is tricky
    // I am essentially doing a gamma function
    // to try to keep the intensities close
    // I eyeballed it so it's not 100% accurate
    byte intensity = (byte)Math.Min(
        Math.Pow((red * .33
            + green * .34
            + blue * .33), 1.2), 255);

    Point start = new Point(0, 100);

    // we are going to divide our top
    // into 36 slices -- 12 per color
    const int numSlices = 36;
    for (int i = 0; i < numSlices; i++)
    {
        SolidColorBrush brush = new SolidColorBrush(Colors.Black);
        double angle = 0;
        switch (i % 3)
        {
            case 0: //red
                angle = redAngle / (numSlices / 3);
                brush = new SolidColorBrush(
                    Color.FromArgb(255, intensity, 0, 0));
                break;
            case 1: //green
                angle = greenAngle / (numSlices / 3);
                brush = new SolidColorBrush(
                    Color.FromArgb(255, 0, intensity, 0));
                break;
            case 2: //blue
                angle = blueAngle / (numSlices / 3);
                brush = new SolidColorBrush(
                    Color.FromArgb(255, 0, 0, intensity));
                break;
        }

        // this code was essentially lifted from the Silverlight forums
        // Yi-Lun Luo posted it as a method for drawing pie graphs
        // I modified it for the color top
        // I will warn you that if your angle is > 90 degrees
        // you will most likely run into trouble
        Path path = new Path();
        PathGeometry geometry = new PathGeometry();
        PathFigure figure = new PathFigure();
        // The center point is 0,0.
        figure.StartPoint = new Point(0, 0);
        LineSegment line1 = new LineSegment();
        // Draw a line from the center of the circle to the start
        // point of the sector's arc.
        line1.Point = start;
        figure.Segments = new PathSegmentCollection();
        figure.Segments.Add(line1);

        ArcSegment arc = new ArcSegment();
        arc.Size = new Size(100, 100);
        arc.SweepDirection = SweepDirection.Clockwise;
        Point p = start;
        // Perform a rotate on the start point of the arc
        // to compute the end point.
        p.X = start.X * Math.Cos(angle) - start.Y * Math.Sin(angle);
        p.Y = start.X * Math.Sin(angle) + start.Y * Math.Cos(angle);
        // The next sector's arc will begin from this one's end point.
        start = p;
        arc.Point = p;
        figure.Segments.Add(arc);

        LineSegment line2 = new LineSegment();
        // Draw another line from the sector's end point
        // to the center of the circle.
        line2.Point = new Point(0, 0);
        figure.Segments.Add(line2);

        geometry.Figures = new PathFigureCollection();
        geometry.Figures.Add(figure);

        path.Data = geometry;
        // Fill this sector with its color.
        path.Fill = brush;

        // Since our circle ranges from -100,-100 to 100,100,
        // we'll set the Path's Left and Top to show all the graph.
        path.SetValue(Canvas.LeftProperty, 100.0);
        path.SetValue(Canvas.TopProperty, 100.0);
        DrawingBoard.Children.Add(path);
    }
}


The most useful piece from this lesson might be the code for drawing a pie graph, but a bit of history is always fun.  The bottom line is that RGB comes from biology.  Our eyes are sensitive to those colors, so if we mix them we should be able to create images that mimic real life.


Download Code


Up Next: RGB Color Intensities