What is a pixel?

Originally posted to Shawn Hargreaves Blog on MSDN, Tuesday, August 25, 2009

I wanted to write about premultiplied alpha, but realized I need to explain some things about filtering first. And then I realized that to properly explain filtering, I need to start with an even more fundamental theoretical question: what is a pixel?

From Wikipedia:

"A pixel is the smallest item of information in an image. Pixels are normally arranged in a 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide more-accurate representations of the original."

That's not a bad definition, but it's a little vague, especially the "often represented using dots or squares" part. Trouble is, when we try to make this more precise, we realize there are actually several different kinds of pixels in common use!

  1. A pixel is a tiny square. Images are grids of tightly packed pixels, with no gaps or overlap between adjacent squares.

  2. A pixel is a geometric point of zero size. When a continuously varying analog source image is converted to digital format, its color is sampled at each pixel location. Color values from in between these locations are not recorded.

  3. A pixel is a circular dot with soft edges. These dots may or may not be larger than the pixel grid spacing: if they are larger, adjacent pixels will overlap. When digitizing an image, all colors within the pixel region are combined, using a falloff curve that weights their contribution according to distance from the pixel center.

Which version is true?

  1. This matches what we see in programs like Photoshop, where we can zoom into an image and see the pixels scale up to large squares. It also matches what we are used to from early computers and game machines, where low resolutions had large and blocky pixels. But it's not really true in most cases today. 

  2. This matches how mathematicians like to view the world, but no actual hardware works like this.

  3. Digital cameras, scanners, monitors, and printers all work this way, or some variant of it. Unfortunately there is no consistency about how much pixels overlap or what falloff curve is used. Some devices vary the concept even further: an LCD, for instance, uses separate dots with different center locations for the red, green, and blue color components. The sketch below shows what this kind of weighted sampling looks like in code.
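
To make #3 a little more concrete, here is a minimal Python sketch of that kind of weighted sampling. Everything in it is an illustrative assumption rather than any particular device's behavior: source stands in for the continuous analog image (any function mapping coordinates to a brightness), and the Gaussian falloff and sample counts are arbitrary choices, since as noted above real hardware doesn't agree on either.

    import math

    def sample_pixel(source, cx, cy, radius=1.0, steps=5):
        # Combine all the colors within a circular region around the pixel
        # center (cx, cy), weighting each contribution by a falloff curve
        # based on its distance from the center. 'source' is any function
        # (x, y) -> brightness standing in for the continuous analog image.
        total = 0.0
        weight_sum = 0.0
        for i in range(-steps, steps + 1):
            for j in range(-steps, steps + 1):
                dx = i * radius / steps
                dy = j * radius / steps
                dist = math.hypot(dx, dy)
                if dist > radius:
                    continue  # outside this pixel's circular region
                weight = math.exp(-(dist * dist) / (0.5 * radius * radius))
                total += weight * source(cx + dx, cy + dy)
                weight_sum += weight
        return total / weight_sum

    # Definition #2, by contrast, would sample just one point: source(cx, cy)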

Why does this matter?

Much of the time it does not. Many people write code assuming the mathematically elegant zero-sized pixels of #2, and this often works fine even though it doesn't truly match their hardware.

One time it does matter is when quantizing an image: converting from analog to digital, or shrinking a digital image to a lower resolution. Any time we do such things, we must choose how the new pixel values will be calculated, which requires a specific definition of exactly what a pixel is.
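
For instance, shrinking an image by an integer factor forces exactly that choice. This Python sketch (grayscale image stored as a plain list of rows; the function names and the integer-factor restriction are my simplifications) contrasts treating pixels as zero-sized points (#2) with treating them as little squares that get averaged (#1):

    def shrink_point(image, factor):
        # Treat pixels as zero-sized points (#2): keep every 'factor'-th
        # sample and simply discard everything in between.
        return [row[::factor] for row in image[::factor]]

    def shrink_average(image, factor):
        # Treat pixels as little squares (#1): each output pixel is the
        # average of the factor x factor block of source pixels it covers.
        h, w = len(image), len(image[0])
        out = []
        for y in range(0, h - factor + 1, factor):
            row = []
            for x in range(0, w - factor + 1, factor):
                block = [image[y + j][x + i]
                         for j in range(factor)
                         for i in range(factor)]
                row.append(sum(block) / len(block))
            out.append(row)
        return out

Point sampling throws away most of the source data, which is why shrinking that way tends to look noisy and aliased; averaging each covered block (or doing something similar with a gentler falloff curve) usually looks much better.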

Another time this matters is when scaling a digital image up to a higher resolution. In some ways this is impossible. We are trying to add more pixels, but these pixel values aren't stored in the source image, and we can't recreate missing data out of thin air! But this is too common and useful an operation to just throw our hands in the air and give up because of a minor technical issue like the problem being fundamentally unsolvable :-) So, we guess. The better we guess, the better the resulting scaled image will look. The more we know about how the source image was created, the more accurate a guess we can make, but in practice we usually know very little, as images don't typically include data describing what pixel falloff curves they were created from.
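
The most common guess is to pretend definition #2 is true and that the image varies linearly between the stored samples, which gives bilinear interpolation. Here is a rough Python sketch under that assumption (the function name and the grayscale list-of-rows format are mine, not anything standard):

    def scale_up_bilinear(image, new_w, new_h):
        # Guess the missing pixels by assuming the source samples are
        # zero-sized points (#2) and that brightness varies linearly
        # between them. A different pixel model would give a different,
        # equally defensible guess.
        h, w = len(image), len(image[0])
        out = []
        for y in range(new_h):
            sy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
            y0 = int(sy)
            y1 = min(y0 + 1, h - 1)
            fy = sy - y0
            row = []
            for x in range(new_w):
                sx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
                x0 = int(sx)
                x1 = min(x0 + 1, w - 1)
                fx = sx - x0
                top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
                bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
                row.append(top * (1 - fy) + bottom * fy)
            out.append(row)
        return out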

So we muddle through with no real standard for what a pixel actually is, and everybody just makes the best guesses they can. Or more often, ignores the question entirely...
