I wanted to write about premultiplied alpha, but realized I need to explain some things about filtering first. And then I realized that to properly explain filtering, I need to start with an even more fundamental theoretical question: what is a pixel?
From Wikipedia:
"A pixel is the smallest item of information in an image. Pixels are normally arranged in a 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide more-accurate representations of the original."
That's not a bad definition, but it's a little vague, especially the "often represented using dots or squares" part. Trouble is, when we try to make this more precise, we realize there are actually several different kinds of pixels in common use!
Which version is true?
Why does this matter?
Much of the time it does not. Many people write code assuming the mathematically elegant zero-sized pixels of #2, and this often works fine even though it doesn't truly match their hardware.
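To make that concrete, here is a tiny, hypothetical sketch (the function names and conventions are mine, not taken from any real API) of how the assumed pixel model leaks into everyday code when mapping a continuous coordinate to a pixel index:

```python
# Converting a continuous coordinate u in [0, 1] to a pixel index in a row
# of `width` pixels, under two different pixel models.

def index_point_sample(u: float, width: int) -> int:
    # Model #2: pixels are zero-sized samples at integer coordinates
    # 0, 1, ..., width - 1, so scale onto that range and round to the
    # nearest sample.
    return min(width - 1, max(0, int(u * (width - 1) + 0.5)))

def index_little_square(u: float, width: int) -> int:
    # "Little square" model: pixel i covers the interval [i, i + 1), so
    # scale by width and truncate to find which square contains u.
    return min(width - 1, max(0, int(u * width)))

for u in (0.0, 0.2, 0.5, 0.99):
    print(u, index_point_sample(u, 4), index_little_square(u, 4))
```

The two functions agree for many inputs, which is exactly why the mismatch often goes unnoticed.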
One time it does matter is when quantizing an image, whether by converting from analog to digital or by shrinking a digital image to a lower resolution. Any time we do such things, we must choose how the new pixel values will be calculated, and that requires a specific definition of exactly what a pixel is.
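As a rough illustration (assuming a 1-D grayscale row of floats, an integer shrink factor, and function names I made up), here is how two different pixel models suggest two different downscales:

```python
# Treating pixels as little squares suggests averaging the area each new
# pixel covers (a box filter); treating them as zero-sized samples suggests
# simply keeping every Nth sample.

def downscale_box(row, factor):
    # "Little square" model: each new pixel covers `factor` old squares,
    # so its value is their average.
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row) - factor + 1, factor)]

def downscale_point(row, factor):
    # Zero-sized sample model: the new grid's sample positions land on
    # every `factor`-th old sample, so we just keep those.
    return [row[i] for i in range(0, len(row) - factor + 1, factor)]

row = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # one-pixel checker pattern
print(downscale_box(row, 2))    # [0.5, 0.5, 0.5, 0.5] -- averages to gray
print(downscale_point(row, 2))  # [0.0, 0.0, 0.0, 0.0] -- aliases to solid black
```

Neither result is wrong in itself; the two functions simply encode different beliefs about what the original samples represent.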
Another time this matters is when scaling a digital image up to a higher resolution. In some ways this is impossible. We are trying to add more pixels, but these pixel values aren't stored in the source image, and we can't recreate missing data out of thin air! But this is too common and useful an operation to just throw our hands in the air and give up because of a minor technical issue like the problem being fundamentally unsolvable :-) So, we guess. The better we guess, the better the resulting scaled image will look. The more we know about how the source image was created, the more accurate a guess we can make. In practice, though, we usually know very little, since images don't typically include data describing what pixel falloff curves they were created from.
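As an illustration of what "guessing" looks like in practice, here is a small sketch of two common guesses for upscaling a 1-D row of samples by an integer factor (again, the names and conventions are mine): nearest-neighbour assumes the image is constant between samples, while linear interpolation assumes it ramps smoothly between them.

```python
# Neither guess is "correct" -- they just make different assumptions about
# the data we no longer have. Pixel centers are assumed to sit at integer
# coordinates 0, 1, ..., len(row) - 1.

def upscale_nearest(row, factor):
    out = []
    for i in range(len(row) * factor):
        u = i / factor                      # position on the old grid
        out.append(row[min(len(row) - 1, int(u + 0.5))])
    return out

def upscale_linear(row, factor):
    out = []
    for i in range(len(row) * factor):
        u = i / factor
        lo = min(len(row) - 1, int(u))
        hi = min(len(row) - 1, lo + 1)
        t = u - lo                          # fractional position between samples
        out.append(row[lo] * (1.0 - t) + row[hi] * t)
    return out

row = [0.0, 1.0, 0.0]
print(upscale_nearest(row, 2))  # [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print(upscale_linear(row, 2))   # [0.0, 0.5, 1.0, 0.5, 0.0, 0.0]
```

The choice between these (and fancier guesses like bicubic or Lanczos) is essentially a choice about what we believe a pixel is.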
So we muddle through with no real standard for what a pixel actually is, and everybody just makes the best guesses they can. Or more often, ignores the question entirely...