Remember when you first figured out Santa Claus wasn't real? The growing doubt, tempered by the fact that all your friends believed in him, and surely they can't ALL be wrong, then the gradual realization that everybody was in fact wrong...
Well, I've got another one for you: the way most people do alpha blending is bogus!
At a fundamental level, alpha has no special meaning at all. Graphics cards manipulate numeric values, but what they do with these values is entirely determined by what shader code you write and which render states you set. The hardware works with Vector4 data types. Because these often represent colors, it is natural to use the first three components for red, green, and blue. That leaves a spare component, called alpha, which can be used for absolutely anything we like.
Some people use alpha to store shininess, or ambient occlusion, or a collision material ID. In the MotoGP particle system I used it to pass a per-particle random number into my vertex shader physics. But most often, alpha is used to represent transparency. It is important to understand that this is just a convention, and there are several different ways it can be done.
The majority of programmers, programs, programming APIs, file formats, etc., define transparency as:
In math:
blend(source, dest) = (source.rgb * source.a) + (dest.rgb * (1 - source.a))
In code:
RenderState.SourceBlend = Blend.SourceAlpha;
RenderState.DestinationBlend = Blend.InverseSourceAlpha;
In this world, RGB and alpha are independent. You can change one without affecting the other. Even when an object is fully transparent it still has the same RGB as if it was opaque. Thus 100% transparency can be represented by many different color values.
There isn't really a direct physical analogy for this. I guess it's similar to how a magic cloak of invisibility works in a fantasy universe, where I can be wearing the same red sweater as I am right now, and it remains red even though it currently happens to be invisible.
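If it helps to see that as code, here is the conventional blend written out as a CPU-side helper. This is just a sketch for illustration (the real blend happens in the GPU's blending hardware); the function name is made up, and it assumes XNA's Vector4/Vector3 types with alpha stored in W:

// Illustrative only: the conventional blend, evaluated on the CPU.
// source is RGBA (alpha in W), dest is the existing RGB in the framebuffer.
static Vector3 BlendConventional(Vector4 source, Vector3 dest)
{
    Vector3 sourceRgb = new Vector3(source.X, source.Y, source.Z);

    // (source.rgb * source.a) + (dest.rgb * (1 - source.a))
    return sourceRgb * source.W + dest * (1 - source.W);
}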
Here's a different way to think about transparency:
In math:
blend(source, dest) = source.rgb + (dest.rgb * (1 - source.a))
In code:
RenderState.SourceBlend = Blend.One;
RenderState.DestinationBlend = Blend.InverseSourceAlpha;
In this world, RGB and alpha are linked. To make an object transparent you must reduce both its RGB (to contribute less color) and also its alpha (to obscure less of whatever is behind it). Fully transparent objects no longer have any RGB color, so there is only one value that represents 100% transparency (RGB and alpha all zero).
This is more like how light behaves in the real world. What is the RGB of my car windscreen? None: it is transparent, so has no color. How about my sunglasses? These have a fractional alpha value (letting some light through, while blocking some) and also contribute some RGB for that nice rose-tinted glow.
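Written out the same way as the conventional sketch above, the premultiplied blend differs only in that the source RGB is used as-is, because the multiply by alpha is assumed to have already been applied to the data:

// Illustrative only: the premultiplied blend, evaluated on the CPU.
static Vector3 BlendPremultiplied(Vector4 source, Vector3 dest)
{
    // source.rgb + (dest.rgb * (1 - source.a))
    return new Vector3(source.X, source.Y, source.Z) + dest * (1 - source.W);
}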
To use premultiplied alpha, in addition to setting the appropriate renderstates, you must also convert your source graphics into premultiplied format. Drawing a non premultiplied color with premultiplied blending will not give sensible results!
To convert a non premultiplied color into premultiplied format:
color.rgb *= color.a
(hence the name "premultiplied" format)
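For instance, if your texture data is laid out as a byte array of RGBA values, the conversion is just a loop along these lines (a sketch: the function name and layout are assumptions, not part of any particular API):

// Illustrative sketch: premultiply an array of RGBA colors, one byte per channel.
static void PremultiplyAlpha(byte[] rgba)
{
    for (int i = 0; i < rgba.Length; i += 4)
    {
        int a = rgba[i + 3];

        // color.rgb *= color.a (for bytes, that means scaling by a / 255)
        rgba[i + 0] = (byte)(rgba[i + 0] * a / 255);
        rgba[i + 1] = (byte)(rgba[i + 1] * a / 255);
        rgba[i + 2] = (byte)(rgba[i + 2] * a / 255);

        // Alpha itself is left unchanged.
    }
}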
Look at the blend equations for conventional vs. premultiplied alpha. If you substitute this color format conversion into the premultiplied blend function, you get the conventional blend function, so either way produces the same end result. The difference is that premultiplied alpha applies the (source.rgb * source.a) computation as a preprocess rather than inside the blending hardware.
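Writing that out: substitute source.rgb * source.a in place of source.rgb in the premultiplied blend function, and you get

blend(source, dest) = (source.rgb * source.a) + (dest.rgb * (1 - source.a))

which is exactly the conventional blend function.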
I will write more about how to convert graphics into premultiplied format in a later post.
Premultiplied alpha is better than conventional blending for several reasons:
Remember how filtering can produce ugly fringes around the edges of alpha cutouts? Not a problem when using premultiplied alpha! Revisiting the example from my previous post:
tree = (0, 255, 0, 255)
border = (0, 0, 0, 0)
background = (0, 0, 255)
filtered = (tree + border) / 2 = (0, 128, 0, 128)
With conventional alpha blending, the result is darker than we wanted:
result = lerp(background, filtered.rgb, filtered.a) = (0, 64, 128)
But premultiplied blending produces the right answer:
result = filtered.rgb + (background * (1 - filtered.a)) = (0, 128, 128)
This works because texture filtering and premultiplied blending are both linear operations, so blending distributes over the filtering:
blend(filter(a, b), c) == filter(blend(a, c), blend(b, c))
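If you want to double check the arithmetic, here is the example worked through both ways (again just a sketch: the variable names are made up, RGB stays in the 0-255 range, and alpha is a 0-1 fraction):

// The filtered texel from the example: RGB (0, 128, 0), alpha 0.5.
Vector3 filteredRgb = new Vector3(0, 128, 0);
float filteredA = 0.5f;
Vector3 background = new Vector3(0, 0, 255);

// Conventional blend: (0, 64, 127.5), the darker-than-wanted (0, 64, 128) above.
Vector3 conventional = filteredRgb * filteredA + background * (1 - filteredA);

// Premultiplied blend: (0, 128, 127.5), the correct (0, 128, 128) above.
Vector3 premultiplied = filteredRgb + background * (1 - filteredA);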