Images with transparency are commonly stored in one of two ways:
Straight Alpha is straightforward. It's commonly used for "everyday" stuff like print and graphic design and on webpages.
Premultiplied Alpha (PMA) is more specialized: it can render a bit faster, and it has other properties that are beneficial for graphics in games and for VFX compositing.
For each of these ways of storing, there's a corresponding way of rendering: straight alpha blending and PMA blending.
In most Spine runtimes, PMA rendering is recommended. Some runtimes give you a choice, others don't.
Unfortunately, this piece of information (whether you saved Straight or PMA) is not stored in the image file itself.
So your game engine or program cannot tell whether your image was saved as Straight Alpha or Premultiplied Alpha without your intervention. You have to make sure of this yourself:
Straight Alpha images need to be rendered as Straight Alpha.
Premultiplied Alpha images need to be rendered as Premultiplied Alpha.
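For example (made-up pixel values), the same 50% transparent, pure red pixel ends up stored differently under each scheme:
Straight alpha:       R=255, G=0, B=0, A=128  (color channels stored untouched)
Premultiplied alpha:  R=128, G=0, B=0, A=128  (color channels already multiplied by the alpha)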
Here are two common problems when they don't match up**:
BLEEDING INTO TRANSPARENT AREAS
Exported Straight, rendered PMA.
You get thick colored borders or weird colored bands (from bleed***) where the areas are supposed to be transparent. The bands will normally extend to form rectangles or follow the contours of polygons/meshes.
BLACK BORDERS
Exported PMA, rendered Straight.
You get dark or black edges around your images.
==========================================================================
** If you see either of these weird things happening, make sure the settings in both the texture packer and the game engine match up. Note that sometimes your framework/game engine/SDK will do additional processing before importing images into your project, so be sure to check import settings too. For example, Unity does additional processing to imported texture assets.
*** Depending on import settings, some game development kits may add bleed to your images.
For example, Unity3D adds bleed when the "Alpha Is Transparency" checkbox is checked or when the "Sprite" texture type is used.
Also note that image assets saved with premultiplied alpha will look like they have dark edges when you open them in regular image viewers. Don't worry: that simply reflects what data is stored in the image, and in your game they will render cleanly as long as the blending is set up correctly. Most image viewers assume images are saved normally (with straight alpha).
Using premultiplied alpha also has other benefits:
- Both normal and additive blending can be rendered with premultiplied alpha using a single blend function, so you can use one shader and it won't break batching (see the sketch after this list).
- In situations involving mipmaps and common texture filtering (like linear filtering), premultiplied alpha renders more cleanly than straight alpha.
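To sketch the first point, using the same pseudocode style as the blending formulas further down: PMA blending always computes
FinalColor.rgb = Incoming.rgb + (Existing.rgb * (1 - Incoming.a))
For normal blending, Incoming.a is the pixel's real alpha. If the shader instead outputs the premultiplied color with its alpha forced to 0 (for example by tinting with a color whose alpha is 0), the second term becomes Existing.rgb * 1 and the very same blend state produces
FinalColor.rgb = Incoming.rgb + Existing.rgb
which is additive blending. The exact mechanism varies per runtime, but the point is that one blend function and one shader can cover both modes.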
The part that follows is more trivia, for the amusement of people who don't normally touch shader programs:
The term "premultiplied alpha" itself comes from the math behind image compositing.
Each pixel in an image is represented by 4 channels: the color (RGB) and the opacity (alpha).
The graphics processing math here treats each channel as a floating point fraction of 1: a decimal number between 0 and 1 inclusive (not a value from 0 to 255, or 00 to FF, as you might expect from using graphics programs). This scheme is very common.
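For example, the familiar 8-bit channel values map to fractions like this:
255 -> 1.0
128 -> 0.502 (128 / 255)
0   -> 0.0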
Rendering logic uses this data to know how to put images on top of each other correctly.
It does this by knowing the color that was already there (the existing color, already in the frame buffer) and the color it needs to put on top (the incoming color, coming from the mapped texture).
For opaque things, it's simple. No transparency. No problem. Ignore the color that was already there and only use the incoming color.
FinalColor.rgb = Incoming.rgb
But if there's some transparency, defined by the alpha channel, you need to take some of each color and mix them.
One of the standard algorithms to do this is Straight Alpha (or Post-multiplied alpha) blending:
FinalColor.rgb = (Incoming.rgb * Incoming.a) + (Existing.rgb * (1 - Incoming.a));
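As a made-up example, compositing a 50% transparent red pixel (Incoming.rgb = (1, 0, 0), Incoming.a = 0.5) over a blue background (Existing.rgb = (0, 0, 1)):
FinalColor.rgb = ((1, 0, 0) * 0.5) + ((0, 0, 1) * (1 - 0.5)) = (0.5, 0, 0.5)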
The other is Premultiplied Alpha blending:
FinalColor.rgb = (Incoming.rgb) + (Existing.rgb * (1 - Incoming.a));
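With the same made-up pixel stored premultiplied, the image already holds Incoming.rgb = (0.5, 0, 0) and Incoming.a = 0.5, so:
FinalColor.rgb = (0.5, 0, 0) + ((0, 0, 1) * (1 - 0.5)) = (0.5, 0, 0.5)
The same result as before.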
Notice how premultiplied alpha does one less multiplication (per channel).
That multiplication step was already done when the image was saved, so it can be skipped at render time.
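Here is a minimal sketch of that save-time step in C, assuming 8-bit RGBA pixels in a flat byte array (the function name and layout are just for illustration):
void premultiply_alpha(unsigned char *pixels, int pixelCount) {
    for (int i = 0; i < pixelCount; i++) {
        unsigned char *p = pixels + i * 4;        /* RGBA, 8 bits per channel */
        float a = p[3] / 255.0f;                  /* alpha as a fraction of 1 */
        p[0] = (unsigned char)(p[0] * a + 0.5f);  /* R * alpha */
        p[1] = (unsigned char)(p[1] * a + 0.5f);  /* G * alpha */
        p[2] = (unsigned char)(p[2] * a + 0.5f);  /* B * alpha */
        /* the alpha channel itself is left untouched */
    }
}
Export tools normally do this step for you, for example Spine's texture packer when its premultiply alpha option is enabled.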
This is why you see dark outlines when viewing your premultiplied images in normal image viewers: the more transparent a pixel is (the lower its alpha), the darker it gets. Any color value multiplied by an alpha below 1.0 becomes smaller, and therefore darker. With an alpha of 0, the color channels all become 0, which is black.
Those three fewer multiplication operations per pixel mean it renders/composites a little bit faster. But more than that, it also behaves better in certain texture filtering situations, and it can blend additively without changing the render state or breaking batching.
Most frameworks come with some common pre-built shaders, or give you access to the standard graphics library blend functions (OpenGL, DirectX), so you can tell them how you want blending to happen.
In OpenGL, premultiplied alpha blending is glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
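For comparison, the blend state setup for each mode in plain OpenGL looks roughly like this (a minimal sketch; your framework may wrap these calls differently):
glEnable(GL_BLEND);
/* Straight alpha blending */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* Premultiplied alpha blending */
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
/* Additive blending with premultiplied textures */
glBlendFunc(GL_ONE, GL_ONE);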
More info:
http://www.cgdirector.com/quick-tip-straight-alpha-vs-premultiplied-alpha/
https://developer.nvidia.com/content/alpha-blending-pre-or-not-pre
https://vimeo.com/11064139
More on Premultiplied Alpha (PMA) vs Straight Alpha
Nate wrote: Some back story so everyone is sure to follow along: Sometimes image pixels don't map 1 to 1 to screen pixels, e.g. when an image is scaled or rotated or placed between integer coordinates. This happens basically all the time for skeletal animation. When it happens, "filtering" is used to determine what to show on the screen. "Nearest" filtering chooses the closest pixel. This can produce jagged edges, which is OK for retro style pixel art. More often "linear" filtering is used, which averages surrounding pixels to produce a smoother image.
So, linear filtering will average the color of surrounding pixels right? What happens when some of the surrounding pixels have an alpha of zero? They still get averaged! This means the color of the pixels you can't see (alpha of zero) affect the color of the pixels you can see. Many image editing programs use an RGB of 0,0,0 for pixels with an alpha of zero, which results in the edges of your images being tinted black when linear filtering kicks in.
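A made-up example of what that averaging does: say a fully opaque white pixel (1, 1, 1, alpha 1) sits next to a fully transparent pixel the editor stored as black (0, 0, 0, alpha 0), and linear filtering samples exactly between them.
With straight alpha, the averaged texel is (0.5, 0.5, 0.5) with alpha 0.5. The straight blend formula then contributes 0.5 * (0.5, 0.5, 0.5) = (0.25, 0.25, 0.25) to the screen, darker than the (0.5, 0.5, 0.5) a half-covered white edge should contribute. That's the dark fringe.
With PMA, the white pixel is stored as (1, 1, 1, 1) and the transparent pixel as (0, 0, 0, 0). The averaged texel is (0.5, 0.5, 0.5, 0.5), and the PMA formula adds it directly: exactly (0.5, 0.5, 0.5) of white, with no darkening from the invisible pixel.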
One way to (mostly) get around this without using PMA is to use Spine's bleeding feature. However, the problem is gone completely when using PMA.
Some more detailed reading:
http://blogs.msdn.com/b/shawnhar/archive/2009/11/02/texture-filtering-alpha-cutouts.aspx
http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx