Outline Shader Reproduction In Shader Graph
Hello,
I'm currently making a custom shader to apply to some Spine skeleton materials; for convenience, I'm doing it in Shader Graph.
I found myself with two main problems, which I would like to know if anyone knows the answer to.
The first one involves how Spine generates its outline. I have two requirements: first, I'd prefer to do it all in a single draw call, and second, the outline should be part of the color channel rather than something like unlit emission, because I want to give that outline some depth with normals, etc.
I searched online for a while, and the "best" way I found to make an outline involved:
A) Having enough whitespace between each skeleton part in the atlas (handled by the Spine export settings by default).
And B) Basically making a copy of the atlas, offsetting that copy towards each side (up, down, left, right, optionally diagonals), combining those offset copies, drawing the result first, and drawing the skeleton on top (roughly sketched below).
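Written out as an HLSL custom function, the per-pixel version of that approach is roughly this (just my own sketch of step B above; all the names like OutlineByOffsets, tex, smp and outlineColor are placeholders, nothing taken from the Spine shaders):

    // Sketch of the offset-and-combine outline idea from step B above.
    // All names are placeholders.
    float4 OutlineByOffsets(float2 uv, float2 texelSize, float outlineWidth,
                            Texture2D tex, SamplerState smp, float4 outlineColor)
    {
        float4 center = tex.Sample(smp, uv);

        // Take the maximum alpha of the four offset "copies" (up, down, left, right).
        float2 o = texelSize * outlineWidth;
        float a = 0.0;
        a = max(a, tex.Sample(smp, uv + float2( o.x, 0)).a);
        a = max(a, tex.Sample(smp, uv + float2(-o.x, 0)).a);
        a = max(a, tex.Sample(smp, uv + float2(0,  o.y)).a);
        a = max(a, tex.Sample(smp, uv + float2(0, -o.y)).a);

        // Hard threshold: the outline is either fully on or fully off,
        // which is where the jagged edges mentioned below come from.
        float4 outline = outlineColor * step(0.5, a);

        // The outline is drawn first, the skeleton pixel goes on top.
        return lerp(outline, center, center.a);
    }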
The issue I'm having with this is that the end result can sometimes have jagged outlines (it all depends on atlas size and outline width), so I was wondering how the now built-in outline handles transparency at the edges to stay smooth without AA. Rather than a direct solution, I'd appreciate a breakdown of the outline process, so I can understand it, tweak it as necessary, and add the outline as a sub graph to reuse later.
The second involves the generated Spine mesh, which is optimized so tightly that the runtime mesh sometimes barely scrapes past the visible pixels, leaving no room for an outline.
I saw in the forums that manually adjusting the mesh of all attachments to include transparent pixels was a recommended approach. I was just wondering if there is a more time-efficient way to go about it, as this process will also involve taking care of "invisible" pixel deformation (because I want to add normals, etc.) and ensuring the non-outline part of the mesh still looks good.
Worst case scenario, I'll start using attachments with opaque outlines, and swap them for transparent outline versions before exporting, or whatever people suggest.
Kind of a long post, but I'd really appreciate if someone could help me with this. Thank you in advance, and have a nice day.
Arzola wrote: The first one involves how Spine generates its outline, as I have two requirements, first it would be to preferably do it all in a single call
To do it in a single draw call with proper alpha blending (ZWrite disabled), you would have to double the geometry for the outline triangles of the first pass, and pass down outline vs. non-outline triangle info to the shader via vertex attributes. Alternatively, you can of course double each attachment in Spine as mentioned, and pre-bake the outline in e.g. Photoshop.
Anything without doubling the geometry (either via a second pass or via adding triangles to the buffers) will lead to incorrect inner outlines, i.e. outlines around each attachment instead of only at the outer border of the skeleton. If you are ok with this and also with sacrificing alpha blending quality, you can enable ZWrite and do it in a single pass without doubling the geometry.
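To illustrate the vertex attribute idea, the fragment function would simply branch on a per-vertex flag that marks the duplicated outline triangles. This is only a rough sketch under my own assumptions (the flag channel and the property names _OutlineColor and _OutlineWidth are placeholders), not the actual Spine shader code:

    // Sketch only: assumes the duplicated outline triangles carry a flag
    // in a spare texcoord channel that the regular body triangles do not.
    sampler2D _MainTex;
    float4 _MainTex_TexelSize;   // Unity fills this with (1/w, 1/h, w, h)
    float4 _OutlineColor;
    float  _OutlineWidth;

    struct VertexOutput {
        float4 pos       : SV_POSITION;
        float2 uv        : TEXCOORD0;
        float  isOutline : TEXCOORD1;  // 1 on the duplicated outline triangles, 0 otherwise
        float4 color     : COLOR;
    };

    float4 frag(VertexOutput i) : SV_Target
    {
        float4 texColor = tex2D(_MainTex, i.uv);
        if (i.isOutline > 0.5)
        {
            // Outline triangles (drawn first): simple 4-tap neighbourhood check;
            // see the alpha-sum variant further below for the smooth version.
            float2 o = _MainTex_TexelSize.xy * _OutlineWidth;
            float a = max(max(tex2D(_MainTex, i.uv + float2( o.x, 0)).a,
                              tex2D(_MainTex, i.uv + float2(-o.x, 0)).a),
                          max(tex2D(_MainTex, i.uv + float2(0,  o.y)).a,
                              tex2D(_MainTex, i.uv + float2(0, -o.y)).a));
            return float4(_OutlineColor.rgb, _OutlineColor.a * a * (1.0 - texColor.a));
        }
        // Body triangles: regular textured output, rendered on top of the outline.
        return texColor * i.color;
    }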
Arzola wrote: The issue I'm having with this is pretty much that the end result can sometimes end with jagged outlines (but it all depends on atlas size + outline width), so I was wondering how the now built in outline handled transparency at the edges to be smooth without AA
AA has nothing to do with semi-transparent border regions, it's normal alpha blending. The shader basically samples the texture at 4 or 8 neighbouring locations in all directions (offset from the center == outline width), and then the resulting outline alpha value is determined based on how many opaque pixels were found (with an 8-neighbourhood this can be 3 at a straight border vs. 1 at a corner). This is done by summing up the surrounding pixels' alpha values. The remaining code in the computeOutlinePixel function just adds some customizable parameters and thresholds to it, e.g. stretching result values from alpha 0 - 0.3 to 0 - 1.
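In simplified form, the idea looks like this (my own sketch, not the literal computeOutlinePixel code; the function name, parameters and remapping are placeholders):

    // Simplified sketch of the neighbourhood-sum idea, not the literal
    // computeOutlinePixel code.
    float4 OutlineBySum(float2 uv, float2 texelSize, float outlineWidth,
                        float threshold, Texture2D tex, SamplerState smp,
                        float4 outlineColor)
    {
        float2 o = texelSize * outlineWidth;

        // Sum the alpha of the 8 surrounding samples (offset == outline width).
        float sum = 0.0;
        sum += tex.Sample(smp, uv + float2(-o.x, -o.y)).a;
        sum += tex.Sample(smp, uv + float2( 0.0, -o.y)).a;
        sum += tex.Sample(smp, uv + float2( o.x, -o.y)).a;
        sum += tex.Sample(smp, uv + float2(-o.x,  0.0)).a;
        sum += tex.Sample(smp, uv + float2( o.x,  0.0)).a;
        sum += tex.Sample(smp, uv + float2(-o.x,  o.y)).a;
        sum += tex.Sample(smp, uv + float2( 0.0,  o.y)).a;
        sum += tex.Sample(smp, uv + float2( o.x,  o.y)).a;

        // Remap the accumulated alpha instead of hard-thresholding it,
        // e.g. stretching 0..(threshold * 8) to 0..1. The gradual alpha at
        // the border is what keeps the edge smooth without any AA.
        float outlineAlpha = saturate(sum / (threshold * 8.0));

        // Composite the sprite pixel over the outline.
        float4 center  = tex.Sample(smp, uv);
        float4 outline = float4(outlineColor.rgb, outlineColor.a * outlineAlpha);
        return lerp(outline, center, center.a);
    }

Plugged into a Custom Function node you could also output just outlineAlpha instead of the composited color, and feed it into your own normal/lighting setup in the graph.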
Arzola wrote: I saw in the forums that manually adjusting the mesh of all attachments, to include transparent pixels, was a recommended approach. I was just wondering if there was a more time efficient way to go about it [..]
If you know it in advance, then it will be much easier to create larger meshes with the proper space around them in the first place.
Arzola wrote: [..] as this process will also involve taking care of "invisible" pixel deformation, (because I want to add normals, etc), and ensuring the non outline part of the mesh still looks good.
What do you mean by "invisible pixel deformation"? Normals just affect the lighting result; there is no deformation taking place. If you intend to add height maps and parallax occlusion mapping or the like, I still don't see why adding an outline or enlarging the mesh should cause any problems; any outline pixel should just have a height of 0.
you would have to double the geometry for the outline triangles of the first pass, and...
Hmm, I see, thank you, I'll think about which approach to take for this.
AA has nothing to do with semi transparent border regions, it's normal alpha blending.
Yeah, thank you, this is more or less the kind of correction I wanted someone to give me, as I was out of clues on how to go about it. It does make sense to look at the adjacent opaque pixels to determine opacity.
What do you mean by "invisible pixel deformation"?
Maybe I was just mistaken, do correct me about this, but my line of thought went something like this:
- I edit the mesh in Spine, adding the whitespace needed to draw the outline (via attachments with lenient padding, etc.), instead of having the mesh vertices be as close to the edges as possible.
- (This, in conjunction with the next "step", might be where I'm wrong.) Having added this whitespace means that, while in Spine I'm able to see how the non-outline parts of the skeleton are deformed and can adjust weights, etc. accordingly, I'm unable to see how the outermost mesh pixels are deformed.
- These deformations, visible or not, will be tied to the runtime UV coordinates used to draw the final mesh. Given that these UV positions are used by both the normal and the color map, it's possible that some animations involve stretching and squashing, which will in turn stretch and squash the areas of the normal map that are used to draw those specific atlas pixels.
Again, I might be totally wrong in my line of thought, so do correct me if I misunderstood something.
Thank you.
Arzola wrote: 2. (This, in conjunction with the next "step" might be where I'm wrong) Having added this whitespace, means that while on spine I'm able to see how the non outline parts of the skeleton are deformed, and am able to adjust weights, etc. accordingly, I'm unable to see how the outermost mesh pixels are deformed.
Then it would be easiest to add an outline in the input images already (e.g. adding a Stroke effect in Photoshop on all attachment layers), so that you can see exactly where your outline is. Before exporting the final atlas you could then disable the outline again.
Arzola wrote: 3. These deformations, visible or not, will be tied to the runtime UV coordinates that will be used to draw the final mesh, and given how these UV positions are used by both the normal and color map
Just to prevent any misunderstanding in advance: bone weights at each vertex will not affect the UV coords; these stay fixed. They only affect vertex movement, stretching an already-mapped triangle differently.
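In rough pseudo-shader terms, and assuming GPU skinning purely for illustration (the Spine runtimes actually transform vertices on the CPU, but the principle is the same; all names here are placeholders):

    // Illustration only: weights blend the bone transforms applied to the position,
    // while the UV is passed through untouched.
    float4x4 _BoneMatrices[64];

    void SkinVertex(float3 positionOS, float4 boneWeights, uint4 boneIndices,
                    float2 uv, out float3 skinnedPos, out float2 outUV)
    {
        // The vertex position is a weighted blend of the bone transforms...
        skinnedPos = float3(0, 0, 0);
        for (int b = 0; b < 4; b++)
            skinnedPos += boneWeights[b] *
                          mul(_BoneMatrices[boneIndices[b]], float4(positionOS, 1.0)).xyz;

        // ...but the UV coordinate is copied through unchanged, so the same texel
        // region (color map and normal map alike) is simply stretched across the
        // differently shaped triangle.
        outUV = uv;
    }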
Arzola wrote: [..] it would be possible that some animations involve stretching and squashing, which will in turn, involve stretching and squashing the areas of the normal map that are used to draw those specific atlas pixels.
Here I don't understand why stretching the normal map in the transparent (outline) area would cause any problems different from those in normal opaque colored areas. Could you please describe what specific problem you see there?