
Subdivision Geometry Shader

Discussion in 'Shaders' started by DerWoDaSo, Apr 19, 2017.

  1. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    Hey everybody,

    I have a problem with a geometry shader I am using to subdivide triangles for Lonely Mountains: Downhill. I don't want interpolation across the triangles, since I want a uniform color and hard edges per final triangle (imagine a low-poly mesh). The following GIF hopefully explains it, but I will also go through the (slightly simplified, but hopefully more understandable) process below...



    1. For each original triangle, three new vertices are created which lie on its edges - something like this:

    Code (CSharp):
    float lerpValue = SomeIrrelevantFunction(); // returns a value between 0.3 and 0.7
    float oneMinusLerpValue = 1 - lerpValue;
    newVertex01.pos = input[0].pos * lerpValue + input[1].pos * oneMinusLerpValue;
    newVertex12.pos = input[1].pos * lerpValue + input[2].pos * oneMinusLerpValue;
    newVertex20.pos = input[2].pos * lerpValue + input[0].pos * oneMinusLerpValue;
    * Code is not C# of course...

    2. For each output triangle (4 per original one), I write the data to the OutputStream:

    Code (CSharp):
    // Create center triangle:
    OutputStream.Append(newVertex01);
    OutputStream.Append(newVertex12);
    OutputStream.Append(newVertex20);
    OutputStream.RestartStrip();

    // Create triangle connected to vertex[0]
    OutputStream.Append(input[0]);
    OutputStream.Append(newVertex01);
    OutputStream.Append(newVertex20);
    OutputStream.RestartStrip();

    // Create triangle connected to vertex[1]
    OutputStream.Append(input[1]);
    OutputStream.Append(newVertex12);
    OutputStream.Append(newVertex01);
    OutputStream.RestartStrip();

    // Create triangle connected to vertex[2]
    OutputStream.Append(input[2]);
    OutputStream.Append(newVertex20);
    OutputStream.Append(newVertex12);
    OutputStream.RestartStrip();
    Everything works fine so far, except for the little one-pixel-sized holes appearing mainly along the original edges of the triangles. As far as I can tell these are caused by floating point imprecision. But how do I solve this problem?

    I tried to extend each new triangle by a tiny fraction, which works for most cases. However, at low angles and larger distances quite a lot of holes still appear for reasonable values of "dist". With larger values I get heavy antialiasing problems and z-fighting.

    Code (CSharp):
    // Push vertex outwards by a tiny fraction of its size (dist < 0.01)
    input[0].pos += (input[0].pos * 2 - newVertex01.pos - newVertex20.pos) * dist;
    Any other possible solutions? Drawing the original mesh again with a small zOffset and zTest set to Greater would solve it, but that seems like overkill for effectively 0-5 pixels per frame.
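For illustration, the floating-point mismatch described above can be reproduced on the CPU (a sketch in Python with doubles and made-up numbers; GPU float32 behaves analogously): two algebraically identical ways of computing the same split point can differ in the last bit, which is enough to open a one-pixel hole between triangles that evaluate the shared point with different expressions.

```python
def split_a(a, b, t):
    # split point as one triangle might compute it
    return a * t + b * (1 - t)

def split_b(a, b, t):
    # algebraically the exact same point, different operation order
    return b + (a - b) * t

a, b = 12.34, 56.78
mismatches = [i for i in range(1, 101)
              if split_a(a, b, i / 101) != split_b(a, b, i / 101)]
# Many t values disagree in the last ulp even though the math is identical;
# only bit-identical expressions guarantee bit-identical vertices.
```

Note that t = 0.5 is special: multiplying by 0.5 is exact in binary floating point, so the split lands on the same bits however the sum is grouped.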

    I also wonder how regular meshes with split edges (hard edges) manage to render without these problems, and what can be done about that in the geometry shader.

    Thanks a lot for reading. I would be grateful for any input! :)

    Jan
     
  2. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
  3. slembcke2


    Joined:
    Jun 26, 2013
    Posts:
    270
    You run into the same problems when writing tessellation shaders. It's the shader's responsibility to ensure that the geometry has no cracks or "T joins" (I forget the real name) where you place a vertex on the edge of another polygon.

    Can you modify your output so that adjacent triangles subdivide at the same spot?
     
  4. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    I can make them subdivide exactly in the middle (i.e. setting the lerpValue in the code snippet above to 0.5), however that does not fix the problem.

    By the way, this is what the outer part of the main function of my geometry shader looks like:

    Code (CSharp):
    [maxvertexcount(48)]
    void geom(triangle v2f input[3], inout TriangleStream<v2f> OutputStream)
    {
        [...]
    }
     
  5. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Yeah, Unity has no plans to add support for adjacency information, and the problem with hard edges is an issue even with your average tessellation shader. The solution I've seen to dealing with seams from hard edges when doing vertex manipulation or tessellation is to store an averaged normal into either the mesh's tangent or an extra UV set. You can do this either using vertex streams, or creating a new mesh asset, or via an asset preprocessor.
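The averaging step such a preprocessor would perform can be sketched language-agnostically (Python here purely for illustration; in Unity it would live in C# asset-processing code, and all names below are made up): vertices sharing a position are welded and their normals averaged, so co-located vertices carry identical data.

```python
import math
from collections import defaultdict

def averaged_normals(positions, normals):
    # Group vertex indices by (exact) position; real tools usually
    # weld within a small distance tolerance instead.
    groups = defaultdict(list)
    for i, p in enumerate(positions):
        groups[p].append(i)
    out = [None] * len(positions)
    for indices in groups.values():
        # Sum and renormalize the normals of all co-located vertices.
        sx = sum(normals[i][0] for i in indices)
        sy = sum(normals[i][1] for i in indices)
        sz = sum(normals[i][2] for i in indices)
        length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
        averaged = (sx / length, sy / length, sz / length)
        for i in indices:
            out[i] = averaged  # store e.g. into the tangents or an extra UV set
    return out
```

The result would then be written into the mesh channel the shader reads the averaged normal from, as described above.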

    The other solution is to use a compute shader to create your tessellated mesh and DrawProcedural to render it, but that requires you to pass the vertex information of your mesh to the compute shader manually.
     
  6. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    @bgolus: I didn't really understand how storing the normal in the tangent helps with the "seams". I used that for blending between a hard and a smooth mesh, which gives some interesting possibilities.
    Example here: https://twitter.com/DrWDSo/status/762718292192727040

    That might be something I could try when I have more time. Thanks for the input. :)
     
  7. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    I was thinking of the issue being caused by what tessellation is usually used for, which is smoothing out the surface shape, in which case having a split edge will cause a large gap or intersecting geometry. I realize you're just having problems with floating point accuracy from having the edge split at different points, thus the edges don't line up perfectly.

    There are two solutions to that, one is don't split the edges at different points, obviously, but since the edges are split and you're not actually using the same vertices there's a chance they might still not match. The other, ugly, but used-more-often-than-you-think solution is to add some flange geometry to the edges of your original triangle. Basically in your geometry shader when creating your mesh also "extrude" the geometry edges down. You can actually see something like this being done in the recent Horizon: Zero Dawn gif that's been going around.
    https://i.kinja-img.com/gawker-media/image/upload/ucoln8kedwfglsrlxvm5.gif

    Notice how all of the terrain tiles seem to have a constant thickness to them, that's additional geometry around each tile to hide the exact same error you're seeing when two nearly identical but not actually identical edges leave single pixel holes.

    Also if you always want faceted geometry, and never want any smoothed edges, you don't need to use vertex normals at all and you're better off just having all of your meshes use welded vertices / smoothed normals stored in the vertices. This will prevent holes from appearing in shadows when using normal biasing as well.
     
  8. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Here's a snippet of code to get the surface world normal in the fragment shader using only the world position.

    Code (CSharp):
    float3 facetted_normal( float3 worldPosition )
    {
        // get edge vectors of the pixel triangle
        float3 dp1 = ddx( worldPosition );
        float3 dp2 = ddy( worldPosition ) * _ProjectionParams.x;

        return normalize(cross(dp1, dp2));
    }
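For anyone wanting to sanity-check what this reconstructs: it is the ordinary geometric face normal, normalize(cross(p1 - p0, p2 - p0)), up to the projection-flip sign that _ProjectionParams.x handles. A CPU version for comparison (Python, purely illustrative):

```python
import math

def face_normal(p0, p1, p2):
    # Edge vectors of the triangle...
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # ...and their cross product, normalized.
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```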
     
  9. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    Yes, the first solution does not really bring any big improvements unfortunately.

    The second - the extension of the triangle - is what I tried with the third code snippet in the first post. It works, but gives some weird artifacts in the distance; still better than those holes though.

    If I use a smooth geometry as input and use the geometry shader just to generate faceted normals (calculate the normal of the triangle from its positions and use that normal for all three vertices), I don't get these issues at all. Maybe the adjacency information is still valid then (I don't know much about that so far)?


    For those curious, I uploaded a demo project (currently using Unity 5.6f3):

    http://janbubenik.de/extern/GeomShader.zip
    Open the scene and hit play. There should be some individual pixels popping up on the plane.
     
  10. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Normal biasing uses the normal, so if you're storing the faceted normal in the tangent that would also solve it.
     
  11. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    But that's only for shadow maps, right?
     
  12. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Yes. It's a similar, but different issue since normal biasing is actually pushing the vertices apart (though the single pixel holes can still show up occasionally).
     
  13. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    It looks like the code you're talking about just takes the triangle and slightly enlarges it so the edges overlap a bit. I'm talking about adding additional edge geometry that's extruded down. It's potentially even less efficient than your idea of drawing the tri a second time with an offset.

    Honestly, scaling the triangle up by half a pixel might be the "best" solution for you. It's a little more work than just scaling by dist and some magic number, but it would have fewer artifacts than your current solution.
     
  14. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    I'm curious if you still get holes if you use this instead of the original function.
    Code (CSharp):
    newVertex01.pos = (input[0].pos + input[1].pos) * 0.5;
    newVertex12.pos = (input[1].pos + input[2].pos) * 0.5;
    newVertex20.pos = (input[2].pos + input[0].pos) * 0.5;
    Subtly different from your original code using lerpValue and 1 - lerpValue, and algebraically identical to it when lerpValue = 0.5, but potentially more stable with respect to floating point accuracy.
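The key property of the symmetric form can be illustrated on the CPU (a Python sketch with made-up numbers): IEEE addition is commutative, so two adjacent triangles that see the shared edge's endpoints in opposite winding order still compute a bit-identical midpoint, whereas any asymmetric split at t ≠ 0.5 places two genuinely different points on the same edge.

```python
def midpoint(a, b):
    # symmetric in a and b -> same bits no matter which triangle computes it
    return (a + b) * 0.5

def lerp_split(a, b, t):
    # asymmetric: swapping a and b gives a different point unless t == 0.5
    return a * t + b * (1 - t)

a, b = 3.25, 9.75
same = midpoint(a, b) == midpoint(b, a)                    # always True
differs = lerp_split(a, b, 0.3) != lerp_split(b, a, 0.3)   # True: the two
# sides of the edge are split 0.4 * (b - a) apart -> a visible T-junction
```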
     
  15. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    I was wrong before: setting the lerpValue to 0.5 fixes the problem. (I am also skipping the subdivision for triangles that are further away from the camera, which causes the same issue (T-junctions, or whatever it's called) during the transition.) So basically I need to figure out how to get that look without the irregular subdivision...

    lerpValue of 0.5 (too regular triangles)
    subdivision_regular.gif

    "random" lerpValue (nice broken up look I want to have)
    subdivision_irregular.gif
     
  16. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    One more idea for a fix: don't split the geometry; just pass the barycentrics and lerpValues for each edge and do the color split in the fragment shader.
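A sketch of the classification this idea implies, for the regular (lerpValue = 0.5) case (Python purely for illustration; the shader version would do the same test on interpolated barycentrics, and the sub-triangle names are made up):

```python
def sub_triangle(b0, b1, b2):
    # Barycentric coordinates of a point in the triangle (b0 + b1 + b2 == 1).
    # For midpoint subdivision, the corner sub-triangle at vertex i is
    # exactly the region where b_i >= 0.5; the rest is the center triangle.
    if b0 >= 0.5:
        return "corner0"
    if b1 >= 0.5:
        return "corner1"
    if b2 >= 0.5:
        return "corner2"
    return "center"
```

The fragment shader would then pick a flat color per returned sub-triangle; handling an irregular per-edge lerpValue (and a second subdivision level) is the complication that makes this harder in practice.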
     
  17. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    In the fragment shader? I could try that... but actually I am subdividing twice, so it all gets a bit more complicated. But I will think about it! Thanks a lot for your help!
     
  18. slembcke2


    Joined:
    Jun 26, 2013
    Posts:
    270
    Very nice! Almost a painted sort of look to it.

    Is the "precise" keyword supported in Unity? I've used "invariant" in GLSL once to fix a similar issue where different primitives would process vertices in a different order and get a slightly different result.

    Also, I have a GLSL shader lying around somewhere in an experiment project that does barycentric subdivision in the frag shader like @bgolus suggested. It even did some fwidth() stuff to AA the edges. It was sort of recursive, in that you could just iterate it to find smaller and smaller triangles containing the current pixel. I'll dig around for it tonight when I get home, if I remember.
     
    DerWoDaSo likes this.
  19. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    I don't have too much time right now for the shader, since we are preparing to show the game at the Quo Vadis & A.MAZE in Berlin next week, but here is a little improvement:

    subdivision_keepSharedVertices.gif

    By offsetting only the new vertices and keeping the original vertices in place (e.g. lower left & right), the z-fighting and overlapping issues become much less noticeable.

    Offsetting them by only half a pixel seems pretty good, however it has to be as cheap as possible, since we are dealing with about 1 million (subdivided) vertices per scene in the end (the original vertex count will be around ~50k max).
    I will give that a try and then maybe do some experiments with creating the effect in the vertex shader.

    @slembcke2: if you find anything I would be more than happy to have a look!
     
    bgolus likes this.
  20. slembcke2


    Joined:
    Jun 26, 2013
    Posts:
    270
    I wasn't. It's probably buried somewhere in the git history of my "experiments" project... on one of my computers...

    I was going to try and remember/rewrite it as a shadertoy for fun, but haven't had the time. :-\ Might give it a go this weekend if I have the time.
     
  21. macdude2


    Joined:
    Sep 22, 2010
    Posts:
    686
    Using the fragment shader will be much more expensive than doing this in the vertex shader, though it would likely solve the problem.
    So if it works for a value of .5, why don't you make a function that just picks from a subset of "clean" floating point values? I'd assume using a lerp value of exactly .25, .4, .3, .35, etc. would also keep from creating pixel errors?
     
  22. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Possibly; geometry shaders are actually pretty expensive, even for simple operations. Certainly my suggestion of outputting data from the geometry shader to use in the fragment shader would be more expensive, but if the minimum necessary data could be encoded in the mesh vertices beforehand, it's possible it would be faster as a fragment shader.

    0.5 works because the edges of adjacent triangles are being split in the exact same spot; technically any value would work if it could be guaranteed that adjoining edges are split at the same point. The issue is that the desired effect is precisely for the edges not to line up.
     
  23. slembcke2


    Joined:
    Jun 26, 2013
    Posts:
    270
    Yeah, geometry shaders have some weird (hardware specific) performance pitfalls. I wouldn't be so quick to say it's obviously more expensive. Also, given the style the OP is going for, I'm guessing they aren't hurting for ALU cycles, and it's really expensive to not use them. ;)
     
  24. DerWoDaSo


    Joined:
    May 25, 2009
    Posts:
    131
    Hey,

    We had some pretty good first playtests at the QuoVadis & A.MAZE last week. I will definitely continue on this topic (especially since we hit some performance issues with the geometry shader on lower-end hardware), but for now I will leave it with the solution above (shifting only the newly created vertices). If you take a screenshot and zoom in you can see minor double edges in some rare cases, but during actual gameplay nobody (including me) noticed the error.

    Here's a screenshot of the demo we showed there:

    lonelymountains_downhill_007.jpg
     
    JJ_FX, JVaughan and MosesSunny like this.
  25. MosesSunny


    Joined:
    May 31, 2017
    Posts:
    1
    That's really great!! But I have one question: is it possible to subdivide a particular area rather than the whole mesh in Unity?