
Passing custom vertex attributes to a Cg shader

Discussion in 'Shaders' started by Dayman, Jul 23, 2014.

  1. Dayman

    Dayman

    Joined:
    May 9, 2013
    Posts:
    3
    I'm working on a Minecraft-style voxel game, and I'm trying to implement a lighting model where each face of a cube has four light values (one at each vertex) and is shaded by interpolating among those four values. Unfortunately, because each quad face is actually composed of two triangles, if you simply set each vertex color to match its lighting value, you get ugly diagonal artifacts, as in the following image:

    [Image: screenshot of the diagonal shading artifacts across quad faces]

    I think the solution to this problem is to write a custom shader where each vertex also knows the location and light value of the other three vertices that make up the square, then use bilinear interpolation in the fragment shader to find the properly blended value.

    HOWEVER, I'm pretty new to writing shaders, and I'm having a lot of trouble finding the answer to this question: in my C# code, how do I encode the vertices in the mesh with extra data that can then be accessed in the Cg shader? I can't figure out how to pass any vertex data other than the standard attributes (normal, texcoord, tangent, etc.) to the shader. Is this even possible? If so, if someone could share a link explaining how, it would be a huge help.

    If that's not possible (or even if it is), I would also appreciate any other ideas about alternative ways I could accomplish what I'm trying to do.

    Thanks for your help!
     
  2. metaleap

    metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    Per-vertex values (whether "lights" or colors or texture coordinates) are linearly interpolated between vertices by the hardware, as you probably know.

    Necessarily then, if you insist on those values being per-vertex, you'd need higher tessellation on the cube to reduce (but not eliminate) the artifact. You cannot (as far as Unity Cg is concerned) override or customize what kind of interpolation is used across vertices.

    Usually the answer would be "use per-pixel lighting and not per-vertex lighting" but you say each vertex "contains a light value" (in the mesh data? so just a vertex color with your own "it's a light" semantic?), so I have no clue what exactly this is meant to accomplish tbh ;)
     
  3. Dayman

    Dayman

    Joined:
    May 9, 2013
    Posts:
    3
    Thanks for the answer metaleap, that's basically what I was afraid of. I am indeed trying to somehow "fake" a different kind of interpolation for quad faces.

    As for what I'm trying to accomplish--the entire game world is procedurally generated, with each vertex and tri defined in code on the fly. It uses a volumetric lighting algorithm to calculate the light value at each vertex, then (ideally) shades each square face with a smooth gradient between its four corners. Adding more geometry is pretty much out of the question, because generating a lot of terrain with a large view distance using this technique already makes tons of tris.

    If that's still not clear, just think of Minecraft. Everything described above is basically exactly how Minecraft works behind-the-scenes (as far as I know).

    The idea of baking additional information into the vertices came after reading this post on Stack Exchange. That guy's dealing with basically the same problem (though in OpenGL), and one of the recommendations is:
    "you can solve this with a shader. Say you pass an extra attribute per-vertex that corresponds to (u,v) = (0,0) (0,1) (1,0) (1,0) all the way to the pixel shader (with the vertex shader just doing a pass-through)."
    Nobody mentions how one would actually implement such a scheme, though, and I don't know if it's even possible in Unity Cg.

    I hope that makes the problem a little more clear. Does anyone have an idea how to approach this? Thanks!
     
  4. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    That solution you mention doesn't interpolate the light values directly; instead it interpolates the position on the quad and then calculates the actual light value per pixel. So you do a flat UV mapping on the face and supply that UV mapping to the pixel shader. Then l1, l2, l3 and l4 are the corner light intensities, and the pixel shader combines them:

    Code (csharp):

        float l12 = lerp(l1, l2, uv.x);
        float l34 = lerp(l3, l4, uv.x);
        float result = lerp(l12, l34, uv.y);
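
    For context, here's a rough, untested sketch of how those lerps might sit in a full Cg vertex/fragment pair. The _L1 through _L4 names are made-up properties for a single quad (not anything from Unity's standard includes), and the flat UV mapping arrives through the mesh's texcoord channel:

    Code (csharp):

        // Rough sketch, not tested: _L1.._L4 are made-up properties
        // holding the quad's four corner light intensities.
        float _L1, _L2, _L3, _L4;

        struct v2f {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0; // flat 0..1 mapping over the quad
        };

        v2f vert (appdata_base v) {
            v2f o;
            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
            o.uv = v.texcoord.xy; // plain pass-through
            return o;
        }

        fixed4 frag (v2f i) : COLOR {
            float l12 = lerp(_L1, _L2, i.uv.x);
            float l34 = lerp(_L3, _L4, i.uv.x);
            float light = lerp(l12, l34, i.uv.y);
            return fixed4(light, light, light, 1);
        }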
     
  5. metaleap

    metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    Just generally speaking (and not really solving your specific issue): it doesn't matter to a (vert or frag) shader where your geometry comes from, whether loaded model data or on-the-fly procedural generation. You could still apply standard per-pixel lighting to the scene, and personally I'd say you'd be well advised to do so ;)
     
  6. Dayman

    Dayman

    Joined:
    May 9, 2013
    Posts:
    3
    jvo3dc - Thanks for the help, that makes sense to me, and I think I understand how to implement it, except for one thing. In the example code, you mention the four light values (l1, l2, l3, l4) are used in the fragment shader to calculate the final color value at the pixel. However, I'm still unclear as to how the shader accesses these four values in the first place. They would have to be baked into the individual vertices in code, right? If so, I haven't been able to find any resources explaining how to do that.

    I'm sorry if this is an obvious question, but I'm pretty new to shader programming and I haven't been able to find any online guides or tutorials that do anything like what I'm trying to accomplish.

    metaleap - The reason I'm using this baked per-vertex lighting model is because the game uses a flood-fill lighting algorithm that allows for semi-realistic lighting behavior such as torchlight propagation in a pitch-black mine. I agree that just using a standard lighting model would be easier, but I don't know of any way to do that and still achieve Minecraft-style lighting behavior.

    Thanks again to both of you for your help.
     
  7. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Actually, the l1-l4 values should be passed as global settings for the render. I am assuming you are rendering one quad at a time.

    If you want to render a complete model at once, you'd in effect have to store the l1-l4 values per quad, which is not directly possible. So instead, you'll have to store all four of the l1-l4 values into each of the four vertices of the quad.
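
    For example, a rough C# sketch (names like quadStart, colors and uv2 are made up; this assumes you're already building the mesh arrays yourself): bake the same four light values into all four vertices through the color channel, and each vertex's corner position through a second UV channel, which the shader then reads as COLOR and TEXCOORD1:

    Code (csharp):

        // Rough sketch: quadStart is the index of this quad's first
        // vertex in your (hypothetical) per-vertex arrays.
        Color packed = new Color(l1, l2, l3, l4); // all four light values
        Vector2[] corners = {
            new Vector2(0, 0), new Vector2(1, 0),
            new Vector2(0, 1), new Vector2(1, 1)
        };
        for (int i = 0; i < 4; i++) {
            colors[quadStart + i] = packed;  // -> COLOR in the shader
            uv2[quadStart + i] = corners[i]; // -> TEXCOORD1 in the shader
        }
        // After filling the arrays for every quad:
        mesh.colors = colors;
        mesh.uv2 = uv2;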
     
  8. stulleman

    stulleman

    Joined:
    Jun 5, 2013
    Posts:
    44
    I know it's an old topic but did you finally get it to work?