Vertex displacement using DX9

Discussion in 'Shaders' started by crushy, Apr 22, 2014.

  1. crushy

    crushy

    Joined:
    Jan 31, 2012
    Posts:
    35
    Hello everyone. I'm facing a bit of a problem here. We're trying to deform vertices according to height data stored in a texture. However, using tex2D directly in a vertex shader doesn't work. Things like tex2Dlod work, but they push us over the edge into DX11 territory and we're trying to remain DX9 compatible. :confused:

    I'm sure there's a way to do this, as numerous engines from the 00s have done it, but I'm not finding any good solutions. Does anyone have any ideas?
     
  2. mouurusai

    mouurusai

    Joined:
    Dec 2, 2011
    Posts:
    350
    As far as I can remember:
    #pragma target 3.0
    #pragma glsl
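    For example, a minimal displacement shader built around those two pragmas could look something like this (untested sketch; the _HeightMap / _Displacement properties and displacing along the normal are just assumptions about the setup):

    Shader "Custom/VertexHeightDisplace" {
        Properties {
            _MainTex ("Base (RGB)", 2D) = "white" {}
            _HeightMap ("Height Map", 2D) = "black" {}
            _Displacement ("Displacement", Float) = 1.0
        }
        SubShader {
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0   // vertex texture fetch needs SM 3.0, not DX11
                #pragma glsl         // lets the OpenGL path compile tex2Dlod too
                #include "UnityCG.cginc"

                sampler2D _MainTex;
                sampler2D _HeightMap;
                float _Displacement;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float2 uv  : TEXCOORD0;
                };

                v2f vert (appdata_base v) {
                    v2f o;
                    // tex2Dlod with an explicit mip level is the sampling call that
                    // works at the vertex stage; plain tex2D needs screen-space
                    // derivatives, which don't exist in a vertex shader.
                    float height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
                    v.vertex.xyz += v.normal * height * _Displacement;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv  = v.texcoord.xy;
                    return o;
                }

                fixed4 frag (v2f i) : COLOR {
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }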
     
  3. crushy

    crushy

    Joined:
    Jan 31, 2012
    Posts:
    35
    Won't setting "target 3.0" make the shader incompatible with most hardware?
     
  4. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
  5. crushy

    crushy

    Joined:
    Jan 31, 2012
    Posts:
    35
    Hmm, never mind, I seem to have misread some things and assumed.

    Still, that does raise the question of how games like Perimeter and Startopia managed to do vertex displacement. They seem to have used height displacement maps of some sort while targeting DirectX 8.
     
  6. mouurusai

    mouurusai

    Joined:
    Dec 2, 2011
    Posts:
    350
    Why not CPU?
     
  7. crushy

    crushy

    Joined:
    Jan 31, 2012
    Posts:
    35
    Rebuilding the mesh every frame? Sounds needlessly complicated, but it might be a good alternative.
     
  8. Gibbonator

    Gibbonator

    Joined:
    Jul 27, 2012
    Posts:
    204
    You may be able to do it using skinning by setting up bones and blend weights in script. I imagine Unity's skinning is SSE-optimized, so it may be quite quick.

    For displacement from a texture, you could convert the texture contents into the vertex blend weights by sampling the texture at each vertex in script. Then set up a single bone you can move to get the displacement. Unity also lets you set the quality of skinning per renderer, so you can drop it down to single-bone skinning for better performance.
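
    A rough sketch of how that might be wired up in script (untested; the two-bone layout, the field names, and the GetPixelBilinear sampling are assumptions, not a drop-in solution):

    using UnityEngine;

    // Untested sketch: a "rest" bone and a "displace" bone, where each vertex's
    // weight on the displace bone comes from a heightmap sample. Translating the
    // displace bone then moves each vertex by (height * offset).
    [RequireComponent(typeof(SkinnedMeshRenderer))]
    public class HeightmapBoneDisplace : MonoBehaviour
    {
        public Texture2D heightMap;     // must be marked readable in import settings
        public Transform restBone;      // stays at the mesh origin
        public Transform displaceBone;  // starts at the origin; move it at runtime

        void Start()
        {
            var smr = GetComponent<SkinnedMeshRenderer>();
            Mesh mesh = Instantiate(smr.sharedMesh);

            Vector3[] verts = mesh.vertices;
            Vector2[] uvs = mesh.uv;
            var weights = new BoneWeight[verts.Length];

            for (int i = 0; i < verts.Length; i++)
            {
                // Sample the heightmap at this vertex's UV (unique 0..1 UVs assumed).
                float h = heightMap.GetPixelBilinear(uvs[i].x, uvs[i].y).grayscale;
                weights[i].boneIndex0 = 0; weights[i].weight0 = 1f - h; // rest bone
                weights[i].boneIndex1 = 1; weights[i].weight1 = h;      // displace bone
            }

            mesh.boneWeights = weights;
            mesh.bindposes = new Matrix4x4[]
            {
                restBone.worldToLocalMatrix * transform.localToWorldMatrix,
                displaceBone.worldToLocalMatrix * transform.localToWorldMatrix
            };

            smr.bones = new Transform[] { restBone, displaceBone };
            smr.sharedMesh = mesh;
            smr.quality = SkinQuality.Bone2; // per-renderer skinning quality
        }
    }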
     
    Last edited: Apr 23, 2014
  9. RC-1290

    RC-1290

    Joined:
    Jul 2, 2012
    Posts:
    639
    Because vertex shaders already transform every vertex every frame, and they do it in parallel.
     
  10. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Not least because a GPU handles 1920 * 1080 pixels' worth of calculation, so it's probably capable of far more vertex manipulation than the CPU could dream of, thanks to massively parallel work across shader cores. Last I checked, mine has 2880 cores vs the 8 hyper-threaded cores on my CPU. Naturally the maths has to be reasonable, but yeah.
     
  11. FuzzyQuills

    FuzzyQuills

    Joined:
    Jun 8, 2013
    Posts:
    2,871
    This might be an old post, but... I do have a way of doing vertex displacement on the CPU!
    Note, however: this won't work on any sort of mesh with shared UV coords, the whole mesh's UVs must be in the 0..1 range, and the mesh must have NO HOLES! This method worked really well with a sphere-mapped mesh, if the sphere mapping is done right! ;)

    So, here's how my approach worked. It's done via a script, not a shader, and works really well when you need vertex displacement for terrain generation with a mesh collider (a rough code sketch follows the list):
    (That's specifically why I did this, as I also needed to assign the new mesh to my mesh collider!)
    • First, a gray-scale height-map is passed in as a texture.
    • A copy of the verts, normals, and tris is put into separate variables.
    • I then use a for-loop that steps through the vertices, grabbing the texture coordinates and using them to look up one pixel in the height-map for each vert. To do this, I first work out which pixel coord to sample with Texture2D.GetPixel(): I take the UV coord (must be in the 0..1 range!) and multiply the texture width/height by the UV coordinate to get the X and Y positions.
    • I then sample that pixel, measure its luminosity (also in the 0..1 range), then add [vertex normal * luminosity * displacement amount] to the vertex coordinate.
    • The for-loop does the last step for each vert; after it's done, the new mesh is assigned over the current one and the vertex normals are recalculated.
    • After all this... the mesh is fully displaced, and ready for action.
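
    For reference, a rough sketch of what that loop might look like (untested; the field names and the RoundToInt pixel lookup are assumptions):

    using UnityEngine;

    // Untested sketch of the one-shot CPU displacement described above.
    [RequireComponent(typeof(MeshFilter), typeof(MeshCollider))]
    public class CpuHeightDisplace : MonoBehaviour
    {
        public Texture2D heightMap;    // greyscale height-map, must be readable
        public float displacement = 1f;

        void Start()
        {
            MeshFilter mf = GetComponent<MeshFilter>();
            Mesh mesh = Instantiate(mf.sharedMesh);

            Vector3[] verts = mesh.vertices;
            Vector3[] normals = mesh.normals;
            Vector2[] uvs = mesh.uv;

            for (int i = 0; i < verts.Length; i++)
            {
                // UVs are assumed unique and in the 0..1 range; convert to pixel coords.
                int x = Mathf.RoundToInt(uvs[i].x * (heightMap.width - 1));
                int y = Mathf.RoundToInt(uvs[i].y * (heightMap.height - 1));
                float lum = heightMap.GetPixel(x, y).grayscale;  // 0..1 luminosity

                // Push the vertex out along its normal, scaled by the sampled height.
                verts[i] += normals[i] * lum * displacement;
            }

            mesh.vertices = verts;
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();

            mf.sharedMesh = mesh;
            GetComponent<MeshCollider>().sharedMesh = mesh;  // one-shot, e.g. for terrain
        }
    }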
    Issues with this approach:
    • Requires specific UVs to work properly. (No shared UVs, okay? :D)
    • It's only really reliable for one-shot displacement, for performance reasons... ;) (Example: for a game project I built for a school assignment, I called everything in Start(), and the mesh then stayed the way it was until exit. This works well for terrain!)
    • For finer displacement you should use a high-poly mesh, which might not be a valid option depending on the platform. (One around 6k verts should be good enough, but for larger worlds you might need more.)
    • Might look odd on anything other than a terrain or planet (Yes, I said planet! ;))
    Other than that, the approach, if used creatively, can yield really nice results.

    Regards

    -FuzzyQuills