Unity Community



  1. Location
    Paris
    Posts
    3,730

    Something I can't understand with Alpha Blend limitations

    Hello,

    I can't understand why we are forced to use AlphaTest in order to display proper overlapping semi-transparency at different depth values, when the Unity layer system can manage that kind of trick very efficiently with cameras...

    AlphaTest is very expensive: it eats up to 15 fps in some of my scenes compared to a simple Blend SrcAlpha, as I do my depth-correct alpha rendering with 2 passes, each containing an AlphaTest (as the docs explain, one above a cutoff value, one below).

    To make it short: is there a trick to properly render overlapping semi-transparent textures without AlphaTest?


    Thank you for your attention


  2. Location
    GMA950
    Posts
    3,031
    This is simply a limitation of the way OpenGL/D3D render transparent triangles. Everything has to be drawn back-to-front, which is difficult to do if two models intersect. By default, Unity only orders transparent meshes by their origins' distance from the camera. Perfect ordering would require not only ordering every transparent triangle (usually an unreasonably processor- and draw call-intensive approach), but splitting intersecting triangles such that every fragment is drawn in the correct order.

    Even the technique at the end of the Alpha Testing page doesn't solve the problem, as it uses a combination of two approaches: a pass with ZWrite enabled that doesn't blend at all, and a pass that does alpha blending. Each part still suffers from its limitations: the ZWrite pass is not transparent, and the alpha blended pass can appear out-of-order when there is intersecting geometry.


  3. Location
    Paris
    Posts
    3,730
    Then it's hopeless?

    OK then, thank you Daniel


  4. Location
    GMA950
    Posts
    3,031
    It's not hopeless, it's just a difficult problem to solve in the general case. If you provide some detail about the situation in which you'd like to get geometry drawn correctly, there might be a simple solution for your case.


  5. Location
    Paris
    Posts
    3,730
    Ah, that's cool.

    So here we go:

    I have a full environment using a texture with semi- or fully-transparent alpha portions. I simply want these alpha portions to be rendered with the alpha transparency set in the PNG: e.g. 50% alpha would be half transparent against meshes behind it, 100% totally opaque.

    Currently I can do it with a shader that uses 2 passes: one for the opaque pixels, and one for the semi-/fully-transparent pixels. It works, but I'm forced to use AlphaTest, which consumes a lot of horsepower compared to a simple "Blend SrcAlpha OneMinusSrcAlpha".

    What I'm targeting is one pass with no AlphaTest.

    It is targeted at the iPhone, so no Cg fragment programs are allowed.

    I'm still researching a solution.
    Speaking of which, I found a way to remove one AlphaTest from the Vegetation shader in the Unity docs:

    Here is the initial code:

    Code:
    Shader "Vegetation" {
        Properties {
            _Color ("Main Color", Color) = (.5, .5, .5, .5)
            _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
            _Cutoff ("Base Alpha cutoff", Range (0,.9)) = .5
        }
        SubShader {
            // Set up basic lighting
            Material {
                Diffuse [_Color]
                Ambient [_Color]
            }
            Lighting On

            // Render both front and back facing polygons.
            Cull Off

            // First pass:
            //   render any pixels that are more than [_Cutoff] opaque.
            Pass {
                AlphaTest Greater [_Cutoff]
                SetTexture [_MainTex] {
                    combine texture * primary, texture
                }
            }

            // Second pass:
            //   render in the semitransparent details.
            Pass {
                // Don't write to the depth buffer.
                ZWrite Off
                // Don't draw over pixels we have already drawn.
                ZTest Less
                // Only render pixels with alpha less than or equal to the cutoff.
                AlphaTest LEqual [_Cutoff]

                // Set up alpha blending.
                Blend SrcAlpha OneMinusSrcAlpha

                SetTexture [_MainTex] {
                    combine texture * primary, texture
                }
            }
        }
    }

    Here is the modified shader:

    Code:
    Shader "Vegetation" {
        Properties {
            _Color ("Main Color", Color) = (.5, .5, .5, .5)
            _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
            _Cutoff ("Base Alpha cutoff", Range (0,.9)) = .5
        }
        SubShader {
            // Set up basic lighting
            Material {
                Diffuse [_Color]
                Ambient [_Color]
            }
            Lighting On

            // Render both front and back facing polygons.
            Cull Off

            // First pass:
            //   render any pixels that are more than 0.95 opaque.
            Pass {
                AlphaTest GEqual 0.95
                SetTexture [_MainTex] {
                    combine texture * primary, texture
                }
            }

            // Second pass:
            //   render in the semitransparent details.
            Pass {
                // Don't write to the depth buffer.
                ZWrite Off
                // Don't draw over pixels we have already drawn.
                ZTest Less

                // Set up alpha blending.
                Blend SrcAlpha OneMinusSrcAlpha

                SetTexture [_MainTex] {
                    combine texture * primary, texture
                }
            }
        }
    }

    Setting the first pass's AlphaTest to a high value lets us remove the one in the second pass.

    At least it works for me, and it saves a bit of horsepower.


  6. Location
    GMA950
    Posts
    3,031
    I'm more curious about the geometry in question. Without knowing what your triangles look like, it's hard to suggest a shader-based solution.


  7. Location
    Paris
    Posts
    3,730
    Ah OK, the geometry varies: anything from simple quads to hemispheres. Textures are placed on them, with semi-transparent parts.
    Unfortunately I'm limited in terms of triangle budget, and cannot change this geometry.

    Plus some textures are unreproducible with meshes because they're too complex, like dozens of humans, rain, or destroyed buildings.

    I will post a screenshot this noon to be as clear as possible.

    Thank you again


  8. Location
    Paris
    Posts
    3,730
    Well, a simple example that would be more explicit than a screenshot:

    1) a quad with a circle PNG texture on it. Anything not inside the circle is alpha zero.
    2) a cube with another texture on it. No alpha (we don't need it here for the example).
    3) the quad is in front of the cube.

    I would just want the final render to display a circle in front of the cube:

    a) without alpha testing
    b) possibly in one single pass.
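
    To illustrate, the kind of single-pass shader I have in mind would be something like this (just a sketch; the shader name is made up and there's no lighting set up, but it shows the idea):

    Code:
    Shader "Sketch/SinglePassBlend" {
        Properties {
            _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
        }
        SubShader {
            // Draw with the other transparent objects, after all opaque
            // geometry, so the cube is already in the Z buffer when the
            // quad is blended over it.
            Tags { "Queue" = "Transparent" }
            Pass {
                // Read the Z buffer but don't write to it.
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha
                SetTexture [_MainTex] {
                    combine texture
                }
            }
        }
    }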


    _____________

    That apart, using this example, even though I understand the hardware limitations you specified above, I still can't understand why it would be impossible for the lighting buffer (the "primary" combiner in the texture block) to be faded by the texture's alpha.

    Can't we hack that basic lighting render at all, like we can modify the texture's one?

    It would boost every semi-transparency render by x1.5 at least... And which game doesn't use semi-transparency nowadays? It is such a fundamental graphics feature, I can't understand why it's so complicated to do properly.

    edit: Found this article interesting.
    I don't know if Unity takes that front-to-back uselessness into account.


    edit 2: finally, another article that confirms we shouldn't use AlphaTest.

    Avoid Alpha Test and Discard

    If your application uses an alpha test in OpenGL ES 1.1 or the discard instruction in an OpenGL ES 2.0 fragment shader, some hardware depth-buffer optimizations must be disabled. In particular, this may require a fragment’s color to be calculated completely before being discarded.

    An alternative to using alpha test or discard to kill pixels is to use alpha blending with alpha forced to zero. This can be implemented by looking up an alpha value in a texture. This effectively eliminates any contribution to the framebuffer color while retaining the Z-buffer optimizations. This does change the value stored in the depth buffer.

    If you need to use alpha testing or a discard instruction, you should draw these objects separately in the scene after processing any geometry that does not require it. Place the discard instruction early in the fragment shader to avoid performing calculations whose results are unused.
    This truly means we can replace AlphaTest with Blend, with the same result.
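
    In ShaderLab terms, my reading of Apple's suggestion would be a single pass like this (a sketch, untested; it reuses the _MainTex property from the shaders above):

    Code:
    Shader "Sketch/BlendInsteadOfTest" {
        Properties {
            _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
        }
        SubShader {
            Pass {
                // Keep writing depth, as the alpha-tested pass did.
                ZWrite On
                // Alpha-0 texels blend to nothing instead of being killed
                // by an AlphaTest, so the hardware depth optimizations stay
                // enabled. Caveat from the doc: those invisible texels still
                // write to the depth buffer.
                Blend SrcAlpha OneMinusSrcAlpha
                SetTexture [_MainTex] {
                    combine texture
                }
            }
        }
    }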


  9. Location
    GMA950
    Posts
    3,031
    Quote Originally Posted by n0mad
    Well, a simple example that would be more explicit than a screenshot:

    1) a quad with a circle PNG texture on it. Anything not inside the circle is alpha zero.
    2) a cube with another texture on it. No alpha (we don't need it here for the example).
    3) the quad is in front of the cube.
    Is your geometry actually intersecting? If not, you might just need to use Material.renderQueue to force the drawing order of your objects.
    Quote Originally Posted by n0mad
    This truly means we can replace AlphaTest with Blend, with the same result.
    The advantage that alpha test has is that it won't write anything if the test fails. If you're using the Z buffer for sorting, alpha testing will look right. Alpha blending with ZWrite on will write to the Z buffer for every fragment, meaning that even fully transparent pixels will stop geometry behind them from being rendered later.
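
    For instance, the quad's material could claim a later queue than the cube's, either from a script via Material.renderQueue or with a queue tag in its shader. A sketch (the shader name is illustrative; "Transparent+1" draws after everything in the standard Transparent queue):

    Code:
    Shader "Sketch/DrawLast" {
        Properties {
            _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
        }
        SubShader {
            // Render after the standard Transparent queue, forcing this
            // material to be drawn on top of other transparent objects.
            Tags { "Queue" = "Transparent+1" }
            Pass {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha
                SetTexture [_MainTex] {
                    combine texture
                }
            }
        }
    }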


  10. Location
    Paris
    Posts
    3,730
    The geometry is not intersecting. Material.renderQueue would be an awesome solution, but on the iPhone we have to use as few materials as possible. For example, a whole level will often have only one UV-mapped material, which makes this solution ineffective.

    But I'll keep your advice in mind; it could be really useful in certain situations.

    I'm continuing my research into Apple's blending recommendation.


  11. Location
    Paris
    Posts
    3,730
    From the OpenGL FAQ:

    Quote Originally Posted by Khronos
    15.080 How can I make part of my texture maps transparent or translucent?
    It depends on the effect you're trying to achieve.
    If you want blending to occur after the texture has been applied, then use the OpenGL blending feature. Try this:
    glEnable (GL_BLEND); glBlendFunc (GL_ONE, GL_ONE);
    You might want to use the alpha values that result from texture mapping in the blend function. If so, (GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA) is always a good function to start with.
    However, if you want blending to occur when the primitive is texture mapped (i.e., you want parts of the texture map to allow the underlying color of the primitive to show through), then don't use OpenGL blending. Instead, you'd use glTexEnv(), and set the texture environment mode to GL_BLEND. In this case, you'd want to leave the texture environment color to its default value of (0,0,0,0).
    The interesting part is the glTexEnv() suggestion at the end.

    Where can we find that glTexEnv() functionality in the Unity shader language reference?

    This would be the perfect solution.
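
    As far as I can tell, the fixed-function texture environment is what ShaderLab's SetTexture block configures, so the closest thing I can sketch is a combiner like this (an assumption, not verified; note that GL_BLEND interpolates per channel by the texture color, while ShaderLab's lerp uses the alpha of its source):

    Code:
    Shader "Sketch/TexEnvBlend" {
        Properties {
            _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
        }
        SubShader {
            Tags { "Queue" = "Transparent" }
            Pass {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha
                SetTexture [_MainTex] {
                    // Counterpart of the GL_BLEND environment color.
                    constantColor (0,0,0,0)
                    // Fade the primary (lit vertex) color towards the
                    // constant using the texture's alpha, and pass that
                    // alpha on to the blend stage.
                    combine primary lerp (texture) constant, texture
                }
            }
        }
    }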


  12. Location
    Paris
    Posts
    3,730
    After a bit of research, I found that per-polygon depth sorting is not possible (something that isn't spelled out in the OpenGL or Apple docs).

    So in order to save performance, it would be better to use one pass, no Z writing, "Queue"="Transparent", and detach all the translucent polygons into separate objects, so they are sorted on their own rather than by their container object (a huge cube containing the camera and another object would be displayed behind this object, for example).

    I will keep the thread updated with any performance delta between this method and the classic 2-pass AlphaTest.


    Now another question, though it doesn't have to do with shaders anymore (lol, there should be a "Performance Tweaking" forum):

    Would it be even faster to split those translucent polygons into separate objects, to activate Dynamic Batching?
    (considering Dynamic Batching was not activated before)


  13. Location
    Rio de Janeiro, Brazil
    Posts
    1,021
    Quote Originally Posted by n0mad View Post
    So in order to save performance, it would be better to use one pass, no Z writing, "Queue"="Transparent", and detach all the translucent polygons into separate objects, so they are sorted on their own rather than by their container object (a huge cube containing the camera and another object would be displayed behind this object, for example).
    Hi there!
    I've been having the same hard time in my project, aff!
    Could you elaborate a bit on the quoted part?

    Thanks a lot =)


  14. Location
    Paris
    Posts
    3,730
    Hi

    This is a very old topic! Right now I don't have Z sorting problems anymore, as I'm using surface shaders instead of fixed-function ShaderLab. They seem to manage that part far better (plus years of Unity engine improvements, btw). If you're still experiencing Z fighting, try using a built-in shader, and avoid overlapping transparent objects (or put them at different Z depths).


  15. Location
    Rio de Janeiro, Brazil
    Posts
    1,021
    Quote Originally Posted by n0mad View Post
    Hi

    This is a very old topic! Right now I don't have Z sorting problems anymore, as I'm using surface shaders instead of fixed-function ShaderLab. They seem to manage that part far better (plus years of Unity engine improvements, btw). If you're still experiencing Z fighting, try using a built-in shader, and avoid overlapping transparent objects (or put them at different Z depths).
    Hey, good to know. I'm still having some trouble with that... Is your entire hair one mesh, or have you separated each module to get a batch? That might also help the ordering.
    I've even tried changing the index ordering of the vertices, but it's quite hard to manage in a complex mesh.

    Thanks a lot for your answer.

  16. Super Moderator
    Location
    Great Britain
    Posts
    9,675
    It's also beneficial to split your mesh up into smaller parts if you don't want to fiddle too much, as the origin point of the mesh is used for sorting transparency, so obviously big things will glitch. Splitting them up or using a clever design is an acceptable compromise in a lot of cases.


  17. Location
    Rio de Janeiro, Brazil
    Posts
    1,021
    Quote Originally Posted by hippocoder View Post
    It's also beneficial to split your mesh up into smaller parts if you don't want to fiddle too much, as the origin point of the mesh is used for sorting transparency, so obviously big things will glitch. Splitting them up or using a clever design is an acceptable compromise in a lot of cases.
    Yep, doing some tests right now.
    Can you guys confirm whether the hair mesh's bounding box also has anything to do with the Z depth calculation? I've read somewhere that people were adding far-away vertices to get a bigger bounding volume.

    Thanks a lot for the info!


  18. Location
    Rio de Janeiro, Brazil
    Posts
    1,021
    Based on my last test, it looks like the ordering isn't done based on the mesh pivot, but on the bounding volume center.
    Hope this helps more people.
