
What does the function ComputeScreenPos (in UnityCG.cginc) do?

Discussion in 'Shaders' started by JohnSonLi, Jan 30, 2015.

  1. JohnSonLi

    Joined:
    Apr 15, 2012
    Posts:
    586
    Code (csharp):

    #define V2F_SCREEN_TYPE float4
    inline float4 ComputeScreenPos (float4 pos) {
      float4 o = pos * 0.5f; // why multiply by .5f?
      #if defined(UNITY_HALF_TEXEL_OFFSET)
      o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w * _ScreenParams.zw;
      #else
      o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
      #endif

      #if defined(SHADER_API_FLASH)
      o.xy *= unity_NPOTScale.xy;
      #endif

      o.zw = pos.zw;
      return o;
    }
    This is from a sample shader in AngryBots: RealtimeReflectionInWaterFlow.shader.
     
    dog_funtom likes this.
  2. Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Given a position in projection/camera space (I think - essentially o.pos in the vertex shader), it returns the position of that point on the screen, with the bottom left being (0,0) and the top right being (1,1).
     
  3. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    I was actually using ComputeScreenPos just today, and in my case the result was 10 times as high, so the bottom left was (0,0) and the top right (10,10). I'm pretty sure that's not the intended result, but that's what happened.

    So my code now actually looks something like this:
    Code (csharp):

    struct v2f {
       float4 pos_clip : SV_POSITION;
       float2 uv0 : TEXCOORD0;
    };

    v2f vert(appdata_base v) {
       v2f o;
       o.pos_clip = mul(UNITY_MATRIX_MVP, v.vertex);
       o.uv0 = ComputeScreenPos(o.pos_clip) / 10.0;
       return o;
    }

    float4 frag(v2f i) : COLOR {
       float4 input1 = tex2D(_MainTex, i.uv0);
       // ...
    And that maps _MainTex to the screen perfectly.
     
  4. Glurth

    Joined:
    Dec 29, 2014
    Posts:
    109
    C:\Program Files\Unity\Editor\Data\CGIncludes

    That folder contains the unitycg.cginc file.

    You can open up the file in any text editor to see the actual code that gets compiled into your shader.
    For ComputeScreenPos I see:
    Code (CSharp):

    inline float4 ComputeScreenPos (float4 pos) {
        float4 o = pos * 0.5f;
        #if defined(UNITY_HALF_TEXEL_OFFSET)
        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w * _ScreenParams.zw;
        #else
        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
        #endif

        o.zw = pos.zw;
        return o;
    }
    By the way... note there is no division by w (to normalize the XYZ coordinates). I do this myself before passing in the parameter, e.g. pos /= pos.w; (Perhaps your w happens to be 10, jvo3dc?) Full disclosure, I'm not sure if this is correct! I found this post trying to confirm.
    Also note there is no use of _ScreenParams.x or .y, so it doesn't seem to be outputting pixel coordinates.
     
    Last edited: Aug 12, 2016
    CauseMoss and AndreiMarian like this.
  5. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    You should be dividing .xy / .w in the pixel shader for anything that's not perfectly parallel to the camera plane (pretty much anything that isn't an image effect). Doing the divide in the vertex shader will cause warping.
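    For illustration, a minimal sketch of the two options (assuming a v2f that carries the float4 screenPos from ComputeScreenPos):
    Code (CSharp):

    // in the vertex shader - only safe for image effects / geometry parallel to the screen:
    float4 sp = ComputeScreenPos(o.pos);
    o.screenUV = sp.xy / sp.w; // a pre-divided value warps when interpolated across angled geometry

    // in the fragment shader - correct for any orientation:
    float2 screenUV = i.screenPos.xy / i.screenPos.w;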
     
    cecarlsen, frabuondi, shegway and 3 others like this.
  6. Glurth

    Joined:
    Dec 29, 2014
    Posts:
    109
    Oh! So I should NOT divide .zw by .w? (I thought I was supposed to, because it yields w=1 and thus "normalized coordinates".)
     
  7. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    To be clear, the proper use is:
    Code (CSharp):

    // vertex shader
    o.pos = UnityObjectToClipPos(v.vertex.xyz);
    o.screenPos = ComputeScreenPos(o.pos); // using the UnityCG.cginc version unmodified

    // fragment shader
    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    You can do the screenPos.xy / screenPos.w in the vertex shader in the case of image effects, or anything perfectly flat and not angled away from the camera at all, which would likely solve @jvo3dc's issue in a generic way.
     
  8. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Wow, I made a beginner mistake there. I probably figured it would do the w divide for me, but that is obviously not possible in the vertex shader. I don't think my w "happens" to be 10, I'm willing to bet on it ;-)

    It was probably the name ComputeScreenPos that misled me there. It would be friendlier to call it ComputeScreenPosVert and then add a ComputeScreenPosFragment that does the w divide.
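    Such a helper is not part of UnityCG.cginc, but as a sketch the fragment-side half might look like:
    Code (CSharp):

    // hypothetical helper, not in UnityCG.cginc - just illustrating the naming idea
    inline float2 ComputeScreenPosFragment (float4 screenPos) {
        return screenPos.xy / screenPos.w; // the per-pixel perspective divide, giving [0,1] screen UVs
    }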
     
  9. Glurth

    Joined:
    Dec 29, 2014
    Posts:
    109
    @jvo3dc agreed, a better name would certainly help. Specifying what the output ACTUALLY IS, in the documentation, would also be useful. Alas, no such luck.
     
  10. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    ComputeScreenPos() will not divide the input's xy by w, because ComputeScreenPos() expects you to sample the texture in the fragment shader using tex2Dproj(float4).
    tex2Dproj() is similar to tex2D(); it just divides the input's xy by w in hardware before sampling, which is much faster than doing the division yourself in the fragment shader (always correct, but slow) or in the vertex shader (incorrect if the polygon is not facing the camera directly).

    ComputeScreenPos() just transforms the input from a clip-space vertex position in [-w,w] into [0,w];
    calling tex2Dproj() then transforms [0,w] into [0,1], which is a valid texture sampling value.

    -----------------------------
    @bgolus points out below that tex2Dproj() will not help performance, as it is just a wrapper in most cases. I do not have enough knowledge to tell whether that is right or wrong, so I will leave this note here.
    From my experience, HLSL compiled to GLSL for GLES2 is not a wrapper, while for GLES3 it is.
    The compiled GLSL code is in the replies below.
     
    Last edited: Aug 16, 2016
  11. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    If anyone is confused by the coordinates, here is a list showing what is inside a vertex position at each stage:
    [image: table of the vertex position's value ranges at each stage of the transform pipeline]

    *The above image is for DirectX; in OpenGL the NDC-space z range is different:
    - NDC-space z range for DirectX is [0,1]
    - NDC-space z range for OpenGL is [-1,1]

    *The actual clipping happens in clip space, not in NDC space (after MVP, but before the w division). The reason is to reduce the number of divisions (which are expensive) in hardware: whenever the hardware does the "w division" for a vertex (clip space -> NDC space), that vertex MUST be visible, so the hardware never does a useless w division.
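    A rough sketch of the chain in shader terms (variable names are illustrative, and the matrix macros are from the modern built-in pipeline - older Unity versions name them differently):
    Code (CSharp):

    float4 posOS = v.vertex;                        // object (model) space
    float4 posWS = mul(unity_ObjectToWorld, posOS); // world space
    float4 posVS = mul(UNITY_MATRIX_V, posWS);      // view (camera) space
    float4 posCS = mul(UNITY_MATRIX_P, posVS);      // clip space: x,y in [-w,w]
    float3 ndc   = posCS.xyz / posCS.w;             // NDC after the w division: x,y in [-1,1], z in [0,1] on DirectX
    float2 uv    = ndc.xy * 0.5 + 0.5;              // screen UV in [0,1] (y may need flipping per platform)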
     
    Last edited: Jul 7, 2019
  12. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    Code (CSharp):

    // example of ComputeScreenPos()'s usage
    // remember to sample the texture using tex2Dproj(), not regular tex2D(), in the fragment shader

    // in the vertex shader
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex); // o.vertex.xy is [-w,w]
    // or o.vertex = UnityObjectToClipPos(v.vertex.xyz); // which is the same

    o.uv = ComputeScreenPos(o.vertex); // o.uv.xy is [0,w]

    /////////////////////////////////////////////////////////////////
    // in the fragment shader
    // tex2Dproj will remap from [0,w] to [0/w,w/w] = [0,1] before sampling
    // which, [0,1], is a valid uv value
    fixed4 col = tex2Dproj(_MainScreenRT, i.uv);
    Last edited: Aug 16, 2016
  13. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    ComputeScreenPos() just remaps [-w,w] to [0,w]; it does not do any magic:
    Code (CSharp):

    // float4 pos, the input of ComputeScreenPos(), is [-w,w]
    // usually we pass in the result of the MVP transform directly, just like the reply above
    inline float4 ComputeScreenPos (float4 pos) {
        float4 o = pos * 0.5f; // now o.xy is [-0.5w,0.5w], and o.w is half of pos.w

        // UNITY_HALF_TEXEL_OFFSET is only for DirectX 9, which is quite old in 2016,
        // but Unity still supports it
        #if defined(UNITY_HALF_TEXEL_OFFSET)
        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w * _ScreenParams.zw;
        #else

        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
        // now o.xy is [-0.5w + 0.5w, 0.5w + 0.5w] = [0,w]
        // OpenGL & DirectX have different clip-space y conventions (start from top / start from bottom);
        // o.y*_ProjectionParams.x makes it behave the same on every platform,
        // otherwise you would see the sampled texture flipped upside down
        #endif
        o.zw = pos.zw; // must keep the w, for tex2Dproj() to use
        return o;
    }
     
    Last edited: Aug 16, 2016
  14. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    One minor nitpick: they're actually identical. The tex2Dproj function isn't implemented in hardware, at least not anymore, if it ever was. DX11 doesn't even have an analog to tex2Dproj(), and tex2Dproj and textureProj in DX9 / OpenGL are just wrappers for tex2D(_tex, uv.xy / uv.w).

    Unity even has code for converting tex2Dproj into tex2D calls directly for consoles.

    Here's the compiled DX11 pixel shader for using tex2D(_Tex, uv.xy / uv.w);
    0: div r0.xy, v0.xyxx, v0.wwww
    1: sample o0.xyzw, r0.xyxx, t0.xyzw, s0
    2: ret


    And here's the compiled DX11 pixel shader using tex2Dproj(_Tex, uv.xyzw);
    0: div r0.xy, v0.xyxx, v0.wwww
    1: sample o0.xyzw, r0.xyxx, t0.xyzw, s0
    2: ret


    That said, it's not bad to use them, as it reduces user error.
     
    dog_funtom and shegway like this.
  15. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    You are right!
    My target platforms are OpenGL ES 2.0 & 3.0. I still use tex2Dproj() because I see it can help me avoid a dependent texture read on GLES2. I assume tex2Dproj() causes no harm on other platforms while benefiting GLES2, which is why my reply above said tex2Dproj() is better. (I do not have any proof, please correct me if this is wrong.)

    Code (CSharp):

    // hlsl compiled to glsl for gles 2.0
    // (not sure if it is a wrapper, but this function uses the texcoord directly,
    // so it should not trigger any dependent texture read, which is slow)
    tmpvar_1 = texture2DProj (_MainScreenRT, xlv_TEXCOORD0);

    // hlsl compiled to glsl for gles 3.0 (already acts as a wrapper)
    t0.xy = vs_TEXCOORD0.xy / vs_TEXCOORD0.ww;
    t10_0 = texture(_MainScreenRT, t0.xy);
     
    Last edited: Aug 16, 2016
    bgolus likes this.
  16. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    I didn't expect it to do much magic, but considering it's called ComputeScreenPos I expected something in the 0 to 1 range. I know how it works, so if I had paid some more attention I could have known it would be in the 0 to w range. Still, call it ComputeProjectiveUV() then.

    I moved to doing the perspective divide myself and using tex2D years ago. But that is for desktop development, where I assume SM 3.0 support. For mobile SM 2.0 use it can't hurt to use tex2Dproj to potentially prevent a dependent texture read. I think it was indeed implemented in hardware for desktop at one time too, so it doesn't come as a surprise that it is implemented in hardware on mobile now.
     
    colin299 likes this.
  17. bugsbun

    Joined:
    Jun 26, 2017
    Posts:
    27
    Now, in the newest version of Unity, are the window coordinates normalized? Or am I referring to them in the wrong way to get the depth of a pixel at pixel position (550,550):

    Code (CSharp):

    // ...inside the fragment shader
    float d1;  // depth
    float3 n1; // normal (DecodeDepthNormal outputs a float3 view-space normal)
    // note: use float division; the integer expression 550/1920 would truncate to 0
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, float2(550.0/1920.0, 550.0/1080.0)), d1, n1);
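    For reference, a sketch that avoids hard-coding the resolution by using the built-in _ScreenParams (its xy components hold the render target's width and height in pixels):
    Code (CSharp):

    // convert a pixel coordinate into the normalized [0,1] UV the depth texture expects
    float2 pixel = float2(550.0, 550.0);  // example pixel position
    float2 uv = pixel / _ScreenParams.xy; // divide by render target width/height

    float depth;   // linear 0-1 depth
    float3 normal; // view-space normal
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv), depth, normal);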
     
  18. Deleted User

    Guest

    I didn't read further, so it's probably been said already, but you'll need to convert the vertex position from local space to world space first. I think.

    Edit: I was wrong. But I was less wrong than I used to be, so I almost knew something.

    Toot toot
     
    TorbenDK likes this.
  19. unity_Qei8CQ3D-zymZg

    Joined:
    Jun 20, 2019
    Posts:
    8
    This.... this should be in the Unity Manual front and center.
     
    riveranb and vexe like this.
  20. Lynxed

    Joined:
    Dec 9, 2012
    Posts:
    121
    I wanted to use this to detect if the whole object is near the screen border with this:

    Code (CSharp):

    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        float4 objectClipPos = UnityObjectToClipPos(float4(0,0,0,1));
        o.uv = TRANSFORM_TEX(v.uv, _MainTex);
        o.color = v.color;

        o.screenPos = ComputeScreenPos(objectClipPos);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        float2 screenPos = i.screenPos.xy / i.screenPos.w;

        float a = tex2D(_MainTex, i.uv).a;
        fixed4 col;
        col = screenPos.x;
        return col;
    }
    But it does not work. The object is a plane that always faces the camera (a billboard with a flare texture on it). The goal is to make the whole thing disappear the closer it is to the screen borders.
    How would one detect this? Thank you!
     

  21. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    Code (CSharp):

    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);

        o.uv = TRANSFORM_TEX(v.uv, _MainTex);
        o.color = v.color;

        o.screenPos = ComputeScreenPos(o.vertex); // pass the clip pos to this function
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        float2 screenPos = i.screenPos.xy / i.screenPos.w; // [0,1] uv
        screenPos -= 0.5; // convert to [-0.5,0.5]
        screenPos *= 2;   // convert to [-1,1]
        float xDistanceFromCenter = abs(screenPos.x);
        float yDistanceFromCenter = abs(screenPos.y);

        float border = smoothstep(0.9, 1, xDistanceFromCenter) * smoothstep(0.9, 1, yDistanceFromCenter);
        return border;
    }
    I didn't test the code, but you can try it.
     
  22. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    You're calculating the screen position of each individual pixel rendered, with screenPos.x being the normalized (0.0 to 1.0) horizontal position. Basically that's calculating how close to the left side of the screen the current pixel is, regardless of the rest of the mesh.

    Each pixel only knows about itself. In the vertex shader it only knows about individual vertices.

    If you want to fade out the whole object you'd need to somehow know data about the whole mesh. Passing the size of the mesh's bounds into the shader and calculating the coverage from that data, rather than from the current vertex or fragment, would work.

    The code above just rearranges things to work much as before, getting roughly how close each pixel is to any edge. It still won't hide the entire mesh if any part gets close, which is what I think you want.
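    A sketch of the bounds-based idea (the _BoundsExtents and _FadeMul properties and the C# setup are hypothetical, and the screen-radius estimate is deliberately rough):
    Code (CSharp):

    // C# side (hypothetical): material.SetVector("_BoundsExtents", renderer.bounds.extents);

    // vertex shader: fade based on the pivot's screen position, shrunk by the object's size
    float4 pivotClipPos = UnityObjectToClipPos(float4(0,0,0,1));
    float2 pivotScreen = pivotClipPos.xy / pivotClipPos.w; // -1 to 1 across the screen

    // rough screen-space radius: world-space extents scaled by the projection, divided by depth
    float screenRadius = length(_BoundsExtents.xyz) * UNITY_MATRIX_P._m11 / pivotClipPos.w;

    float2 distToEdge = 1.0 - abs(pivotScreen) - screenRadius; // object edge to screen edge
    float fade = saturate(min(distToEdge.x, distToEdge.y) * _FadeMul);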
     
  23. Lynxed

    Joined:
    Dec 9, 2012
    Posts:
    121
    yeah, i thought float4 objectClipPos = UnityObjectToClipPos(float4(0,0,0,1)); would do the trick, passing the object's origin and converting it to clip space, so i can use it in any pixel. I wanted to fade the mesh only when it's "origin" is close to the border no matter the size.
     
  24. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Oh, whoops, yes, I missed that part of the code. Yes, using UnityObjectToClipPos(float4(0,0,0,1)) should get you the screen space position of the pivot. In that case @colin299's example code (at least for the fragment shader) would actually work. However it's somewhat unnecessary to do all the work in the fragment shader, since you want the value to be stable across the entire mesh anyway. Try something like this:
    Code (csharp):

    float4 objectClipPos = UnityObjectToClipPos(float4(0,0,0,1));

    // -1 to 1 range edge to edge
    float2 objScreenPos = objectClipPos.xy / objectClipPos.w;

    // 0 to 1 range with 0 at edge, 1 at center
    float2 normDistToEdge = 1 - abs(objScreenPos);

    // get min distance, mul > 1 to make fade happen closer to edge
    float fade = saturate(min(normDistToEdge.x, normDistToEdge.y) * _FadeMul);

    // multiply out color by fade, or however else you want to pass the info to the fragment
    o.color.a *= fade;
    However this assumes the object isn't being batched. If you have more than one of these planes on screen there's a good chance that Unity will dynamically batch them, which means combining those meshes into a single mesh pre-transformed into world space, in which case the data about each individual quad's pivot will be lost. If these are hand-placed quads, you can add "DisableBatching"="True" to the SubShader tags.

    If this is being used on a particle system you'll need to use custom vertex streams to pass the particle position encoded in the mesh's TEXCOORDs, and use that position rather than (0,0,0) as the "local pivot" (see the sketch below). If this is a sprite or UI element then it won't work at all and you'll need to set the pivot value from a C# script, at which point just modifying the component's color directly will probably be faster.
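    For the particle case, a sketch assuming the renderer's Custom Vertex Streams are set to include Center, and that it lands in TEXCOORD1.xyz (the inspector shows the actual packing - adjust the semantic to match):
    Code (csharp):

    struct appdata {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        float3 center : TEXCOORD1; // world-space particle center from custom vertex streams
    };

    v2f vert (appdata v) {
        v2f o; // v2f assumed to carry a float fade : TEXCOORD2
        o.vertex = UnityObjectToClipPos(v.vertex);
        // use the particle center instead of the (lost) object pivot
        float4 centerClipPos = mul(UNITY_MATRIX_VP, float4(v.center, 1.0));
        float2 centerScreen = centerClipPos.xy / centerClipPos.w; // -1 to 1
        float2 normDistToEdge = 1 - abs(centerScreen);
        o.fade = saturate(min(normDistToEdge.x, normDistToEdge.y) * _FadeMul);
        return o;
    }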
     
    Lynxed and colin299 like this.
  25. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    "The goal is to make the whole thing disappear the closer it to screen borders "
    Oh, I didn't read the question carefully, sorry about that.

    In this case, you can calculate everything in the vertex shader, and pass a single float / mul result to any color.a, send it to fragment shader, then do the alpha blending/dither.

    use @bgolus 's answer will do the job.
     
  26. Lynxed

    Joined:
    Dec 9, 2012
    Posts:
    121
    Thank you for very detailed explanation! I'll try this out!
     
  27. veluri

    Joined:
    Feb 21, 2018
    Posts:
    5
    @colin299 & @bgolus, thanks for the amazing explanation. I was struggling for a couple of days to understand what's in screenPos.w.
     
  28. YakShaver_dc

    Joined:
    Mar 27, 2019
    Posts:
    29
  29. WayneJP

    Joined:
    Jun 28, 2019
    Posts:
    44
    Thanks for the amazing explanation.
    I am learning about screen-space textures and found the same code as above.
    So if I understand it right, this code uses (screenPos / w) as the texture coordinates, which is like putting the whole texture on the screen instead of on the model. At that point the model acts like a "mask": what color a fragment renders depends on where on the screen it sits.
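    A minimal sketch of the idea as a self-contained unlit shader (the shader name and texture property are illustrative):
    Code (CSharp):

    Shader "Unlit/ScreenSpaceTexture"
    {
        Properties { _MainTex ("Texture", 2D) = "white" {} }
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float4 screenPos : TEXCOORD0;
                };

                v2f vert (appdata_base v) {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.screenPos = ComputeScreenPos(o.pos); // [0,w]
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target {
                    // per-pixel perspective divide -> [0,1] screen UV
                    float2 uv = i.screenPos.xy / i.screenPos.w;
                    return tex2D(_MainTex, uv); // texture sticks to the screen; the mesh acts as a mask
                }
                ENDCG
            }
        }
    }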
     

    Last edited: Dec 4, 2022
  30. colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    Yes - you can try putting a sphere close to the camera; that "full-screen sphere" should show the same result as a full-screen quad, since the mesh is used like a mask.
     
    WayneJP likes this.