I am trying to create a post-processing rim highlight image effect using the normals produced by a camera's DepthTextureMode.DepthNormals setting. The camera is a temporary camera created at runtime that renders only one layer, the highlight layer; objects to be highlighted are added to this layer. I thought I could simply take the dot product of these normals and a (0, 0, 1) vector and be home free, but these normals are in view space and rotate along with the camera, shifting the rim highlight as the camera rotates. As a result, things only look right when the highlighted objects are in the middle of the screen. Here's a screenshot:

Here are trimmed-down versions of the scripts:

Code (CSharp):

// HighlightPostEffect.cs
[RequireComponent(typeof(Camera))]
public class HighlightPostEffect : MonoBehaviour
{
    public Color highlightColor = Color.blue;

    private Camera attachedCamera;
    private Camera tempCam;
    private int layerMask = -1;
    private Material material;

    void Start()
    {
        layerMask = LayerMask.GetMask("Highlight");
        attachedCamera = GetComponent<Camera>();
        tempCam = new GameObject("Temp Camera").AddComponent<Camera>();
        tempCam.transform.SetParent(transform, false);
        tempCam.CopyFrom(attachedCamera);
        tempCam.cullingMask = layerMask;
        tempCam.depthTextureMode = DepthTextureMode.DepthNormals;
        tempCam.enabled = false;
        material = new Material(Shader.Find("Hidden/HighlightEffectShader"));
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        tempCam.Render();
        material.color = highlightColor;
        Graphics.Blit(source, destination, material);
    }
}

// HighlightEffectShader.shader
Shader "Hidden/HighlightEffectShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always
        Lighting Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.uv;
                return o;
            }

            sampler2D _MainTex;
            fixed4 _Color;
            sampler2D _LastCameraDepthNormalsTexture;

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 imgCol = tex2D(_MainTex, i.uv);
                float4 tempNorms = tex2D(_LastCameraDepthNormalsTexture, i.uv);
                float3 normals = DecodeViewNormalStereo(tempNorms);
                float rim = pow(1 - dot(float3(0, 0, 1), normals), 1.5);
                return lerp(imgCol, _Color, rim);
            }
            ENDCG
        }
    }
}

My question is: do I have to convert the normals from view space to some other space? If so, how? If not, what do I have to do to keep the rim highlight centered?

Note: I already made a version that correctly calculates the rim highlight in a second shader that the temp cam renders with, using the regular method (dot product of the camera direction and the mesh normals), but that approach doesn't work with meshes whose vertices are animated by their shader, e.g. swaying vegetation.
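To make the problem concrete, here's a toy sketch (plain Python, not shader code, just illustrating what I think is happening): the view-space normal is the world normal rotated by the camera's orientation, so as the camera turns, the same surface scores differently against (0, 0, 1).

```python
import math

def yaw(v, deg):
    # Rotate a vector around the Y axis -- a stand-in for the
    # world-to-view rotation applied by a camera that turns.
    a = math.radians(deg)
    x, y, z = v
    return (x * math.cos(a) - z * math.sin(a), y,
            x * math.sin(a) + z * math.cos(a))

world_normal = (0.0, 0.0, 1.0)  # a surface facing the camera at yaw 0

# The same surface normal lands in different view-space directions as
# the camera yaws, so dot(normal, (0, 0, 1)) -- and the rim term -- shifts.
for deg in (0, 45, 90):
    n = yaw(world_normal, deg)
    print(deg, round(n[2], 3))
```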
You need the dot product of the normalized view-space position of the pixel and the view normal. In your vertex shader, calculate the view-space position and pass it to the fragment shader:

Code (CSharp):

float3 viewPos : TEXCOORD1;
...
o.viewPos = mul(UNITY_MATRIX_MV, v.vertex).xyz;
...
dot(normalize(i.viewPos), normals)
Didn't work. This is the result:

Shader code modifications:

Code (CSharp):

struct v2f
{
    float2 uv : TEXCOORD0;
    float3 viewPos : TEXCOORD1;
    float4 vertex : SV_POSITION;
};

v2f vert (appdata v)
{
    v2f o;
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.uv;
    o.viewPos = mul(UNITY_MATRIX_MV, v.vertex).xyz;
    return o;
}

sampler2D _MainTex;
fixed4 _Color;
sampler2D _LastCameraDepthNormalsTexture;

fixed4 frag (v2f i) : SV_Target
{
    fixed4 imgCol = tex2D(_MainTex, i.uv);
    float4 tempNorms = tex2D(_LastCameraDepthNormalsTexture, i.uv);
    float depth;
    float3 normals;
    DecodeDepthNormal(tempNorms, depth, normals);
    float mask = ceil(1 - depth);
    float rim = pow(1 - dot(normalize(i.viewPos), normals), 1.5);
    return lerp(imgCol, _Color, rim * mask);
}
Not sure how this will work out, but try this:

Code (CSharp):

struct v2f
{
    float2 uv : TEXCOORD0;
    float3 viewPos : TEXCOORD1;
    float4 vertex : SV_POSITION;
};

v2f vert (appdata v)
{
    v2f o;
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.uv;
    o.viewPos = WorldSpaceViewDir(v.vertex);
    return o;
}

sampler2D _MainTex;
fixed4 _Color;
sampler2D _LastCameraDepthNormalsTexture;

fixed4 frag (v2f i) : SV_Target
{
    fixed4 imgCol = tex2D(_MainTex, i.uv);
    float4 tempNorms = tex2D(_LastCameraDepthNormalsTexture, i.uv);
    float depth;
    float3 normals;
    DecodeDepthNormal(tempNorms, depth, normals);
    float mask = ceil(1 - depth);
    float rim = pow(1 - dot(normalize(i.viewPos), normals), 1.5);
    return lerp(imgCol, _Color, rim * mask);
}

Basically, use the world-space view direction. Since world-space vectors change in view space, this may correct it. (Normally, if you were using the object's actual normals instead of view-space ones, ObjSpaceViewDir would work.)

EDIT: If you get the opposite effect, i.e. it looks like a light is shining from the camera, just invert the result.
Ah, because it's an image effect, which has some weird extra stuff you have to do. Try:

Code (CSharp):

o.viewPos *= float3(1, -1, -1);
I've figured out a way that kinda works. It's not perfect, but it's much better than before. The trick was to calculate the view direction's xy coords from the uvs remapped to a -1..1 range. Screenshots:

Here's the code:

Code (CSharp):

fixed4 frag (v2f i) : SV_Target
{
    fixed4 imgCol = tex2D(_MainTex, i.uv);
    float4 tempNorms = tex2D(_LastCameraDepthNormalsTexture, i.uv);
    float depth;
    float3 normals;
    DecodeDepthNormal(tempNorms, depth, normals);
    float mask = ceil(1 - depth);
    // remap x,y uv coords from 0..1 to -1..1
    float3 viewDir = -normalize(float3((i.uv * 2 - 1), -1));
    float rim = pow(1 - dot(viewDir, normals), 1.5);
    return lerp(imgCol, _Color, rim * mask);
}

If anyone has a better way to do this, feel free to share.
That's a good approximation if your fov is close to 90 degrees (in fact it's exactly the same at the center horizontal line). I'm a little confused as to why it's not working for you though since that would be exactly what Unity's own shaders do. With the last modification I suggested, when you say "that didn't work", was it just a solid white, or a similar sideways angle?
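To back up the fov claim with numbers, a quick check (plain Python, just verifying the math, not Unity code): the diagonal terms of a Unity-style perspective projection are _11 = 1/(aspect * tan(fov/2)) and _22 = 1/tan(fov/2), so at a 90 degree fov with square aspect both are 1 and dividing by them is a no-op.

```python
import math

def projection_diag(fov_deg, aspect):
    # Diagonal terms of a Unity-style perspective projection matrix
    # (what unity_CameraProjection._11 and ._22 hold for these settings).
    t = math.tan(math.radians(fov_deg) / 2.0)
    return 1.0 / (aspect * t), 1.0 / t

# At 90 degrees / aspect 1, tan(45 deg) = 1: dividing the remapped uvs
# by these terms changes nothing, so the plain uv remap is exact.
print(projection_diag(90, 1.0))

# At a more typical 60 degree fov the terms are ~1.73, so the
# approximation drifts toward the screen edges.
print(projection_diag(60, 1.0))
```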
It was a similar sideways angle, but rotated 90 degrees and with a little more white. I might be wrong, but I think the reason your method didn't work is that screen effect shaders are essentially shaders for flat 2D planes, with the origin in the bottom left. So v.vertex is actually a 2D vector (in object space) where the z component is 0, e.g. (x, y, 0, w). If I use that in my dot product, it will always be a vector parallel to the screen (perpendicular to the camera's forward axis). What I needed was a vector pointing out of the screen to compare against the normals, which also point out of the screen. Multiplying o.viewPos by (1, -1, -1) only seemed to mirror that point vertically.
Your reasoning is both right and wrong. The problem isn't that it's a 2D plane; rendering on a GPU is still fundamentally in 3D. The real reason is something I hadn't realized about how Unity constructs its image-effect view and projection matrices. During an image effect pass, all of the "extra" matrices usually around for converting things from world or model space into projection space aren't really "useful" in the same way. Essentially they have just enough data to get a known quad to cover the camera's view and no more. The result is that the "view position" calculated from UNITY_MATRIX_MV (or any of the built-in functions for getting view or world position) is basically the same as the uvs. In effect, they are "2D". Luckily there is still a matrix we can use to get a valid view projection: unity_CameraProjection. Credit should go mainly to Keijiro Takahashi (of Unity Japan) for this solution.

Code (CSharp):

Shader "Hidden/ViewNormalsImageEffectShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _CameraDepthNormalsTexture;

            fixed4 frag (v2f_img i) : SV_Target
            {
                fixed4 packedDepthNormals = tex2D(_CameraDepthNormalsTexture, i.uv);
                float depth;
                float3 normals;
                DecodeDepthNormal(packedDepthNormals, depth, normals);
                // get the perspective projection
                float2 p11_22 = float2(unity_CameraProjection._11, unity_CameraProjection._22);
                // convert the uvs into view space by "undoing" projection
                float3 viewDir = -normalize(float3((i.uv * 2 - 1) / p11_22, -1));
                float fresnel = 1.0 - dot(viewDir.xyz, normals);
                return float4(fresnel, fresnel, fresnel, 1);
            }
            ENDCG
        }
    }
}
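As a sanity check on the "undoing projection" step, here's the same math mirrored in plain Python (nothing Unity-specific; the fov/aspect values are arbitrary examples): dividing the remapped uv by the projection diagonals turns a screen position into a view-space ray slope, and at the screen edge that slope is exactly the frustum boundary, tan(fov/2).

```python
import math

fov_deg, aspect = 60.0, 16.0 / 9.0
t = math.tan(math.radians(fov_deg) / 2.0)

# Unity-style projection diagonals for these camera settings
p11 = 1.0 / (aspect * t)   # what unity_CameraProjection._11 would hold
p22 = 1.0 / t              # what unity_CameraProjection._22 would hold

def view_ray(u, v):
    # Mirror of the shader line:
    # float3 viewDir = -normalize(float3((i.uv * 2 - 1) / p11_22, -1));
    x = (u * 2 - 1) / p11
    y = (v * 2 - 1) / p22
    n = math.sqrt(x * x + y * y + 1)
    return (-x / n, -y / n, 1 / n)

# At the top edge of the screen (u = 0.5, v = 1) the ray's vertical
# slope |y/z| equals tan(fov/2): the ray grazes the frustum boundary.
x, y, z = view_ray(0.5, 1.0)
print(abs(y / z), t)
```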