Multiple shader issue

Discussion in 'Shaders' started by 1400883, Nov 23, 2015.

  1. 1400883

    Joined:
    Nov 23, 2015
    Posts:
    4
    I'm very much a beginner with Unity, so please bear with me.

    I have an image I need to apply a number of shaders (or shader passes) to. For instance, I'm looking for a way to implement Canny edge detection, which requires one processing stage's output to be used as input to another stage. I'm also planning to add a feature where you could click on the final image to create and display colored shape outlines over it. AFAIK, this requires yet another layer of shader use to reach maximum efficiency.

    Shaders will be written in GLSL. I'm quite confused about what the preferred way to do this would be. I've searched for info on how to write multipass GLSL shaders in Unity, but there seem to be practically no examples available. This thread from two years back kind of implies that chaining shaders is, or at least was, not directly possible. A few discussions that came up suggested using several render textures and cameras to do step-by-step processing. I'm baffled. If I have, say, 5 stages of processing to do, how should I proceed?
     
  2. StevenGerrard

    Joined:
    Jun 1, 2015
    Posts:
    97
    => First, I suggest you use Cg instead of GLSL.
    => It is very easy to implement a "post-process chain" in Unity. You just need to prepare several render targets (you do not need several cameras).
    => Then, in the "OnRenderImage" callback of a camera script, you do something like this:
    Graphics.Blit (source, tempRT1, yourMat, yourPass);
    Graphics.Blit (tempRT1, tempRT2, yourMat, yourPass);
    Graphics.Blit (tempRT2, tempRT1, yourMat, yourPass);
    ...
    Graphics.Blit (tempRT1, destination, yourMat, yourPass);
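
    Put together, a complete version of that callback might look something like the sketch below. It assumes a single material whose shader has one pass per stage (the class and field names here are made up), and uses temporary render textures so the intermediate buffers don't have to be managed by hand:

    Code (CSharp):
    using UnityEngine;

    // Attach to a camera; chains several shader passes over the camera image.
    public class MultiPassEffect : MonoBehaviour
    {
        public Material effectMaterial; // shader with passes 0..2 (hypothetical)

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Temporary buffers for the intermediate stages.
            RenderTexture tempRT1 = RenderTexture.GetTemporary(source.width, source.height);
            RenderTexture tempRT2 = RenderTexture.GetTemporary(source.width, source.height);

            Graphics.Blit(source, tempRT1, effectMaterial, 0);      // stage 1
            Graphics.Blit(tempRT1, tempRT2, effectMaterial, 1);     // stage 2
            Graphics.Blit(tempRT2, destination, effectMaterial, 2); // final stage

            RenderTexture.ReleaseTemporary(tempRT1);
            RenderTexture.ReleaseTemporary(tempRT2);
        }
    }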
     
    AcidArrow likes this.
  3. 1400883

    Joined:
    Nov 23, 2015
    Posts:
    4
    I knew I forgot to add something: the reason for using GLSL is that the project will eventually be built into a WebGL version. I'll look into Graphics.Blit(); it seems promising.
     
  4. 1400883

    Joined:
    Nov 23, 2015
    Posts:
    4
    I'm having trouble getting Blit to work. I'm sure it's just my misunderstanding.

    Here's the example setting:
    I've got a source plane in the scene. The plane's material has the raw source image. A camera is looking at the plane. I've got the following code in the script attached to the camera.

    Code (CSharp):
    public Material m;
    public RenderTexture rt;
    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        Graphics.Blit(src, rt, m);
    }
    A shader is attached to the material. Now when I play the project, I get some feedback from the shader (for instance, a brightness setting), which tells me the shader is being applied, but I don't see pixels from the original image, only a single solid color. Also, the target render texture remains completely white no matter what. From the Blit documentation I understand Unity should automatically bind the source image to _MainTex on the material, i.e. to be used by the shader. In the shader code, I'm declaring uniform sampler2D _MainTex; in the fragment shader. Having _MainTex included or excluded in the shader properties doesn't make a difference.

    What am I doing wrong? Maybe getting the position from gl_MultiTexCoord0 in the vertex shader is not valid here?
     
  5. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    No, it's not. You need to do projective texturing here.
    Code (csharp):
    vert {
        output.pos_clip = mul(UNITY_MATRIX_MVP, float4(pos_object, 1));
        output.uv_screen = ComputeScreenPos(output.pos_clip); // float4
    }

    frag {
        float2 uv_screen = input.uv_screen.xy / input.uv_screen.w;
    }
     
  6. 1400883

    Joined:
    Nov 23, 2015
    Posts:
    4
    Hmmm... I wonder how these translate into GLSL. My vertex and fragment shader positioning code ATM is

    Code (GLSL):
    GLSLPROGRAM
      varying vec2 texCoord;
      // Vertex shader
      #ifdef VERTEX
        void main()
        {
          gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
          texCoord = gl_MultiTexCoord0.xy;
        }
      #endif

      // Fragment shader
      #ifdef FRAGMENT
        uniform sampler2D _MainTex;
        void main()
        {
          gl_FragColor = texture2D(_MainTex, texCoord);
        }
      #endif
    ENDGLSL

    Setting gl_Position like this is equivalent to the output.pos_clip assignment, right?

    I can't seem to find any documentation for a uv_screen() function.

    ComputeScreenPos() seems to divide the position vector components by 2 and do... something else too. Then, in the fragment shader, the fragment position (== texCoord?) is divided by its w component; does that apply to the y component too?

    I do have a serious gap in my understanding of how projections across coordinate spaces are supposed to work here. I'm pretty much OK with that, though, as I'd (hopefully) only need to get this particular projection working to finish the project. Any good sources a noob could read to pick up just the minimum needed to decipher the given example code?
     
  7. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    So in GLSL you have the uniquely named gl_* variables for things; Cg has some of those too, but they're more abstracted away. The pos_clip and uv_screen variables are just generic variables that have been assigned meaning via a struct @jvo3dc didn't show in his example. In this case it probably looked like this:
    Code (ShaderLab):
    struct vertexOutput {
      float4 pos_clip : SV_POSITION; // makes pos_clip the clip space position, the equivalent of gl_Position
      float4 uv_screen : TEXCOORD0;  // makes uv_screen one of the interpolated vertex properties, like varying vec4 uv_screen
    };
    That struct is then the output of the vertex function.

    I highly suggest you look at the vert / frag examples in the Unity manual. http://docs.unity3d.com/Manual/SL-VertexFragmentShaderExamples.html
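
    For reference, here's roughly what those pieces look like assembled into a complete minimal shader to use with Graphics.Blit. Treat it as a sketch (the shader name is arbitrary); because Blit draws a full-screen quad whose UVs already map to the screen, the plain mesh texture coordinates are enough in that particular case:

    Code (ShaderLab):
    Shader "Hidden/BlitPassthrough" {
      Properties {
        _MainTex ("Texture", 2D) = "white" {}
      }
      SubShader {
        Pass {
          CGPROGRAM
          #pragma vertex vert
          #pragma fragment frag
          #include "UnityCG.cginc"

          sampler2D _MainTex;

          struct v2f {
            float4 pos_clip : SV_POSITION;
            float2 uv : TEXCOORD0;
          };

          // appdata_img (from UnityCG.cginc) supplies the vertex position and UV
          v2f vert (appdata_img v) {
            v2f o;
            o.pos_clip = mul(UNITY_MATRIX_MVP, v.vertex);
            o.uv = v.texcoord;
            return o;
          }

          fixed4 frag (v2f i) : SV_Target {
            return tex2D(_MainTex, i.uv); // sample the image Blit bound to _MainTex
          }
          ENDCG
        }
      }
    }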
     
    Last edited: Nov 25, 2015
  8. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Yes, sorry about that, bgolus is correct. They are just variables and can be named anything, but they are linked to specific registers.

    This is very off-topic, but I really dislike naming everything just "position". That's why I always include the space that the position, normal, or other vector is in (pos_object, pos_world, pos_view, pos_clip). I also dislike naming a matrix UNITY_MATRIX_MVP; my name for that one is object_to_clip. So you get:
    Code (csharp):
    float4 pos_clip = mul(object_to_clip, pos_object);
    float4 pos_world = mul(object_to_world, pos_object);
    pos_clip = mul(world_to_clip, pos_world);