
Very simple question about fragment function

Discussion in 'Shaders' started by Deleted User, Jul 12, 2017.

  1. Deleted User (Guest)

    Consider that I am learning and not trying to reach every platform, just PC and maybe Android, weather willing. Please correct any wrong (general) semantics I use with regard to shader code.

    Code (CSharp):
    fixed4 frag (v2f i) : SV_Target // or COLOR...?
    {
        fixed4 c = fixed4(0, 0, 0, 0);
        return c;
    }
    1.) Any reason at all to use the SV_TARGET semantic over the COLOR semantic, or can I just choose?

    2.) The semantic SV_TARGET dictates this fragment function's returned output will be written to the vertex data's (fragment data? - please correct me) primary color field.

    Is it possible to return a struct instead, writing to more than one data field of the vertex? If so, what would replace SV_TARGET, COLOR, TEXCOORD, or any other single semantic token?
     
  2. bgolus (Joined: Dec 7, 2012, Posts: 12,343)

    SV_Target is the semantic used by DX10+ for fragment shader color output. COLOR is used by DX9 as the fragment output semantic, but it's also used by several shader languages for mesh data and vertex output semantics. Using SV_Target means Unity's cross compilers can simply replace every instance of SV_Target with whatever each shader language needs for its fragment output, whereas if you use COLOR the shader will only really work for DX9 (plus OpenGL, which also doesn't use COLOR for the fragment output but whose cross compiler is smarter than plain text replacement, and DX11, which can still take and render DX9 shaders).

    TLDR; Always use SV_Target inside CGPROGRAM blocks.
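
    For reference, a minimal fragment function using that semantic might look like this (just a sketch; the v2f struct is assumed to be declared elsewhere in the shader):

    Code (CSharp):
    // Recommended: SV_Target, which Unity's cross compilers translate
    // into whatever each target API expects for the fragment color output.
    fixed4 frag (v2f i) : SV_Target
    {
        return fixed4(1, 1, 1, 1); // plain white, just as a placeholder
    }
    // Declaring this as ": COLOR" would also compile, but only really targets DX9.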

    The fragment shader runs after the vertex shader and never writes to vertex data. Ever. A fragment shader (or pixel shader) can only write to a render target of some kind.

    The order goes like this:

    application data:
    Vertex positions, vertex uvs, vertex colors, etc. & material properties, transforms, etc. & texture data, etc.
    vertex shader:
    Takes data from each vertex individually, as well as various shader uniforms, and outputs data in the form of a clip space position and usually texture coordinates or other data.
    "v2f":
    Output vertex data gets interpolated and rasterized.
    fragment shader:
    Takes the interpolated data and does per pixel calculations to determine the output color.
    "render texture":
    The output from the fragment shader is stored in a texture buffer (or several) of some form. In the simplest case this is the GPU's frame buffer, which is basically just a special render texture the GPU knows to show on the monitor.
    There can be more steps between the vertex shader and the fragment shader, and technically the fragment shader doesn't have to output anything, but the vertex shader does always have to output a clip space position, and the fragment shader cannot write to the vertices. The best you can do is output to a render texture that you then map back to the vertices; usually if you're going to do that, you'll have your vertex shader output its clip space position based on the UVs rather than the vertex positions. You can also absolutely output data other than "color" from a fragment shader, but it will always be stored in a render texture as a "color" of some kind, via the SV_Target "color" semantic.
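
    To put those stages into code, here's a rough sketch of what an unlit Unity shader's CGPROGRAM body typically looks like (names like appdata, v2f, and _MainTex just follow Unity's usual template conventions and are only illustrative):

    Code (CSharp):
    #include "UnityCG.cginc"

    sampler2D _MainTex;

    // "application data": per-vertex mesh data fed to the vertex shader.
    struct appdata
    {
        float4 vertex : POSITION;   // object space position
        float2 uv     : TEXCOORD0;  // mesh UVs
    };

    // "v2f": the vertex shader's output, which gets interpolated
    // across each triangle before the fragment shader runs.
    struct v2f
    {
        float4 pos : SV_POSITION;   // clip space position (always required)
        float2 uv  : TEXCOORD0;     // interpolated per pixel
    };

    // Vertex shader: runs once per vertex.
    v2f vert (appdata v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex); // object space -> clip space
        o.uv = v.uv;
        return o;
    }

    // Fragment shader: runs once per covered pixel and writes its
    // result to the current render target (e.g. the frame buffer).
    fixed4 frag (v2f i) : SV_Target
    {
        return tex2D(_MainTex, i.uv);
    }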

    I would suggest you go look for some videos on "shaders 101" or "how do gpus work". It's a complicated process that can take some time to wrap your head around, and there's always some new trick that seems to break the rules, but the fragment shader can never write to vertex data directly.
     
    Last edited: Jul 12, 2017
  3. Deleted User (Guest)

    Thank you for this post, I bookmarked it.

    This is the 2nd time I have studied shader programming. The first time I had positive results, but mostly used Unity surface shaders. I think their existence threw me off because I missed learning what they actually are (a code generation layer in Unity). Writing your own lit vertex shaders with Unity's help looks pretty fun, because Unity provides most of the data you need to do the lighting.

    So many of those resources just turn up people who either dump includes without explaining anything, or just write surface shaders. Surface shaders are a wonderful tool, but I'm not satisfied by the level of abstraction when I just declare an entire lighting methodology, since I have seen that I can and do understand the basic approach of lighting based on a light direction (or light + pos) versus a normal direction.

    I'm able to understand what the coordinate space conversions are doing and why they need to be done, even if I couldn't do the matrix math myself, so that's a position I can step backwards from if I ever want to. Surface shaders just draw the line too far away from what's actually happening for my taste.

    Even books kinda do a poor job... My problem is I bought Unity-specific ones, and they're more like cookbooks, and I have the OpenGL bible thing but honestly I just can't handle it. So I'm probably gonna grab another book, but I'm starting to get the picture. I knew the fragment was pushing out a pixel; I just wasn't sure if it was passing some modified vertex data out and the engine then did stuff with it, or if what I was looking at was literally the fragment itself. There are stupid questions :p

    So it's been a solid step forward on the 2nd shader study pass. Core knowledge, thank you.

    So to clarify, the fragment function can output whatever you want to some texture, but in any case it only outputs one 4-number numeric variable (which is often a color).
     
  4. bgolus (Joined: Dec 7, 2012, Posts: 12,343)

    If you want to be specific, it can output up to 8 numeric vectors with up to 4 components each, and optionally pixel depth, and a coverage mask ... and a few more things if you're using OpenGL 4.1+ or DirectX 11.1+...

    ... but usually just a single color as a fixed4.
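
    As a rough sketch of what the multi-output case looks like, the fragment function can return a struct whose members each carry their own output semantic (the struct and member names here are arbitrary, and this only does something useful when multiple render targets are actually bound):

    Code (CSharp):
    // Sketch: writing to multiple render targets (MRT) plus depth.
    struct FragOutput
    {
        fixed4 color0 : SV_Target0; // first render target
        fixed4 color1 : SV_Target1; // second render target
        float  depth  : SV_Depth;   // optional: overrides the pixel's depth
    };

    FragOutput frag (v2f i)
    {
        FragOutput o;
        o.color0 = fixed4(1, 0, 0, 1); // arbitrary example values
        o.color1 = fixed4(0, 1, 0, 1);
        o.depth  = 0.5;
        return o;
    }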

    Also, as an extra technical tidbit, the fixed value type is defined as a value that has a range of at least -2.0 to 2.0 and a precision of at least 1/255 at those extremes. On mobile platforms fixed really will be limited to that range... but on desktop it might be a half (16 bit float, ~ +/- 65503) or, more likely, just a regular float (32 bit float, ~ +/- 3.4*10^38), which is why all of Unity's shaders output using fixed4 even though they still work with HDR.
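
    Just to show the three precision qualifiers side by side (how much precision you actually get depends on the platform, as described above):

    Code (CSharp):
    fixed4 tint     = fixed4(1, 0.5, 0.25, 1);        // lowest precision: at least a -2..2 range
    half4  offset   = half4(0, 1, 0, 0);              // 16 bit float where supported
    float4 worldPos = float4(10.5, 2.0, -300.0, 1.0); // full 32 bit float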
     
  5. Deleted User (Guest)

    I'm glad you mentioned the fixed4 type stuff, I glossed over it and didn't look it up.

    I think I'm gonna stay away from mobile for now though, it's not as fun of a sandbox to be in.