How do I render scalars to a Floating Point RenderTexture?

Discussion in 'Scripting' started by CHPedersen, Oct 13, 2013.

  1. CHPedersen

    CHPedersen

    Joined:
    Mar 2, 2011
    Posts:
    63
    Hi all,

    I'm interested in transferring floats to a shader by rendering them into a RenderTexture using the RFloat RenderTextureFormat. But I find the documentation for this somewhat sparse, and it's unclear to me how I'm supposed to use this format, both on the CPU side and GPU side.

    My goal is to give an object on-screen a color that corresponds to the float I need, and then render this object using a camera that writes to the RFloat RenderTexture. Does anyone know how the RFloat format works when a camera is rendering into it?

    Suppose I simply build a color out of the four 8-bit components of a 32-bit float by bit-shifting each color channel out of the float, i.e. Color.r becomes the first 8 bits of the float, Color.g the next 8 bits, and so on. If I then use the camera to render an object with that color into the RFloat RenderTexture, will the texture end up storing the original float? And if so, how do I get it back out of the texture in Cg?
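    For comparison, UnityCG.cginc does seem to ship EncodeFloatRGBA and DecodeFloatRGBA helpers for this kind of channel packing, though apparently only for a float in the [0, 1) range. A rough shader-side sketch (names taken from UnityCG.cginc, sampler name is just an assumption):

    #include "UnityCG.cginc"

    // Packing side, e.g. in a fragment shader writing to an ARGB32 target:
    float4 packed = EncodeFloatRGBA(value); // value assumed to be in [0, 1)

    // Unpacking side, after sampling that texture elsewhere:
    float4 enc = tex2D(_MainTex, uv_coords);
    float decoded = DecodeFloatRGBA(enc);

    But that is packing across a four-channel 8-bit texture, which is not obviously the same thing as rendering into a single-channel RFloat target.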

    Normally, when you read from a texture, you store the result in a float4, e.g. something like this:

    float4 color = tex2D(sampler, uv_coords);

    But in the shader, I don't want a 4-component color vector when reading from this texture; I want just a float, since it's a floating point texture after all. Is there a version of tex2D that returns just a float, or am I supposed to repack the float4 into the original float? How does this work?
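    From what I can tell, tex2D always returns a float4 regardless of the texture format, so for a single-channel format like RFloat I would guess the value simply sits in the red channel and you swizzle it out. Just a sketch, with an assumed sampler name:

    sampler2D _FloatTex; // assumed to be bound to the RFloat RenderTexture
    float value = tex2D(_FloatTex, uv_coords).r; // single-channel formats fill the remaining channels with defaults

    Can anyone confirm whether that .r swizzle is the intended way to read it back?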