Does anybody know how to get all the light positions and colors in the scene from deferred lighting? Especially for point and spot lights.
You can get the color through LightColor. For positions, maybe you can get them from LightMatrix0 (the world-to-light matrix), but I'm sure you don't want to go through that hassle. Or you can approximately calculate them via the light direction. An easier way is to write a script that passes the positions of selected lights to the shader via a render texture.
Make a 1x8 texture for the lights in the scene (max 8 in this case). For each light: if it is on screen, set the corresponding pixel (1 to 8) to the light's normalized position relative to the camera, with alpha as an approximation of its eye-space distance; set unused pixels to black. Then in the shader, for each pixel that is not black: lightPos = cameraPos + pixel.rgb * pixel.a.
Because in the shader you will add this value to the camera position, and the texture won't accept a value higher than 1, you will have to come up with a value that is relative to the camera position and between 0 and 1 on each channel. The good thing is, you can use the alpha channel to approximate between the farthest and closest light positions and interpolate the other lights in between.
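To make the packing concrete, here is a minimal Python model of the scheme described above (outside Unity, purely to check the arithmetic). One assumption is added: the unit direction components are remapped from [-1, 1] to [0, 1], a step the sketch above leaves implicit, since an 8-bit texture channel cannot store negative values.

```python
import math

def encode(camera_pos, light_pos, far):
    # Relative position, split into a unit direction (rgb) and a distance (alpha).
    rel = [l - c for l, c in zip(light_pos, camera_pos)]
    dist = math.sqrt(sum(v * v for v in rel))
    direction = [v / dist for v in rel]
    # Remap direction from [-1, 1] to [0, 1] so it fits a texture channel
    # (my addition; the original sketch leaves this step implicit).
    rgb = [d * 0.5 + 0.5 for d in direction]
    alpha = dist / far  # eye-space distance scaled into 0..1 by the far plane
    return rgb + [alpha]

def decode(camera_pos, pixel, far):
    # lightPos = cameraPos + direction * distance, undoing the remapping.
    direction = [c * 2.0 - 1.0 for c in pixel[:3]]
    dist = pixel[3] * far
    return [c + d * dist for c, d in zip(camera_pos, direction)]
```

With full float precision the round trip is essentially exact; the real loss comes from quantizing those channels to 8 bits when they go into the texture, which is the precision problem discussed below.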
On second thought, assuming you won't have hundreds of lights, why don't you just pass an array of vector positions to your shader if the maximum light count is less than, say, 4-5?
Well, I was planning to do it in deferred since there's a high chance of having more than 8 lights, even though that's kind of overkill. But yeah, that was my first plan.
Reviving an old thread. Thanks for the suggestion aubergine; however, scaling the relative position by the far plane to get 0..1 values into the texture, then re-scaling in the shader, results in lost/inaccurate light positions. To bypass this I tried using Aras' EncodeFloatRGBA and DecodeFloatRGBA (http://aras-p.info/blog/2009/07/30/encoding-floats-to-rgba-the-final/) on each individual component of the position. This of course expands the texture size four-fold, but as long as the results are accurate, that is all I am interested in. It may work well in CG, but converted to C# it struggles. Oddly, the X and Z values are accurate, while Y is not.

Component for texture generation:

Code (CSharp):

using UnityEngine;
using System.Collections;

[RequireComponent(typeof(Camera))]
public class DeferredLightPositionsComponent : MonoBehaviour
{
    private Texture2D light_position_texture = null;
    private int light_length;
    private Camera camera;

    void Awake()
    {
        camera = GetComponent<Camera>();
    }

    private static Color EncodeFloatRGBA(float v)
    {
        v = Mathf.Min(v, 0.99999f);
        Vector4 enc = new Vector4(1.0f, 255.0f, 65025.0f, 16581375.0f) * v;
        enc.x -= Mathf.Floor(enc.x);
        enc.y -= Mathf.Floor(enc.y);
        enc.z -= Mathf.Floor(enc.z);
        enc.w -= Mathf.Floor(enc.w);
        enc.x -= enc.y * (1.0f / 255.0f);
        enc.y -= enc.z * (1.0f / 255.0f);
        enc.z -= enc.w * (1.0f / 255.0f);
        return new Color(enc.x, enc.y, enc.z, enc.w);
    }

    private static float DecodeFloatRGBA(Color rgba)
    {
        Vector4 enc = new Vector4(rgba.r, rgba.g, rgba.b, rgba.a);
        return Vector4.Dot(enc, new Vector4(1.0f, 1.0f / 255.0f, 1.0f / 65025.0f, 1.0f / 16581375.0f));
    }

    void Update()
    {
        Light[] lights = FindObjectsOfType(typeof(Light)) as Light[];

        if (light_position_texture == null)
        {
            light_length = lights.Length;
            light_position_texture = new Texture2D(light_length * 4 + 1, 1);
        }
        else if (light_length != lights.Length)
        {
            light_length = lights.Length;
            light_position_texture.Resize(light_length * 4 + 1, 1);
        }

        // Pixel 0 stores the texture width (scaled by the far plane) so the shader can compute UVs.
        light_position_texture.SetPixel(0, 0, EncodeFloatRGBA((float)(light_length * 4 + 1) / camera.farClipPlane));

        for (int i = 0; i < lights.Length; i++)
        {
            Vector3 light_position = lights[i].transform.position - camera.transform.position;
            float distance = light_position.magnitude / camera.farClipPlane;
            light_position.Normalize();
            light_position_texture.SetPixel(i * 4 + 1, 0, EncodeFloatRGBA(light_position.x));
            light_position_texture.SetPixel(i * 4 + 2, 0, EncodeFloatRGBA(light_position.y));
            light_position_texture.SetPixel(i * 4 + 3, 0, EncodeFloatRGBA(light_position.z));
            light_position_texture.SetPixel(i * 4 + 4, 0, EncodeFloatRGBA(distance));
            /* NOTE: Scaling to 0..1 results in loss of precision; this DOES NOT work with any amount of accuracy!
             * light_position_texture.SetPixel(i + 1, 0, new Color(light_position.x, light_position.y, light_position.z, distance));
             */
        }

        light_position_texture.Apply();
        Shader.SetGlobalTexture("_LightPositionTexture", light_position_texture);
    }
}

Retrieving the info from the texture in CG:

Code (CSharp):

sampler2D _LightPositionTexture;

float LightPositionsSize()
{
    return DecodeFloatRGBA(tex2D(_LightPositionTexture, float2(0.0, 0.0))) * _ProjectionParams.z;
}

float3 LightPositionAt(int i)
{
    float size = LightPositionsSize();
    float xd = DecodeFloatRGBA(tex2D(_LightPositionTexture, float2((float)(i * 4 + 1) / size, 0.0)));
    float yd = DecodeFloatRGBA(tex2D(_LightPositionTexture, float2((float)(i * 4 + 2) / size, 0.0)));
    float zd = DecodeFloatRGBA(tex2D(_LightPositionTexture, float2((float)(i * 4 + 3) / size, 0.0)));
    float wd = DecodeFloatRGBA(tex2D(_LightPositionTexture, float2((float)(i * 4 + 4) / size, 0.0)));
    return _WorldSpaceCameraPos + float3(xd, yd, zd) * wd * _ProjectionParams.z;
}

If someone could get this to work, that would be great.
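For what it's worth, one constraint of Aras' scheme is easy to check outside Unity: it only round-trips values in [0, 1). A negative input gets wrapped by the floor() steps and decodes to something entirely different, and the components of a normalized direction can be negative. A small Python port of the two functions makes this visible (whether that is what's happening to Y in this particular scene is only speculation):

```python
import math

def encode_float_rgba(v):
    # Aras' scheme: spread a float in [0, 1) across four 8-bit-friendly channels.
    v = min(v, 0.99999)
    enc = [v * s for s in (1.0, 255.0, 65025.0, 16581375.0)]
    enc = [e - math.floor(e) for e in enc]  # keep only the fractional parts
    enc[0] -= enc[1] * (1.0 / 255.0)
    enc[1] -= enc[2] * (1.0 / 255.0)
    enc[2] -= enc[3] * (1.0 / 255.0)
    return enc

def decode_float_rgba(rgba):
    scale = (1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0)
    return sum(c * s for c, s in zip(rgba, scale))

# A value in [0, 1) survives the round trip; a negative one does not,
# because floor() wraps negatives into [0, 1) before packing.
print(decode_float_rgba(encode_float_rgba(0.37)))
print(decode_float_rgba(encode_float_rgba(-0.3)))
```

If the Y component of the normalized direction happens to be negative while X and Z are positive, this would produce exactly the observed symptom.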
Why not just use an RGBAHalf texture? Or if you're doing this for desktop you can use structured buffers.
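To get a feel for whether half precision is enough: each channel of an RGBAHalf texture is an IEEE 754 binary16 float, which keeps the sign and magnitude directly (no 0..1 packing needed) at roughly 3 decimal digits of precision. A quick stand-in check using Python's struct half-float format:

```python
import struct

def to_half_and_back(v):
    # Round-trip a float through IEEE 754 half precision (binary16),
    # the same format an RGBAHalf texture channel stores.
    return struct.unpack('<e', struct.pack('<e', v))[0]

for v in (123.456, -78.9, 0.001, 999.9):
    h = to_half_and_back(v)
    # Sign and magnitude survive; relative error stays under about 0.05%
    # (11-bit significand), which is plenty for light positions.
    print(f"{v:10.4f} -> {h:10.4f}  (rel. err {abs(h - v) / abs(v):.2e})")
```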
Compute buffers would make much more sense for this problem. I suppose that's an indication that I need a newer card than my rusty Radeon 5770. Thanks for the suggestions.