First off, I'll admit that I'm quite new to Cg programs. I'm working on a screen-space ambient occlusion method similar to the one yoggy did a while ago. I currently have it working entirely with scripts: a script loops through the pixels manually and performs a single operation per pixel:

Code (csharp):
compositePixels[i] = rgbPixels[i] - (depthPixels[i] - depthBlurredPixels[i]);

Can I do the same thing in a Cg program? If so, how?

Speed:
Scene with no extra rendering: ~50 fps
Scene rendering the blurred and normal depth textures: 35-40 fps
Scene doing the pixel operations in a script: 10-15 fps

I'm hoping this would be a lot faster on the GPU. Am I right? Thanks!
When writing shaders, it's best not to think in terms of individual pixels; think in terms of textures, or arrays of pixels. Assuming you have depth in one (render)texture, blurred depth in another (render)texture, and the RGB pixels in a third (render)texture, all you have to do is take those three and draw a quad that covers the full screen. In the pixel shader (a program that runs for each and every pixel drawn), read the three textures and compute the result. The hardest part, I think, is computing the blurred depth: many different algorithms exist for blurring textures on the GPU, and iterative approaches are most often used when a large blur is needed.
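To make that concrete, here is a rough sketch of what the full-screen pass could look like as a Unity image-effect shader. The shader name, property names, and texture assignments are all illustrative (they aren't from this thread), and the fragment program just reproduces the script's per-pixel operation:

```
// Hypothetical shader sketch -- names and setup are assumptions, not from the thread.
Shader "Hidden/DepthDifferenceComposite" {
    Properties {
        _MainTex ("RGB", 2D) = "white" {}
        _DepthTex ("Depth", 2D) = "white" {}
        _BlurredDepthTex ("Blurred Depth", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _DepthTex;
            sampler2D _BlurredDepthTex;

            fixed4 frag (v2f_img i) : COLOR {
                fixed4 rgb = tex2D(_MainTex, i.uv);
                fixed depth = tex2D(_DepthTex, i.uv).r;
                fixed blurred = tex2D(_BlurredDepthTex, i.uv).r;
                // Same operation as the script loop:
                // composite = rgb - (depth - blurredDepth)
                return rgb - (depth - blurred);
            }
            ENDCG
        }
    }
}
```

On the script side, the full-screen quad is typically handled for you by blitting with a material that uses this shader, e.g. setting the depth render textures on the material and then calling Graphics.Blit(source, destination, material) from OnRenderImage. The GPU then runs the fragment program once per screen pixel, which is exactly the loop the script was doing, just in parallel.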
I got it working; now it's just a matter of doing all the setup from a single script to simplify things. Thanks!