Hi, is it possible to blend between what camera1 sees and what camera2 sees? In other words: fade out what camera1, which was the main camera, sees while fading in what camera2 sees.
You can do a render-to-texture of one camera onto a partially transparent plane, and have the other camera look at the plane. (Unity Pro only.) I don't know of any other ways.
Render one camera into a RenderTexture that is the size of the screen. Then blend it over another camera.
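The blend described above could be sketched roughly like this, assuming Unity Pro's RenderTexture support: render the second camera into a screen-sized RenderTexture and draw it over the first camera's output with increasing alpha. The class and field names here are illustrative, not from the thread.

```csharp
using UnityEngine;

// Hypothetical sketch: fade from the main camera to otherCam by rendering
// otherCam into a screen-sized RenderTexture and drawing that texture over
// the screen with increasing alpha.
public class CameraCrossfade : MonoBehaviour
{
    public Camera otherCam;          // the camera to fade in
    public float fadeDuration = 1f;  // seconds for the full crossfade

    RenderTexture rt;
    float alpha;

    void Start()
    {
        rt = new RenderTexture(Screen.width, Screen.height, 16);
        otherCam.targetTexture = rt; // otherCam now renders into rt, not the screen
    }

    void Update()
    {
        alpha = Mathf.Clamp01(alpha + Time.deltaTime / fadeDuration);
    }

    void OnGUI()
    {
        // Draw otherCam's image over the main camera's image, faded by alpha.
        GUI.color = new Color(1f, 1f, 1f, alpha);
        GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), rt);
    }
}
```

Once alpha reaches 1 you could swap the cameras (clear otherCam.targetTexture, disable the old main camera) so the fully faded-in camera renders directly to the screen again.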
wadamw: They could be in motion.

Aras, Daniel: That's basically what I tried. I would need a 2048x2048 RenderTexture, and that's not the only RenderTexture I need to create all the wanted effects. So I would end up with at least two 2048x2048 RenderTextures, or even three if I want to render the blending to another mask plane (with e.g. holes)... Two RenderTextures already push the framerate down to around 15 per second, since I also need twelve 1024x1024 textures in the scene at the same time. I guess I'll just fade some objects in/out which should/shouldn't be visible in the scene... It would be cool if the camera had an optional opacity property to make it possible to blend between different cameras in a future version.

What's a good workflow to capture only one frame of the camera view (like a screenshot) and create a texture of it at runtime?
Are you running at this resolution (that would be a strange resolution...)? You can create RenderTextures that are not constrained to power-of-two sizes from scripts (see RenderTexture.isPowerOfTwo).

OK, but to blend the result of one camera over another camera, what would be needed?
* Render the camera into some temporary memory buffer.
* Blend that memory buffer over the existing image.
Which is exactly what using a RenderTexture does, right?

Render into a render texture for one frame, then use that as the texture. In Unity 2.0 there will be functionality to get pixels from the screen into a texture.
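The "render for one frame, then use that as the texture" idea above could look something like this sketch, assuming a Pro license; `cam` and `plane` are placeholder references you would assign yourself:

```csharp
using UnityEngine;

// Hypothetical one-shot capture: render the camera into a RenderTexture once,
// then detach it so the texture stops updating and behaves like a screenshot.
public class SnapshotToPlane : MonoBehaviour
{
    public Camera cam;       // camera to capture
    public GameObject plane; // plane that should display the snapshot

    public void TakeSnapshot()
    {
        RenderTexture snapshot = new RenderTexture(Screen.width, Screen.height, 16);
        cam.targetTexture = snapshot;
        cam.Render();              // render exactly one frame into the texture
        cam.targetTexture = null;  // camera resumes rendering to the screen

        // The texture now holds the captured frame and won't change again.
        plane.GetComponent<Renderer>().material.mainTexture = snapshot;
    }
}
```

Because the camera is detached right after Camera.Render(), the RenderTexture keeps that single frame until something else renders into it.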
Thanks! Great, I overlooked that. I'll probably do that then. That's true, but what about interactivity? If I render something to a texture, it doesn't have any complex interactivity by default, unlike when I use Normalized View Port Rect to e.g. create a split screen. I was thinking about creating a script which restores the interactivity for rendered textures. I guess camera functions like ViewportPointToRay and WorldToViewportPoint etc. are a good starting point...(?)

Good news. Though I feel a little bit lost with the RenderTexture functions:
- GetTemporary: Does it snapshot what the camera sees for one frame? Or for as long as I don't call ReleaseTemporary?
- ReleaseTemporary: Is it to destroy the temporarily created RenderTexture, or just to stop GetTemporary?
How would I render to texture for only one frame?
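The ViewportPointToRay idea mentioned above could be sketched like this: raycast from the visible camera onto the plane showing the RenderTexture, convert the hit's texture coordinate into a viewport point, and cast a second ray from the camera that produced the texture. This is a speculative sketch; the field names are placeholders, and the UV-to-viewport mapping assumes the plane's UVs run 0..1 in the same orientation as the rendered image (you may need to flip an axis).

```csharp
using UnityEngine;

// Hypothetical sketch of restoring "clickability" through a rendered texture.
public class RenderTextureClickRelay : MonoBehaviour
{
    public Camera mainCam;        // camera the player actually looks through
    public Camera textureCam;     // camera that renders into the texture
    public Collider planeCollider; // collider of the plane showing the texture

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray screenRay = mainCam.ScreenPointToRay(Input.mousePosition);
        RaycastHit planeHit;
        if (Physics.Raycast(screenRay, out planeHit) && planeHit.collider == planeCollider)
        {
            // Treat the hit UV as a viewport coordinate of the texture camera.
            Ray innerRay = textureCam.ViewportPointToRay(planeHit.textureCoord);
            RaycastHit innerHit;
            if (Physics.Raycast(innerRay, out innerHit))
            {
                // Forward the click to the object seen inside the texture.
                innerHit.collider.SendMessage("OnMouseDown",
                    SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```

Note that RaycastHit.textureCoord requires the plane to have a MeshCollider; a primitive collider won't return texture coordinates.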
I'm not sure what you mean by "interactivity" here... so, uhm... GetTemporary creates a new render texture and gives it back to you. That's all it does. ReleaseTemporary says "I don't need this render texture that you gave me earlier using GetTemporary anymore".
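So the two functions are just a borrow/return pair around a pool of reusable textures; a minimal sketch of that pattern, with `cam` as a placeholder camera reference:

```csharp
using UnityEngine;

// Sketch of the temporary-buffer pattern: borrow a RenderTexture from
// Unity's pool, render into it, use it, then hand it back.
public class TemporaryCapture : MonoBehaviour
{
    public Camera cam;

    void CaptureOnce()
    {
        RenderTexture temp = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
        cam.targetTexture = temp;
        cam.Render();             // one frame goes into the borrowed texture
        cam.targetTexture = null;

        // ...use temp here while you still hold it...

        RenderTexture.ReleaseTemporary(temp); // return it to the pool;
                                              // don't touch temp after this
    }
}
```

For a snapshot that must persist (e.g. stay on a plane indefinitely), a regular RenderTexture you create and keep yourself is the safer fit, since a released temporary texture may be reused and overwritten by the pool.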
By "interactivity" I basically meant clickability. And by clickability I mean that objects with a collider are clickable with the mouse cursor and call e.g. the OnMouseDown() function. RenderTextures which show objects that are originally clickable don't have this detailed clickability, simply because the result is only a texture (one clickable object instead of a bunch of them) and not what the camera shows directly on the screen as original objects (not rendered to a texture first). Or did I miss something?

OK. So Create() and Destroy() basically do the same thing, but those functions are more meant for longer (than temporary) usage? And how would I tell the RenderTexture to create only one "shot" of what the camera sees and attach it to a plane as a texture (which doesn't change again)? Would that be a good case for using GetTemporary and ReleaseTemporary?