I don't know yet if RenderTextures work on iPhone, but they seem to work on iPad (and Android) since U3.
So I found a way to create a texture from the superposition of tons of other textures (special thanks to Dreamora for suggesting GUITextures/ReadPixels).
It can be super useful for generating clothing textures, for example.
Update: I finally posted the script further down the page: Link
What you need to create (a rough code equivalent follows the list):
- a camera prefab that culls everything except one dedicated layer, so that nothing rendered outside of this script shows up. Attach only a GUILayer to it, and set its clear flags to "Don't Clear".
- a GUITexture prefab. Put it on the same culling layer as the camera above. Set its transform scale to zero, as explained in the Unity docs.
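
For reference, here is roughly what those two prefabs amount to if you build them from code instead. This is only a sketch: the layer number (31) and the object names are my own arbitrary choices, not part of the original setup.

```csharp
// Rough code equivalent of the two prefabs described above.
// Layer 31 is an arbitrary pick for a layer nothing else renders on.
GameObject camGO = new GameObject("CompositeCam");
Camera compositeCam = camGO.AddComponent<Camera>();
camGO.AddComponent<GUILayer>();                      // the only extra component
compositeCam.clearFlags = CameraClearFlags.Nothing;  // the "don't clear" setting
compositeCam.cullingMask = 1 << 31;                  // render only layer 31

GameObject layerGO = new GameObject("CompositeLayer");
GUITexture layerTex = layerGO.AddComponent<GUITexture>();
layerGO.layer = 31;                                  // same layer as the camera
layerGO.transform.position = Vector3.zero;
layerGO.transform.localScale = Vector3.zero;         // zero scale, per the docs
```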
Now, in code, whenever you want to create your texture (a sketch of these steps follows the list):
- create a variable containing the targeted dimensions of your texture
- create a new RenderTexture, using the dimensions variable above
- instantiate your camera & GUITexture prefabs at position zero
- set the camera's targetTexture to the freshly created RenderTexture
- initialize your RenderTexture with the _RenderTextureVar.Create() function (see the docs)
- Now, each time you want to add a texture to your composition, just:
a) set your GUITexture.texture to any Texture2D, whether stored in Resources or taken from anywhere else (in-game, the web, etc.)
b) call camera.Render() on your camera instance
(or yield a WaitForEndOfFrame() instead, but that's obviously slower)
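
Put together, the creation and composition steps look something like this. It's a minimal sketch, assuming camPrefab, layerPrefab, and sourceTextures are fields you've assigned elsewhere, and assuming the zero-scale GUITexture gets its pixel size from pixelInset (my addition, not something from the steps above):

```csharp
// Minimal sketch of the steps above (camPrefab, layerPrefab and
// sourceTextures are assumed to be fields assigned elsewhere).
int dim = 512;                                       // targeted texture dimensions
RenderTexture rt = new RenderTexture(dim, dim, 0);

Camera cam = (Camera)Instantiate(camPrefab, Vector3.zero, Quaternion.identity);
GUITexture layer = (GUITexture)Instantiate(layerPrefab, Vector3.zero, Quaternion.identity);
layer.pixelInset = new Rect(0f, 0f, dim, dim);       // assumed: pixel size via pixelInset

cam.targetTexture = rt;                              // the camera now draws into rt
rt.Create();                                         // initialize the RenderTexture

foreach (Texture2D source in sourceTextures)
{
    layer.texture = source;                          // a) next layer to composite
    cam.Render();                                    // b) stamp it into rt
}
```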
When you've added all the textures to your composition (a readback sketch follows the list):
- set RenderTexture.active = _yourRecentlyCreatedRTexture (the placement matters: it doesn't work if done before the GUITexture.texture assignment)
- create your final Texture2D container (formatted as RGB24/ARGB32, as explained in the docs)
- perform a _finalTextureContainer.ReadPixels() (with your texture dimensions as the Rect parameter; see the docs)
- set RenderTexture.active back to null
- call _finalTextureContainer.Apply() to upload the pixels and create mipmaps
- destroy your camera and GUITexture instances, and your RenderTexture
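
Continuing the same sketch (rt, cam, layer, and dim from above), the readback and cleanup would be:

```csharp
// Readback and cleanup, continuing the sketch above.
RenderTexture.active = rt;                           // must come after the texture assigns
Texture2D final = new Texture2D(dim, dim, TextureFormat.ARGB32, true);
final.ReadPixels(new Rect(0f, 0f, dim, dim), 0, 0);  // copy the composited result back
RenderTexture.active = null;
final.Apply();                                       // upload and build mipmaps

Destroy(cam.gameObject);                             // tear everything down
Destroy(layer.gameObject);
rt.Release();
Destroy(rt);
```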
Advantages over GetPixels methods:
- It's 10+ times faster. With GetPixels/SetPixels/alpha-lerping operations, compositing 11 512x512 alpha textures took 0.30 seconds on my home PC; with this method it took 0.01 to 0.02 seconds. On mobile devices, expect to multiply those times by even higher numbers.
- Operations happen GPU-side, which prevents any interference with CPU work, like animation loading.
- Biggest change for me: you don't need huge ARGB textures anymore. Yes, you can keep your compression settings (except for the computed final texture, of course). Your source textures also no longer need the isReadable state. Device memory says "oh god thank you".
- You can apply prerendered visual effects to your final texture through RenderTexture manipulation: tinting, photo grain, decals, etc. Just add a prerendered effect image on top of your texture layers.
- You can easily resize your final texture by changing the dimensions variable; no Resize operation needed. This can be useful for adjusting the texture size to the device's memory, for example.
Hints & Tips :
- There seems to be a bug when you call camera.Render() outside of Unity's basic loops (Start, Update, etc.): an assertion saying something like "screenViewCoords < 0 || screenViewCoords < 0" (sorry, I don't recall it exactly).
So your best bet is to wrap the whole operation in a MonoBehaviour script that you attach to an object and that self-destroys at the end of the operation (see the skeleton after this list).
- Positioning your layered bitmaps by script can be hazardous; I suggest you bake their placement directly into the source textures, authoring all of them at the same fixed dimensions, with zero-alpha pixels where there is no color. Then you only have to place every layer at position zero.
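
To illustrate the first tip, here is a skeleton of that self-destroying MonoBehaviour. The class name and fields are hypothetical placeholders; the body of Start() is where the sketches above would go:

```csharp
using UnityEngine;

// Skeleton of the self-destroying driver: running everything from Start()
// keeps camera.Render() inside Unity's basic loop and avoids the assertion.
public class TextureCompositor : MonoBehaviour       // hypothetical name
{
    public Camera camPrefab;                         // the camera prefab above
    public GUITexture layerPrefab;                   // the GUITexture prefab above
    public Texture2D[] sourceTextures;               // the layers to composite

    void Start()
    {
        // ...perform the whole composition sketched earlier, then hand the
        // final Texture2D off to whatever needs it (a material, a manager)...
        Destroy(gameObject);                         // self-destroy when done
    }
}
```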