Hi! There seems to be a new feature for generating texture atlases, but I don't understand how to proceed with it. Where do I combine the meshes and assign the material with the packed texture? Can this method be used to dynamically improve batching? -- phantom
All PackTextures does is pack multiple textures into one large texture. It gives you back the new large texture and the locations of the input textures within it. This can be used for combining meshes or for some other things (e.g. the terrain engine uses it internally to pack the textures of detail objects). The mesh combining code you'd have to write yourself though, probably starting from the CombineChildren script in Standard Assets.
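For reference, a minimal sketch of what the call looks like (Unity 2.x script API; the class and `sourceTextures` field names here are just placeholders, not anything from this thread):

```csharp
using UnityEngine;

public class PackExample : MonoBehaviour
{
    public Texture2D[] sourceTextures; // textures to atlas, assigned in the inspector

    void Start()
    {
        Texture2D atlas = new Texture2D(1024, 1024);
        // Second argument is padding in pixels, third is the maximum atlas size.
        Rect[] rects = atlas.PackTextures(sourceTextures, 0, 1024);
        // rects[i] is the normalized (0..1) sub-rectangle where sourceTextures[i]
        // ended up inside the atlas; use it to remap each mesh's UVs.
    }
}
```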
Thanks Aras! I thought it worked like that, but how would I go about assigning this packed texture to the materials? Texture2D.PackTextures() just returns an array of Rects.
OK... Now I get it. Just make a new empty Texture2D and pack the other ones into it. How do I set the UV rects for the atlas materials? Would it be feasible to alter the meshes' UVs somehow? I'm trying to make a combining script that would also be able to reduce the number of materials. That would be a huge time saver, since our artist wouldn't need to hand-optimize for better batching.
The packing idea is that you'll end up with a single material (otherwise, if the meshes still use different materials, there's not much point in packing). So the process would be roughly like this: 1) find all meshes that you can pack (ones that use the same shader, won't move, and don't use texture tiling); 2) pack all their textures into a single one; 3) create a material that uses this single texture and the same shader the original meshes used; 4) create one big mesh that combines all the original meshes, similar to what the CombineChildren script does. Additionally, for each input mesh, modify the UVs so that they use the sub-rectangle of the new big texture. In the end you'll have one big mesh, one big texture, and one material for everything that is combined. I think Jon Czeck (aarku) has done something like this... Jon, want to share it on the wiki? :roll:
Thanks Aras! Working just fine. The hard part seems to be making a generic implementation that just works most of the time. Here's my simple test implementation for combining a child hierarchy to a single material:

Code (csharp):

using UnityEngine;
using System.Collections;

public class TexturePacker : MonoBehaviour
{
    public Texture2D packedTexture;

    // Use this for initialization
    void Start()
    {
        Material newMaterial = new Material(Shader.Find("Diffuse"));
        Component[] filters = GetComponentsInChildren(typeof(MeshFilter));
        Texture2D[] textures = new Texture2D[filters.Length];
        for (int i = 0; i < filters.Length; i++)
        {
            textures[i] = (Texture2D)filters[i].gameObject.renderer.material.mainTexture;
        }
        packedTexture = new Texture2D(1024, 1024);
        Rect[] uvs = packedTexture.PackTextures(textures, 0, 1024);
        newMaterial.mainTexture = packedTexture;
        Vector2[] uva, uvb;
        for (int j = 0; j < filters.Length; j++)
        {
            filters[j].gameObject.renderer.material = newMaterial;
            uva = ((MeshFilter)filters[j]).mesh.uv;
            uvb = new Vector2[uva.Length];
            for (int k = 0; k < uva.Length; k++)
            {
                uvb[k] = new Vector2((uva[k].x * uvs[j].width) + uvs[j].x,
                                     (uva[k].y * uvs[j].height) + uvs[j].y);
            }
            ((MeshFilter)filters[j]).mesh.uv = uvb;
        }
    }
}
Yeah, something like that. Except that your script just assigns the same texture to all objects, which will bring almost zero performance gains. If you want to actually combine the objects, you have to combine their meshes into a single mesh.
Of course, but this is only a test of the packed texture on meshes with recalculated UVs. Now I could just use the Combine Children script to get a single mesh. I just posted the script because maybe someone else is interested in it (maybe we could build one helluva mesh combiner for the wiki).
I added the combine children script and made it handle submeshes. Now it combines materials and packs textures on a per-shader basis. It doesn't take other material attributes into account. At the least, I'm going to add lightmap packing. I'm open to ideas on how to make this script better.

UPDATED: Now also packs lightmaps. Excludes materials with tiling or offsets. Lightmaps will pack only if a second UV set exists. Here it comes:

Code (csharp):

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;

public class TexturePacker : MonoBehaviour
{
    public bool generateTriangleStrips = true;

    private Dictionary<Shader, List<Material>> shaderToMaterial = new Dictionary<Shader, List<Material>>();
    private Dictionary<Shader, Material> generatedMaterials = new Dictionary<Shader, Material>();
    private Dictionary<Material, Rect> generatedUVs = new Dictionary<Material, Rect>();
    private Dictionary<Material, Rect> generatedUV2s = new Dictionary<Material, Rect>();

    // Use this for initialization
    void Start()
    {
        Component[] filters = GetComponentsInChildren(typeof(MeshFilter));

        // Find all unique shaders in the hierarchy.
        for (int i = 0; i < filters.Length; i++)
        {
            Renderer curRenderer = filters[i].renderer;
            if (curRenderer != null && curRenderer.enabled && curRenderer.material != null)
            {
                Material[] materials = curRenderer.sharedMaterials;
                if (materials != null)
                {
                    foreach (Material mat in materials)
                    {
                        // Only accept materials that either have a valid lightmap plus a
                        // second UV set or no lightmap at all, and use no tiling or offset.
                        if (((mat.HasProperty("_LightMap") && !(((MeshFilter)filters[i]).mesh.uv2.Length == 0) && mat.GetTexture("_LightMap") != null) || !mat.HasProperty("_LightMap"))
                            && mat.mainTextureScale == new Vector2(1.0f, 1.0f) && mat.mainTextureOffset == Vector2.zero)
                        {
                            if (mat.shader != null && mat.mainTexture != null)
                            {
                                if (shaderToMaterial.ContainsKey(mat.shader))
                                {
                                    shaderToMaterial[mat.shader].Add(mat);
                                }
                                else
                                {
                                    shaderToMaterial[mat.shader] = new List<Material>();
                                    shaderToMaterial[mat.shader].Add(mat);
                                }
                            }
                        }
                    }
                }
            }
        }

        // Pack textures on a per-shader basis and generate UV rect and material dictionaries.
        foreach (Shader key in shaderToMaterial.Keys)
        {
            Texture2D packedTexture = new Texture2D(1024, 1024);
            Texture2D[] texs = new Texture2D[shaderToMaterial[key].Count];
            generatedMaterials[key] = new Material(key);
            for (int i = 0; i < texs.Length; i++)
            {
                texs[i] = shaderToMaterial[key][i].mainTexture as Texture2D;
            }
            Rect[] uvs = packedTexture.PackTextures(texs, 0, 2048);
            generatedMaterials[key].CopyPropertiesFromMaterial(shaderToMaterial[key][0]);
            generatedMaterials[key].mainTexture = packedTexture;
            for (int i = 0; i < texs.Length; i++)
            {
                if (shaderToMaterial[key][i].HasProperty("_LightMap"))
                {
                    texs[i] = shaderToMaterial[key][i].GetTexture("_LightMap") as Texture2D;
                }
            }
            packedTexture = new Texture2D(1024, 1024);
            Rect[] uvs2 = packedTexture.PackTextures(texs, 0, 2048);
            if (generatedMaterials[key].HasProperty("_LightMap"))
            {
                generatedMaterials[key].SetTexture("_LightMap", packedTexture);
            }
            for (int i = 0; i < texs.Length; i++)
            {
                generatedUVs[shaderToMaterial[key][i]] = uvs[i];
                generatedUV2s[shaderToMaterial[key][i]] = uvs2[i];
            }
        }

        Vector2[] uv, uv2;

        // Calculate new UVs for all submeshes and assign the generated materials.
        for (int i = 0; i < filters.Length; i++)
        {
            int subMeshCount = ((MeshFilter)filters[i]).mesh.subMeshCount;
            Material[] mats = filters[i].gameObject.renderer.sharedMaterials;
            uv = ((MeshFilter)filters[i]).mesh.uv;
            uv2 = ((MeshFilter)filters[i]).mesh.uv2;
            for (int j = 0; j < subMeshCount; j++)
            {
                if (generatedUVs.ContainsKey(mats[j]))
                {
                    Rect uvs = generatedUVs[mats[j]];
                    Rect uvs2 = generatedUV2s[mats[j]];
                    int[] subMeshVertices = DeleteDuplicates(((MeshFilter)filters[i]).mesh.GetTriangles(j)) as int[];
                    mats[j] = generatedMaterials[filters[i].gameObject.renderer.sharedMaterials[j].shader];
                    foreach (int vert in subMeshVertices)
                    {
                        uv[vert] = new Vector2((uv[vert].x * uvs.width) + uvs.x,
                                               (uv[vert].y * uvs.height) + uvs.y);
                        if (uv2 != null && !(uv2.Length == 0))
                        {
                            uv2[vert] = new Vector2((uv2[vert].x * uvs2.width) + uvs2.x,
                                                    (uv2[vert].y * uvs2.height) + uvs2.y);
                        }
                    }
                }
            }
            filters[i].gameObject.renderer.sharedMaterials = mats;
            ((MeshFilter)filters[i]).mesh.uv = uv;
            if (uv2 != null && !(uv2.Length == 0))
            {
                ((MeshFilter)filters[i]).mesh.uv2 = uv2;
            }
        }

        // Combine meshes.
        CombineMeshes();
    }

    // Combine Children script, to be called after material and texture packing.
    private void CombineMeshes()
    {
        Component[] filters = GetComponentsInChildren(typeof(MeshFilter));
        Matrix4x4 myTransform = transform.worldToLocalMatrix;
        Hashtable materialToMesh = new Hashtable();
        for (int i = 0; i < filters.Length; i++)
        {
            MeshFilter filter = (MeshFilter)filters[i];
            Renderer curRenderer = filters[i].renderer;
            MeshCombineUtility.MeshInstance instance = new MeshCombineUtility.MeshInstance();
            instance.mesh = filter.sharedMesh;
            if (curRenderer != null && curRenderer.enabled && instance.mesh != null)
            {
                instance.transform = myTransform * filter.transform.localToWorldMatrix;
                Material[] materials = curRenderer.sharedMaterials;
                for (int m = 0; m < materials.Length; m++)
                {
                    instance.subMeshIndex = System.Math.Min(m, instance.mesh.subMeshCount - 1);
                    ArrayList objects = (ArrayList)materialToMesh[materials[m]];
                    if (objects != null)
                    {
                        objects.Add(instance);
                    }
                    else
                    {
                        objects = new ArrayList();
                        objects.Add(instance);
                        materialToMesh.Add(materials[m], objects);
                    }
                }
                curRenderer.enabled = false;
            }
        }
        foreach (DictionaryEntry de in materialToMesh)
        {
            ArrayList elements = (ArrayList)de.Value;
            MeshCombineUtility.MeshInstance[] instances = (MeshCombineUtility.MeshInstance[])elements.ToArray(typeof(MeshCombineUtility.MeshInstance));
            // We have a maximum of one material, so just attach the mesh to our own game object.
            if (materialToMesh.Count == 1)
            {
                // Make sure we have a mesh filter and renderer.
                if (GetComponent(typeof(MeshFilter)) == null)
                    gameObject.AddComponent(typeof(MeshFilter));
                if (!GetComponent("MeshRenderer"))
                    gameObject.AddComponent("MeshRenderer");
                MeshFilter filter = (MeshFilter)GetComponent(typeof(MeshFilter));
                filter.mesh = MeshCombineUtility.Combine(instances, generateTriangleStrips);
                renderer.material = (Material)de.Key;
                renderer.enabled = true;
            }
            // We have multiple materials to take care of: build one mesh / game object
            // for each material and parent it to this object.
            else
            {
                GameObject go = new GameObject("Combined mesh");
                go.transform.parent = transform;
                go.transform.localScale = Vector3.one;
                go.transform.localRotation = Quaternion.identity;
                go.transform.localPosition = Vector3.zero;
                go.AddComponent(typeof(MeshFilter));
                go.AddComponent("MeshRenderer");
                go.renderer.material = (Material)de.Key;
                MeshFilter filter = (MeshFilter)go.GetComponent(typeof(MeshFilter));
                filter.mesh = MeshCombineUtility.Combine(instances, generateTriangleStrips);
            }
        }
    }

    public static Array DeleteDuplicates(Array arr)
    {
        // This procedure works only with one-dimensional arrays.
        if (arr.Rank != 1)
            throw new ArgumentException("Multiple-dimension arrays are not supported");
        // We use a hashtable to track duplicates; make it large enough
        // to avoid memory re-allocations.
        Hashtable ht = new Hashtable(arr.Length * 2);
        // We will store unique elements in this ArrayList.
        ArrayList elements = new ArrayList();
        foreach (object Value in arr)
        {
            if (!ht.Contains(Value))
            {
                // We've found a non-duplicate; remember it for later.
                elements.Add(Value);
                ht.Add(Value, null);
            }
        }
        // Return an array of the same type as the original array.
        return elements.ToArray(arr.GetType().GetElementType());
    }
}
This is very cool. I can see how this may become very useful for me and many others. I haven't tested it yet, but I hope to soon. Very nice!
Very nice! What are the odds of having a version of this working for 1.6.2? Unfortunately the project I am working on fails (bug submitted several times) to import into Unity 2.0, and these features would be SUPER helpful. Regards, -- Clint
The odds of getting this working on 1.6.2 are very close to zero. My script uses generics, which is a .NET 2.0 feature, and it uses the Unity 2.0 API. I have been thinking about some new features though. I think it would be very useful to generate a color lookup texture for non-textured materials so that those could be combined too. I'll probably also add exclude lists on a per-object and per-shader basis so that you can hand-pick materials and objects you don't want to combine. Maybe layer-choosing functionality for generated meshes would be useful too.
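A minimal sketch of the color-texture idea (an assumed approach, not the final script; `ColorTextureUtil` is a made-up name): build a small solid-color texture from the material's color, then feed it into the packer like any other texture.

```csharp
using UnityEngine;

public static class ColorTextureUtil
{
    // Builds a small solid-color texture from a material's main color so
    // non-textured materials can take part in atlas packing.
    public static Texture2D MakeColorTexture(Material mat, int size)
    {
        Texture2D tex = new Texture2D(size, size);
        Color[] pixels = new Color[size * size];
        for (int i = 0; i < pixels.Length; i++)
            pixels[i] = mat.color; // fill every pixel with the material color
        tex.SetPixels(pixels);
        tex.Apply();
        return tex;
    }
}
```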
There's one big problem when packing textures: what if they won't fit? The script needs to group textures for packing in an optimal way. The big problem is that I have to guesstimate how good a job PackTextures() does. The best solution would be if Unity provided a method for packing into multiple atlases. Aras, or anyone at Unity Technologies, can you provide information on how PackTextures() does the actual packing, so that I can group textures in a way that achieves optimal packing?
If the textures don't fit, they are decreased in size until they fit. Currently, when packing fails, all textures are halved in each dimension and packing is attempted again. No texture is made smaller than 4 pixels though. If packing still ultimately fails, PackTextures returns a null rectangles array. The current packing algorithm is very similar to this one: http://www.blackpawn.com/texts/lightmaps/default.html and performs quite well in general. Basically, the largest textures will end up in the top-left area of the texture. By the way, your packing script seems to be a perfect candidate for adding to the Unify Wiki! http://www.unifycommunity.com/wiki/index.php?title=Main_Page
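To estimate how well a set of textures will fit, you can mimic that blackpawn scheme yourself. A sketch of the binary-tree packer the article describes (names and structure are my own, not Unity's internal implementation):

```csharp
using System;

// Each node owns a rectangle of the atlas. A leaf is either free or holds
// one image; an internal node has been split into two children.
public class AtlasNode
{
    public int X, Y, W, H;        // this node's rectangle inside the atlas
    public bool Used;             // true once an image occupies this leaf
    public AtlasNode ChildA, ChildB;

    public AtlasNode(int x, int y, int w, int h) { X = x; Y = y; W = w; H = h; }

    // Try to place a w*h image; returns the node it landed in, or null.
    public AtlasNode Insert(int w, int h)
    {
        if (ChildA != null)  // not a leaf: recurse into the children
            return ChildA.Insert(w, h) ?? ChildB.Insert(w, h);
        if (Used || w > W || h > H)
            return null;      // occupied or too small
        if (w == W && h == H) { Used = true; return this; }
        // Split the leftover space along the longer axis.
        if (W - w > H - h)
        {
            ChildA = new AtlasNode(X, Y, w, H);
            ChildB = new AtlasNode(X + w, Y, W - w, H);
        }
        else
        {
            ChildA = new AtlasNode(X, Y, W, h);
            ChildB = new AtlasNode(X, Y + h, W, H - h);
        }
        return ChildA.Insert(w, h);
    }
}
```

Inserting the largest images first gives the tight top-left packing Aras describes; when Insert returns null, you know that texture would have to go into another atlas (or be downscaled).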
I've added texture generation for non-textured objects (Generate Color Textures toggle) and exclusion of objects. A second UV set is now generated if the object is lightmapped without second UVs (Generate Lightmap UVs toggle). Anything with advanced shaders like normal maps is a no-go. The color for generated materials is taken from the material's color attribute. This one still needs testing, so I would really appreciate it if you report any bugs you find to me.
I'm afraid I don't quite understand the usefulness of the PackTextures function. It seems like, after you generate the textures, you then need to manually edit the UVs of every mesh in your game so they are lined up with the atlas. Wouldn't it be simpler to design the texture atlas first, in photoshop or whatever, and design your meshes based around that? Or is there some hypothetical script that would remap the UVs one by one for you, based on the rectangle array? I suspect this is a more robust solution than I think it is, and I'm just not understanding the description correctly...
The definition of PackTextures says: So you don't have to calculate UVs; they're already set for you. You can also do it manually in a 3rd-party tool, though. But it won't be as optimized because of the remaining useless white space.
@Aras: Does PackTextures() use the GetPixels() / SetPixels() functions, or does it use faster internal ones?
Building a script for it is not difficult. It links up with the SpriteManager quite effectively if you want dynamically created sprite sheets (as opposed to static ones). It allows you to pack a lot more visuals, as you don't have to worry about unused textures clogging VRAM, but you still keep your draw count low. The returned Rect array corresponds directly to the order of the images passed in. It is also in the same format as the SpriteManager's UVs and can be set directly. Otherwise you modify the material's parameters to match the returned Rect and then give everything the same shared texture. The big difference between PackTextures and Photoshop is that it is done at runtime. So the player can choose his own textures, or you can pick from any that you want. It also allows each object to retain its own image while the underlying scripts handle grouping similar objects and batching.
Sorry for the grave digging, but the topic was quite appropriate. Do you guys create the texture atlases dynamically and update the UVs for the meshes on game start (i.e. from a Start() script callback)? Static batching doesn't seem to play nice with such functionality, since it's performed before the scripts activate and seems to break when modifying UVs on the combined mesh instance. The batching functionality in general could use some documentation love! Thanks for any tips on how the normal workflow goes; I'm currently torn between static batching OR dynamic texture / material atlasing.
I'm currently making a 2D game for the iPad and was hoping to use a texture atlas to save on my memory footprint. Unity conserves textures best when they are square and a Power of 2 (128x128, 256x256, 512x512, etc). Having a bunch of odd-sized 2D elements, it would be best if I could put them all on the same big sheet. Unfortunately no one has done a tutorial on this, so I'm not even sure if this tool is what I am looking for. I would like to see someone explain this process in a video, a lot of the educated comments on here are lost on me.
Did you try the script from this thread? Basically it groups textures with the same material settings, builds an atlas of those textures, and reassigns the UVs on the corresponding models. There's not much more to it, unless you want the syntax explained.
Hey guys, I realize this is an old post but I can't seem to find a clarification anywhere. I have a character mesh with different body parts (head, body, arms, legs) and they are each in an asset bundle, using the same process as in the CharacterCustomization example. At runtime I combine all the meshes into one (with separate submeshes for each body part) and one material per body part (again, just like the CharacterCustomization example). However, I get the feeling that having all these materials for one character is a waste (they all use the exact same shader), so I would like to optimize the process by packing all the textures into a texture atlas and using only one material. I've gone through all the steps outlined above, including adjusting the UVs, but it's not perfect; for some reason some of the body parts have seams, so something in the UV process is off. The following screenshot is what appears, and as you can see it has seams. What am I doing wrong? Many thanks in advance, El Diablo
Dude, some of us are watching threads and get emails with every reply. Enough with the bumps. A common reason for seams is that you haven't left any padding space in your models' UV mapping, or you haven't left any padding between the images in your atlas compositing. If your modeling software fitted the UV coordinates, there's a big chance it's pixel perfect. Don't do that when you intend to rescale later. I didn't go through your code, but there is a fully working (at least commonly used) code snippet for atlasing which Phantom supplied earlier in this thread. If that doesn't cut it, try padding.
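On the atlas side, the padding advice maps directly onto the second argument of PackTextures. A sketch (class and field names here are placeholders):

```csharp
using UnityEngine;

public class PaddedAtlasExample : MonoBehaviour
{
    public Texture2D[] parts; // textures to pack, assigned in the inspector

    void Start()
    {
        Texture2D atlas = new Texture2D(1024, 1024);
        // 4 pixels of padding between packed images helps avoid color bleeding
        // (and therefore visible seams) when the atlas is filtered or mipmapped.
        Rect[] rects = atlas.PackTextures(parts, 4, 1024);
    }
}
```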
Thanks for responding and apologies for the bumps; I'm not aware of any other method to keep a thread current and visible. I already tried padding the atlas but that made no difference at all. As for phantom's code, we're doing the same thing; our code is identical for all intents and purposes. One thing I can add is that when we made the mesh, we simply assigned a lambert shader with one solid color on it to the mesh's body parts; this was done to save the modeler time since I didn't care about the exact textures at the time, I just wanted the model ready for import. I presume that Maya created the default UVs. Upon import into unity, I deleted the automatically generated lambert materials and created my own set of materials using different color shades. At runtime, I take the main texture from each material and pack them into an atlas, and I combine all meshes into one, then I adjust the UVs of each mesh to refer to the appropriate subregion of the atlas. Given all that I have said, do you think that the problem is that I'm letting Maya create the default UVs or that the UV adjustment algorithm is wrong? (it's what phantom does) or perhaps something else entirely?
The only thing I can think of then is that Maya has snapped to the UV edges (all vertices have a 0.0 or 1.0 UV coord). Just try unwrapping the model (in some random way, it doesn't matter) so that it doesn't stay so close to the UV edges. Or if your modeler isn't around, just indent the mesh's UV coords a little from a Unity script before merging them into the atlas, i.e. clamp both UV coords to (0.01, 0.99). If that doesn't prove successful, at least you've ruled out rounding errors in the merge / rescaling.
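The indent suggestion could be sketched like this (`UvIndent` is a hypothetical helper, not from the thread's script): pull every UV slightly away from the 0/1 edges before atlasing, to rule out rounding errors at the borders.

```csharp
using UnityEngine;

public static class UvIndent
{
    // Clamps all UVs of a mesh into [margin, 1 - margin], e.g. margin = 0.01f.
    public static void Indent(Mesh mesh, float margin)
    {
        Vector2[] uv = mesh.uv;
        for (int i = 0; i < uv.Length; i++)
        {
            uv[i].x = Mathf.Clamp(uv[i].x, margin, 1.0f - margin);
            uv[i].y = Mathf.Clamp(uv[i].y, margin, 1.0f - margin);
        }
        mesh.uv = uv; // write the modified UVs back to the mesh
    }
}
```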
I programmatically changed all the UV coords from 0 to 0.01 and from 1 to 0.99 but no luck. Even tried 0.1 and 0.9, but still nothing. This is in addition to the texture atlas being padded by 8. I think this test effectively rules that theory out. Not sure what to do now.
I know this is an old thread, but I had the same problem with a model of mine. I tried the same method on the models from the character customization tutorial and they seemed fine. So my guess is that the issue with the seams is related to the model's unwrap. My test model was composed of cubes and spheres that weren't unwrapped, but the models from the tutorial are properly unwrapped. So make sure your character model is unwrapped in the authoring tool and try again.
Sorry for the gravedigging, but I have some big questions before I start playing around with PackTextures: - Why rule out meshes "that move"? What's the problem with movement? Is it a problem if the meshes move in world space but are static relative to each other, or only if they move in relation to each other? - I assume I can't combine meshes that move in relation to each other. Can I still combine their materials and reduce the draw call count? - What about materials that use bump mapping and an emissive map? Can I also pack those textures the same way as the diffuse map, or will it mess up the UVs? What I'm trying to achieve is the following: I have a game prototype with cars that are made up of many submeshes, almost each with its own material (only the wheels and dampers share materials, so it's around 12 distinct materials)... I would like to combine the materials of all these meshes into one, to reduce the draw call count significantly. All materials share the same shader, a bumped specular emissive shader. Now we may want to give the player the possibility to "modify" his car and make this modification visible by swapping out parts in-game, so packing everything into an atlas in Photoshop or other tools won't cut it. I would like to combine the materials at runtime (combining the meshes would also be nice, but I assume I can only combine meshes that are static in relation to each other), if this is possible somehow. Any comments? Is this possible? Thanks in advance, Gian-Reto
error CS0246: The type or namespace name `MeshCombineUtility' could not be found. Are you missing a using directive or an assembly reference? An API change, maybe?
I have the same problem, not to mention that I can't download the attached texturepacker_306.cs file; I get a sign-in error.
The API says: "If makeNoLongerReadable is true then the texture will be marked as no longer readable and memory will be freed after uploading to the GPU. By default makeNoLongerReadable is set to false." So setting this to true should result in a difference in the profiler, but it doesn't. Bug?
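For context, the flag in question is the last argument of PackTextures. A sketch (placeholder names):

```csharp
using UnityEngine;

public class NoLongerReadableExample : MonoBehaviour
{
    public Texture2D[] parts; // textures to pack, assigned in the inspector

    void Start()
    {
        Texture2D atlas = new Texture2D(1024, 1024);
        // The final "true" is makeNoLongerReadable: the system-memory copy is
        // freed after upload, and the atlas can no longer be read back with
        // GetPixels or re-packed afterwards.
        Rect[] rects = atlas.PackTextures(parts, 0, 1024, true);
    }
}
```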
From my experience you won't see the freed memory in the editor, but you will notice the difference when profiling the build. At least for Mesh.UploadMeshData this rule applies.