I am hoping to create a space-oriented sandbox game, and I have a rather interesting question: how do I make procedurally generated terrain on a planet (a sphere) which a person can fly down to? At the moment, space-to-planet transitions don't bother me, and I don't mind whether the terrain is visible from space; I can add terrain LODs and whatnot later. What I am asking right now is: how would I go about procedurally generating terrain (using Perlin noise), then somehow wrapping this around a sphere? I can't use the six-sided cube trick method, because my game will be multiplayer, so other people will need to see the planet, etc. Plus you need to be able to fly down onto the planet. So, ideas, suggestions? Anything is greatly appreciated.

I found this answer quite a while back. It involves the LibNoise port: http://forum.unity3d.com/threads/68764-LibNoise-Ported-to-Unity
First, the difference between Mathf.PerlinNoise and LibNoise:
Mathf.PerlinNoise: 2D noise, ranges from 0 to 1.
LibNoise: 3D noise, ranges from -1 to 1.
Now, how to make a planet: start with a cube. Each side of a cube is flat, right? However, it is made of points in 3D space, and you can spherify those points: (point.normalized * radius). Now you have the basics of a sphere, but with the gridding of a cube. For every 3D point on the sphere where the cube matches, get a 3D Perlin noise value using LibNoise, then simply add this into the equation: point.normalized * (radius + noise(point * size) * depthOfNoise)
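For anyone skimming: the displace-a-spherified-cube idea boils down to a few lines of math. Here is a minimal sketch in Python (the thread's code is C#, but the math is the same); `fake_noise3` is a deterministic stand-in for a real 3D noise module such as LibNoise's Perlin:

```python
import math

def fake_noise3(x, y, z):
    # Stand-in for a real 3D noise module (e.g. LibNoise Perlin).
    # Deterministic and bounded in [-1, 1], like LibNoise's output range.
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719)

def spherify(point, radius, noise_scale, noise_depth):
    # Normalize the cube vertex onto the unit sphere, then push it out
    # along its own direction by radius plus a noise offset:
    # point.normalized * (radius + noise(point * scale) * depth)
    length = math.sqrt(point[0] ** 2 + point[1] ** 2 + point[2] ** 2)
    n = tuple(c / length for c in point)
    h = fake_noise3(n[0] * noise_scale, n[1] * noise_scale, n[2] * noise_scale)
    r = radius + h * noise_depth
    return tuple(c * r for c in n)
```

With noise_depth = 0 this is a perfect sphere of the given radius; raising noise_depth grows mountains and valleys around it.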

Hey hey. I'm working on a procedurally generated space sim as well, and although I'm not tackling the issue of terrain generation, there's still likely some stuff on my blog which you might find useful. In particular, you might want to read the posts about quad-sphere generation and mapping, which could be used as a starting point for your terrain generation. Check it out here: http://forbiddenfunction.com/main/?cat=11 or take a look in my sig.

Very cool background generator. Is it basically a shader which creates the background, or is it true texture maps?

Cheers. It's a little of both. I pre-generate a bunch of high-resolution noise maps containing many octaves and pack them into the channels of a set of textures in an offline process; this is done offline since it takes a long time to generate. Then in-game, when a new spacebox is required, I generate some low-res, low-octave noise on the fly, which only takes a tiny fraction of the time to generate, and use it as various blending and mixing factors for the shader, which then turns the pre-generated high-res noise into space-scapes. Doing it this way, I get high-quality results and a different space-scape each time, re-using the same high-res noise maps over and over, with a generation time of under a second.

I dug out my old 4D simplex noise library from way back. I found it more efficient and more effective than Perlin when I used it in a 3D cube-environment scenario. It also has seamless capability, which I think will help. Hopefully it's good for what I am hoping to achieve. OK, so how would I go about this? Do I take the cube I already have and modify its mesh, or do I destroy the mesh, rebuild it, then re-render the object? I think the former is what you mean, and the latter would be more difficult. How would I go about modifying the cube's mesh to achieve what you described? Edit: I did come up with this, but it doesn't work. What should I fix?

vertices = vertices.normalized * (50 + noise.Noise(vertices.normalized * 100));
1) Don't add the spherify result onto the vertices.
2) As I thought, you need the 3D position of the noise in spherical proportion. (* 100 would be the scale of the noise; you may want to start smaller, like 1.0 or less.)

This thread is just what the doctor ordered. I'd contribute but all the useful bits have already been said. I'll just sit here and watch for important information.

Sorry, I am still really confused. I had a look through LibNoise and could only find Noise2D. How exactly do I do this?

Whoops, sorry. It's more like this:
Code (csharp):
var pos = vertices[i].normalized * 100;
var noise : IModule = new Billow(); // or whatever noise you want to use
vertices[i] = vertices[i].normalized * (50 + noise.GetValue(pos.x, pos.y, pos.z));
All the noise generators extend IModule, so if you check it, it only has one method: GetValue(x, y, z). Remember to put LibNoise into your Plugins folder so that JS can see it correctly.

OK, this is what I got. The result I get is a slightly morphed, VERY large cube. Here's a picture: It doesn't look like a sphere. Any ideas what's going wrong? Or any other methods of approach?

You are using a cube with 24 points... and they are all in the corners, so making it into a sphere will only make it a cube. Go into Max, or whatever 3D program, and make a cube with 32 length, width and height segments. This will give you far more points to "spherify".

Indeed, you need to increase the subdivisions of your cube. One other thing, though: although simply normalising all the verts of your cube will result in a sphere, doing it this way gives a higher density of verts on your sphere where the cube's corners used to be, and they'll be more spread out elsewhere. This results in an uneven texture resolution distribution and texture warping. Another way to do it, which gives a more even vertex distribution, is to use the mapping found here: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html
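The mapping from that mathproofs link is short enough to show directly. A sketch in Python (the actual thread code is C#): it takes a point on the surface of the [-1, 1] cube and returns a point on the unit sphere. Note the cube must span -1 to 1; a cube built from -0.5 to 0.5 needs to be scaled by 2 first.

```python
import math

def cube_to_sphere(x, y, z):
    # Mapping from mathproofs.blogspot.com "Mapping Cube to Sphere":
    # a point on the surface of the [-1, 1] cube lands exactly on the
    # unit sphere, with a much more even distribution than plain
    # normalization (no vertex bunching at the former corners).
    sx = x * math.sqrt(1.0 - y * y * 0.5 - z * z * 0.5 + y * y * z * z / 3.0)
    sy = y * math.sqrt(1.0 - z * z * 0.5 - x * x * 0.5 + z * z * x * x / 3.0)
    sz = z * math.sqrt(1.0 - x * x * 0.5 - y * y * 0.5 + x * x * y * y / 3.0)
    return (sx, sy, sz)
```

One nice property: for any point with one coordinate at ±1 (i.e. on a cube face), the result has length exactly 1, so no separate normalization step is needed.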

So, in theory, if you use a rotational value of 45 + x/numPoints * 90, you will get a number between 45 and 135 based on the angle for each of the points. Calculate your Y angle and X angle and apply them thusly:
Code (csharp):
var s = (45 + x / numPoints * 90);
var t = (45 + y / numPoints * 90);
var r = Quaternion.Euler(s, t, 0);
var p = r * Vector3.up * radius;
That would essentially do the same thing as whatever the math is on that page... LOL, sorry, I am not a good math transcriptionist.

OK, I have done two versions, a 32-vert and a 64-vert, i.e. a lower and a higher resolution. The code generated hemispheres, so I instantiated the original, parented it, then flipped it to fit underneath. For some reason, the two hemispheres connecting creates a strange mesh-tearing sort of thing at the equator. I may have to play around with the noise; I am not 100% happy with it at the moment, but it's coming along. Here's a screenshot (32-vert on the left, 64-vert on the right): Yes, I noticed that the mesh is more dense at the poles and less dense at the equator. So how exactly do I go about implementing this? This is the full code I have at the moment:
Code (csharp):
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using LibNoise.Unity;
using LibNoise.Unity.Generator;
using LibNoise.Unity.Operator;

public class TerrainGenerator : MonoBehaviour
{
    public int resolution = 512;
    public int octaves = 5;
    public bool seamless = true;
    public float frequency = 0.3f;
    public float amplitude = 5.0f;

    private Perlin noise;

    void Start()
    {
        if (gameObject.name.EndsWith("_Orig"))
        {
            noise = new Perlin();
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Vector3[] vertices = mesh.vertices;
            var i = 0;
            while (i < vertices.Length)
            {
                Vector3 pos = vertices[i].normalized * 200;
                vertices[i] = vertices[i].normalized * (float)(100 + noise.GetValue(pos.x, pos.y, pos.z) * 2);
                i++;
            }
            mesh.vertices = vertices;
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();

            Transform Hemisphere = Instantiate(transform) as Transform;
            Hemisphere.parent = transform;
            Hemisphere.Rotate(new Vector3(0, 180, 180));
        }
    }
}
The planet is coming along nicely; any suggestions or things that should be changed/added? EDIT: Is there any way to make it create a full sphere? I was just wondering, because applying a texture to fit over the whole thing would be hard (since it's broken into two hemispheres), plus trying to do shading, or whatever.

OK, so I have interpreted the math from the cube-to-sphere mapping article. Thanks voidstar <3 This is the math I got:
Code (csharp):
float sx = pos.x * Mathf.Sqrt(1.0f - pos.y * pos.y * 0.5f - pos.z * pos.z * 0.5f + pos.y * pos.y * pos.z * pos.z / 3.0f);
float sy = pos.y * Mathf.Sqrt(1.0f - pos.z * pos.z * 0.5f - pos.x * pos.x * 0.5f + pos.z * pos.z * pos.x * pos.x / 3.0f);
float sz = pos.z * Mathf.Sqrt(1.0f - pos.x * pos.x * 0.5f - pos.y * pos.y * 0.5f + pos.x * pos.x * pos.y * pos.y / 3.0f);
Vector3 p = new Vector3(sx, sy, sz);
This is my code altogether at the moment:
Code (csharp):
Debug.Log(vertices[1].normalized);
Debug.Log(vertices[1]);
while (i < vertices.Length)
{
    Vector3 pos = vertices[i];
    float sx = pos.x * Mathf.Sqrt(1.0f - pos.y * pos.y * 0.5f - pos.z * pos.z * 0.5f + pos.y * pos.y * pos.z * pos.z / 3.0f);
    float sy = pos.y * Mathf.Sqrt(1.0f - pos.z * pos.z * 0.5f - pos.x * pos.x * 0.5f + pos.z * pos.z * pos.x * pos.x / 3.0f);
    float sz = pos.z * Mathf.Sqrt(1.0f - pos.x * pos.x * 0.5f - pos.y * pos.y * 0.5f + pos.x * pos.x * pos.y * pos.y / 3.0f);
    Vector3 p = new Vector3(sx, sy, sz);
    vertices[i] = vertices[i] + p * 50;
    i++;
}
mesh.vertices = vertices;
mesh.RecalculateNormals();
mesh.RecalculateBounds();
For some reason, this does nothing. The cube makes no noticeable changes, besides the change in size due to my * 50. I have noticed something very strange, though. I added the two Debug.Log lines at the top to do some debugging: the normalized vertex returns a value (0, -0.7, 0.0), while the raw vertex ALWAYS returns (0.0, 0.0, 0.0). Is this supposed to be happening, or is something wrong?
EDIT: Apparently it has something to do with my models. I am exporting from 3ds Max 2009 using a .max file, but Unity doesn't recognize any of the vertex positions correctly; they all appear as Vector3.zero. HOWEVER, if I export to .obj, the vertex positions are recognized. Except the normalized method causes the cube to deform into a sort of two-sided, paper-like plane, and the sphere mapping (even vertex distribution), using the code above, doesn't do anything to the cube. I'll attach my model file (both .max and .obj). View attachment $PlanetCube64.rar Any ideas what's wrong?

OK, after more than a few hours of tinkering, I finally figured it out. Mind you, I am not using a premade cube; I am generating one (see the attachment). Here is the deal: let's say you are making a cube with 8 segments per side. That is 6 sides, 8 segments each, and 9x9 verts to make each side. You do this by generating a cube: X, Y and Z are determined by the direction of the side. Forward is positive Z, with X and Y controlled by the grid creator; Up is positive Y, with X and Z controlled by the grid; and so on. I did everything at 1 unit scale, so all of my values ranged from -0.5 to 0.5, and what I ended up with was a neat grid on each face of my cube. If you want to create your own mesh, here is where you start. Spherifying is this exact formula: point = point.normalized. That's it. It makes a unit sphere. To give it a radius, simply multiply by the radius you need. To make it "noisy": first, get the noise value from the original point (point.normalized) multiplied by the noiseScale; then multiply the noise by noiseAmount and add the radius:
fin = radius + noise(x * noiseScale, y * noiseScale, z * noiseScale) * noiseAmount
noiseScale refers to the unit noise scale you are sampling from the module; the bigger the number, the smaller the noise features. noiseAmount refers to the amount of noise that will appear on the surface. Now, here is where we have trouble. The final result of a "normal" segmented cube is that the edges are compressed, because there is nothing there to absorb the deformation from the spherification process. The middle points are closer to the sphere than the outer points, and so move less; the outer points are compressed to match the sphere. And the mathematical formula that has been provided here does not work for this; as a matter of fact, it is just a spherification formula.
There are actually more than a few factors that go into correcting this. I wound up tinkering with many different formulas, trying to use Sin or Cos to redistribute the points; none of it worked. What I eventually did was go back to an old standby: an AnimationCurve! This handy little gadget allows you to play with numbers and convert them into a curved representation of themselves. In this case, I had a series of numbers from -0.5 to 0.5, so all I had to do was use my AnimationCurve to get a new X or Y that would stretch certain parts out, making them more compatible with my needs. The AnimationCurve that I used is attached. And no, the previous method that I suggested never worked. As you can see from the attachments, it is rather flexible, though I am sure it could be made better.
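For reference, one well-known closed-form alternative to hand-tuning an AnimationCurve is the equal-angle remap: stretch each grid coordinate with tan() so that, after spherification, the points are evenly spaced in angle across the face. This is a substitute for (not a reproduction of) the curve asset attached above, sketched in Python:

```python
import math

def equal_angle(s):
    # Remap a face-grid coordinate s in [-0.5, 0.5] so that points are
    # evenly spaced in *angle* after projection onto the sphere.
    # Each face spans 90 degrees, so s maps to an angle in [-45, 45]
    # degrees; tan(pi/4) = 1, so the endpoints stay at +/-0.5.
    return 0.5 * math.tan(s * math.pi / 2.0)
```

Points near the face center get pushed slightly apart and points near the edges pulled in, counteracting the compression described in the post.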

The code I interpreted from that math does in fact work. However, it requires that the vertices be in certain positions, i.e. -1.0 to 1.0. I ended up creating the mesh in a 3D package, as my high-res 32-subdivided mesh would be far too tedious to build manually in code. I mean, who wants to assign 6936 verts by hand? I did it in Blender and got fantastic results. I positioned it perfectly, so the vertex positions are scaled to 1, putting them in positions of -1.0 to 1.0. I incorporated a function to switch between the normalized method and the mathematical method, and I have found that the normalized version has more noise than the mathematical method. I've been playing around with noise types, and my favorite so far is RiggedMultifractal. Here are two screenshots comparing the EXACT same parameters, but one uses the normalized method and one doesn't. Normalized method: Non-normalized method: I personally think the latter looks nicer. Opinions?

"non-normalized" is this just adjusting the vertex x,y,z according to noise? Well, it isnt really that hard to generate grids and triangles and stuff like that. Yay demo: http://lod3dx.net/Unity/Planetoid.html Also, check into anything you can find about Spore's planet creation.. good read.

"Non-Normalized" means using the sphere mapping equation, rather than point = point.normalized. I am going to experiment with procedural textures next, and some 4D Perlin noise, and some heightmaps. I also need to do some Rayleigh Atmospheric Scattering. I found a great article here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html Except I am uber bad at Shader writing... so this will be interesting.... Luckily I have found: http://forum.unity3d.com/threads/31163-Atmospheric-Scattering-In-Unity-from-GPU-Gems-2 AND EVEN BETTER! http://forum.unity3d.com/threads/12296-Atmospheric-Scattering-help However, I have no idea on how to use the syntax of the shader. I might go read up on that GPU Gems 2 article. Stay tuned! EDIT: Turns out those shaders require the planet to not rotate and it must have an origin of vector.zero. Looks like I will have to find something else.

How would I go about making a procedural texture that is applied based on the height of the vertex? I had a go at assigning the vertex colour depending on height. Here are my results: As you can see, it looks rather ugly, so I think a texture approach would be better. How would I go about doing this? (The "water" is just vertices coloured blue; when I do proper procedural texturing, the ocean floor will be sand. Then I will add a new sphere, position it, and make that the "water".)

An update, now with a sun. WATCH IN 720p! http://www.youtube.com/watch?v=pEsAyseavWs I still have to fix up this planet, though. I think the sun looks quite nice for what I have done. But yeah, any suggestions on how to do some nice procedural texturing?

I do not yet know. I suspect that you would have to capture each pixel, calculate its noise location according to its placement, then gather its replacement from another set of textures. The way I did the water was to simply clamp the noise to a water level, so there is no secondary mesh. In my mesh, I actually create 6 different objects, with a mesh for each side, and fit them together. This allows me to define the level of detail in the planet, and it also lets me keep the textures separated so I can do some extra little stuff. My problem is the same as yours: how to define texture over it. I think mine will be handled in the same area as yours; however, I want to add some differences. I want my poles to be snow-covered, I want an ice planet if the temperature is too cold, and I want atmosphere and other things too.

So, the model I exported from Blender is not supporting textures, so I may have to generate the mesh programmatically after all. How did you go about doing yours?

Pretty simply. Let's say that you have 2x2 sides (for simplicity's sake). You want to make a grid on a side: up, down, left, right or whatever. I used vectors for this, for no reason other than that it was pretty simple; it had nothing to do with math. The basics are like this:
Code (csharp):
tri = 0;
for (y = 0; y <= segments; y++) {
    for (x = 0; x <= segments; x++) {
        pos = y * (segments + 1) + x;
        // get the segment coordinate from x and y:
        s = (x / segments - 0.5);
        t = (y / segments - 0.5);
        // evaluate the segment against the anim curve I had up before:
        v = x / segments * 2.0;
        if (v > 1) v = 1 - (v - 1);
        u = curve.Evaluate(v) * 0.5;
        if (s > 0) u = -u;
        s = u;
        // do the same for t
        // then convert the position (s, t) to a directional vector
    }
}
Converting (s, t) to a direction is just a bunch of ifs: if the side is up, then the coordinates are ordered accordingly, and so on. Just remember that everything is -0.5 to 0.5, so if your side is up, the point needs to be at (s, -0.5, t). This is all derived from simple block mechanics; everything is relative to a forward cube: http://www.keithlantz.net/wp-content/uploads/2011/10/skybox_texture.jpg The center of that image is forward, as if looking at the front of the box, so forward starts in the lower left corner of that panel (that is -x, -y, +z). As you traverse to the right side, you begin in its lower left, which is (-x, -y, -z). Each facing you look at will always have one axis that stays the same: on the front facing it is always z, and that z is always positive; on the back it is again z, but always negative. (Complex, I know.) You just have to visualize a box in your head and remember that s and t are -0.5 to 0.5, so you start in the negatives. So the front, from (-x, -y, z), is (s, t, 0.5). OK, after you are done with that mess, you simply normalize the point. This is now a wonderful unit sphere.
Normals are simply the unit-sphere result, and UVs are simply new Vector2(s + 0.5, t + 0.5). You need 6 triangle indices for each square, like this:
Code (csharp):
if (x < segments && y < segments) {
    tris[tri] = pos;
    tris[tri + 1] = pos + 1;
    tris[tri + 2] = pos + nP + 1;
    tris[tri + 3] = pos;
    tris[tri + 4] = pos + nP + 1;
    tris[tri + 5] = pos + nP;
    tri += 6;
}
(Here nP is the number of points per row, i.e. segments + 1.) Once you are done with that, you simply apply them to a mesh and give it a texture or whatever else. From here, you create 6 separate sides.
OK, now a new test with two of the above concepts mixed together. Unfortunately, I could not complete it, as the shader got a little complex: http://lod3dx.net/Unity/Planetoid2.html The shader attempts to use a height map generated from the noise to do the textures. It works to a point, but I think I will just have to go back to another method to get it done.
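The whole recipe above (grid of s,t points, a fixed axis per face, normalize, UVs from s and t, two triangles per cell) fits in a few lines. A Python sketch for a single "up" face, with the AnimationCurve step left out for brevity:

```python
def build_face(segments):
    # Build one cube face as a (segments+1) x (segments+1) grid of points
    # on the "up" side of a unit cube (fixed axis +0.5), projected onto
    # the unit sphere by normalization, with UVs and triangle indices.
    n = segments + 1
    verts, uvs, tris = [], [], []
    for y in range(n):
        for x in range(n):
            s = x / segments - 0.5
            t = y / segments - 0.5
            px, py, pz = s, 0.5, t          # "up" face: y is fixed at +0.5
            length = (px * px + py * py + pz * pz) ** 0.5
            verts.append((px / length, py / length, pz / length))
            uvs.append((s + 0.5, t + 0.5))
            if x < segments and y < segments:
                pos = y * n + x
                # two triangles per grid cell, same index pattern as above
                tris += [pos, pos + 1, pos + n + 1,
                         pos, pos + n + 1, pos + n]
    return verts, uvs, tris
```

The other five faces differ only in which axis is fixed and the sign/ordering of s and t, exactly as the "bunch of ifs" in the post describes.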

Looking good. I have no knowledge of how to write shaders, so this will be my methodology of approach: first, I will obtain the values of the heightmaps by calculating the distance from the normal position to the displaced position. Then I will compare this against an array of values which determine what type of texture to use. Then the point in space on the sphere will be translated to the 2D position on the texture, and the terrain textures will be blended (as detailed in that article) to give a nice effect. ^^^ Think this is a good mode of operation? ^^^ I know I would get better performance in a shader, but I really have no shader programming knowledge. With generating the mesh, could you attach an example script for me? I'm just having a little trouble understanding (I always was a noob with meshes). Also, I don't want to use the animation curve (not yet, anyway), so would I just remove the evaluation part you mentioned? Thanks!

Sorry, had to move on to another project. Methodology: create 6 "height maps" based on the spherical theory, about 512x512 each; they are grayscale and will be your "bump" map. Create 6 RGBA splat maps, which define 4 texture spaces to be used; these are high-res maps that define the surface in the same spherical orientation. Create 1 shader that uses the splat maps, the 4 textures and the bump map to display the surface. Keys: the initial sphere must be made of 6 pieces, and the UVs must be evenly spaced across the surface (meaning the UVs shouldn't stretch the surface at the edges). All vertices must be assigned using space calculation according to their UV position, with the exception of edges: front and rear edges all come from the front texture map edges; left and right edges come from the corresponding side (front or back) so that they fit together; top and bottom edges come from the front, back, left and right edges where they meet. All normals must radiate from the center so that edges are undetectable. You say you don't know shaders — they are not really that hard. Start here: http://unity3d.com/support/documentation/Components/SL-SurfaceShaderExamples.html It is an excellent build-up of the knowledge needed to write Unity shaders. Trick: find some software that will convert a "bump" map (grayscale height map) to normal maps, as well as something to convert the normals to tangents (it is somewhere in the forums, if I remember correctly). Use tricks: with a normal map you can display your "bump" map with shading, and with the "Normal Extrusion with Vertex Modifier" section on the shader page, you can extrude the surface with the bump map to get greater detail (this is called a parallax shader). Both combined make a high-detail lit surface that would not normally be possible with gaming geometry. Couple that with 4 textures of various shading and you will get a monstrous amount of surface detail.
Other tricks: create one or more lower-res spheres to put your atmospheric effects in. Check out the shaders used in the ShadowGun demo: http://blogs.unity3d.com/2012/03/23/shadowgun-optimizing-for-mobile-sample-level/ Perhaps slowing them down and tinkering with them could yield some very nice effects.
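One possible way to fill the RGBA splat maps described above: map a normalized height to four channel weights, cross-fading between adjacent bands so neighbouring textures blend smoothly. This is a sketch of the idea only, not the exact pipeline from the post; the band thresholds are made-up placeholders (e.g. sand, grass, rock, snow):

```python
def splat_weights(height, bands=(0.0, 0.25, 0.5, 0.75)):
    # Turn a normalized height in [0, 1] into 4 RGBA splat weights.
    # Between two band thresholds, the two corresponding textures are
    # linearly cross-faded; weights always sum to 1.
    w = [0.0, 0.0, 0.0, 0.0]
    if height <= bands[0]:
        w[0] = 1.0
        return w
    for i in range(len(bands) - 1):
        if height <= bands[i + 1]:
            t = (height - bands[i]) / (bands[i + 1] - bands[i])
            w[i] = 1.0 - t
            w[i + 1] = t
            return w
    w[-1] = 1.0  # above the top band: full weight on the last texture
    return w
```

Baking this per texel into the 6 face-aligned RGBA maps gives the shader everything it needs for a 4-texture blend.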

Sorry, it's been a while since I have posted, but I am looking at ways to have very large objects in Unity. I think I need to integrate a custom coordinate system so I can achieve large planetary objects. After searching around and not finding much, I am wondering: how would I go about this? Are there any plugins for Unity I could get to increase the world size? And if I have to design one myself, how would I go about doing it?

World size is simply perspective. A house looks like a big thing to us, but to Godzilla it's a very small thing. If you had a world roughly 1000 units wide and a 2-unit person standing on top, that is still a big world, but it gets bigger if you have a 0.5-unit-tall person on a 1000-unit world: suddenly the 1000-unit world looks like 4000. It gets even better if you have a 0.001-unit ship and a 1000-unit world: that makes the 1000-unit world look like 2,000,000.

Heya guys, I wrote a long blog post about this a while ago: http://www.nullpointer.co.uk/content/procedural-planets/ You can also do it using a voxel solution: http://www.nullpointer.co.uk/content/voxel-skinning-and-virtual-textures/ Though I think most of the points have been covered in this thread already, I thought I'd link these in case they are useful to anyone.

Okay. I think, just from my experiments, my camera has made the planets look small. Can the camera perspective be changed, so that a 1 m person views things as much bigger compared to a larger object? Because when I zoom in, it still looks a bit small. Perhaps it's just my illusion of precision: what looks extremely zoomed in is actually a big distance for a 1 m person. Anyway, I'll report back when I make some progress. And thanks, tomnullpointer; any help is greatly appreciated.

Code (csharp):
private GameObject CreateSegment(Vector3 direction, int numPoints, float radius, ModuleBase noise, float noiseAmount, float noiseScale)
{
    int Points = numPoints + 1;
    Vector3[] Vertices = new Vector3[Points * Points];
    Vector2[] UVs = new Vector2[Points * Points];
    Vector3[] Normals = new Vector3[Points * Points];
    int[] VerticeArray = new int[Vertices.Length * 6];
    int TriangleIndex = 0;

    for (int i = 0; i < Points; i++)
    {
        for (int j = 0; j < Points; j++)
        {
            int Pos = j + (i * Points);
            float X = (((float) j) / ((float) numPoints)) - 0.5f;
            float Y = (((float) i) / ((float) numPoints)) - 0.5f;
            float Z = (((float) j) / ((float) numPoints)) * 2f;

            Vector3 vector = new Vector3(X, 1f, -Y);
            if (direction == -Vector3.up) { vector = new Vector3(X, -1f, Y); }
            if (direction == Vector3.forward) { vector = new Vector3(X, Y, 1f); }
            if (direction == -Vector3.forward) { vector = new Vector3(-X, Y, -1f); }
            if (direction == Vector3.right) { vector = new Vector3(-1f, Y, X); }
            if (direction == -Vector3.right) { vector = new Vector3(1f, Y, -X); }
            vector.Normalize();
            Normals[Pos] = vector;

            float NoiseValue = radius;
            if ((noise != null) && (noiseAmount > 0.0))
            {
                Vector3 TrigVector = (Vector3) (vector * noiseScale);
                float sx = TrigVector.x * Mathf.Sqrt(1.0f - TrigVector.y * TrigVector.y * 0.5f - TrigVector.z * TrigVector.z * 0.5f + TrigVector.y * TrigVector.y * TrigVector.z * TrigVector.z / 3.0f);
                float sy = TrigVector.y * Mathf.Sqrt(1.0f - TrigVector.z * TrigVector.z * 0.5f - TrigVector.x * TrigVector.x * 0.5f + TrigVector.z * TrigVector.z * TrigVector.x * TrigVector.x / 3.0f);
                float sz = TrigVector.z * Mathf.Sqrt(1.0f - TrigVector.x * TrigVector.x * 0.5f - TrigVector.y * TrigVector.y * 0.5f + TrigVector.x * TrigVector.x * TrigVector.y * TrigVector.y / 3.0f);
                Vector3 p = new Vector3(sx, sy, sz);
                NoiseValue = (float)(radius + noise.GetValue(p.x, p.y, p.z) * noiseAmount);
                NoiseValue = Mathf.Clamp(NoiseValue, (waterLevel * (radius + noiseAmount + 1.0f)), radius + noiseAmount + 1.0f);
            }

            Vertices[Pos] = (Vector3) (vector * NoiseValue);
            UVs[Pos] = new Vector2(X + 0.5f, Y + 0.5f);

            if ((j < numPoints) && (i < numPoints))
            {
                VerticeArray[TriangleIndex] = Pos;
                VerticeArray[TriangleIndex + 1] = Pos + 1;
                VerticeArray[TriangleIndex + 2] = (Pos + Points) + 1;
                VerticeArray[TriangleIndex + 3] = Pos;
                VerticeArray[TriangleIndex + 4] = (Pos + Points) + 1;
                VerticeArray[TriangleIndex + 5] = Pos + Points;
                TriangleIndex += 6;
            }
        }
    }

    GameObject HexiSphereSegment = new GameObject();
    HexiSphereSegment.transform.parent = base.transform;
    HexiSphereSegment.transform.localPosition = Vector3.zero;
    HexiSphereSegment.transform.rotation = base.transform.rotation;
    HexiSphereSegment.AddComponent<MeshRenderer>();
    MeshFilter filter = HexiSphereSegment.AddComponent<MeshFilter>();
    HexiSphereSegment.renderer.material = base.renderer.material;

    HexiSphereSegment.name = "Up";
    if (direction == -Vector3.up) { HexiSphereSegment.name = "Down"; }
    if (direction == Vector3.forward) { HexiSphereSegment.name = "Forward"; }
    if (direction == -Vector3.forward) { HexiSphereSegment.name = "Back"; }
    if (direction == Vector3.right) { HexiSphereSegment.name = "Right"; }
    if (direction == -Vector3.right) { HexiSphereSegment.name = "Left"; }

    Mesh mesh = new Mesh();
    mesh.vertices = Vertices;
    mesh.uv = UVs;
    mesh.normals = Normals;
    mesh.triangles = VerticeArray;
    mesh.RecalculateBounds();
    filter.mesh = mesh;
    return HexiSphereSegment;
}
Hey, so... I interpreted your explanation the best I could. I tried to implement the spherize method, but I'm getting some interesting results: (This image is taken without the lighting effects of the sun being cast upon the object.) As you can see, it's rather... odd. Can you see anything in my code I am overlooking or screwing up?

OK, you're getting closer. To start, in your calculations of X, Y and Z, your Z needs to always be 0.5. Since you're using 1, it is forcing the face further out than normal, so you are getting a half-size sphere.
Code (csharp):
float Z = 0.5f;
Vector3 vector = new Vector3(X, Z, -Y);
if (direction == -Vector3.up) { vector = new Vector3(X, -Z, Y); }
if (direction == Vector3.forward) { vector = new Vector3(X, Y, Z); }
if (direction == -Vector3.forward) { vector = new Vector3(-X, Y, -Z); }
if (direction == Vector3.right) { vector = new Vector3(-Z, Y, X); }
if (direction == -Vector3.right) { vector = new Vector3(Z, Y, -X); }

I have done it! And the spherize formula has successfully and uniformly spread the vertices across the whole mesh! Yay! Here are some screenies; as you can see, it's working quite well.

Now, onto writing the nice procedural texture stuff. For creating heightmaps from the terrain, how might I go about this? Should I iterate through each of the vertices, calculate the distance of that vert from its point of origin, and then somehow represent this in a grayscale bump map? I am considering experimenting with this shader: http://www.unifycommunity.com/wiki/index.php?title=TerrainFourLayer Sorry, I am just a little confused about this part.

Yeah, this is where I got hung up a bit. I ultimately created a shader that displayed each map according to texture coordinates, but based it on a height map; this wasn't so hot. I firmly believe that you need something like the 4-texture shader, but you are then going to have to line those textures up and create them all procedurally. It was a big pain in the rump when I did it; I possibly overthought it. There should be a way to reference splat masks like the Terrain asset does, though; that would be cool.

I have been trying to do this for days and making a right mess of things. Guess I'm not ready for 3D space, lol.

I've managed to generate a heightmap from code, and then use it with a four-pass shader to blend the textures together. It's turning into a pain, with texture orientation, seamlessness, etc. So I came up with another idea: what if we generate a seamless cubemap, then apply the terrain based on the cubemap? In other news, I'm working on how to do LOD. Currently I'm looking at casting a raycast every second; if the raycast hits one of the six side segments of the sphere, it calculates the distance and the appropriate LOD to use. I'm thinking that as the person gets closer, I increase the vertex count for that entire segment, but I'm not sure if this is total overkill. I need my person to be able to fly down onto the planet's surface. How could I use LODs to achieve what I want?

A simple distance calculation is all that is needed for LODs. Each frame, all you have to do is calculate the player's distance to the center of each side:
sideCenter = world.position + direction * radius;
distance = (player.position - sideCenter).magnitude;
Also, you could find the sides that are not visible like so:
dot = Vector3.Dot(world.TransformDirection(direction), Camera.main.transform.forward);
sideEnabled = (dot > -0.5f);
So the more unlike the camera's direction a side is, the more likely it is to be visible.
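Putting the two checks together, here is a rough per-side LOD/culling helper sketched in Python. The `lod_distances` thresholds are invented for illustration, and the visibility test uses one common convention (hide faces whose outward direction points along the camera's forward vector); the exact sign and threshold depend on your setup:

```python
import math

def side_lod(player_pos, planet_pos, direction, radius, cam_forward,
             lod_distances=(400.0, 200.0, 100.0)):
    # direction is the unit outward normal of one of the 6 cube faces.
    # Face center on the sphere's surface:
    center = tuple(planet_pos[i] + direction[i] * radius for i in range(3))
    dist = math.dist(player_pos, center)
    # Backface-style cull: a face roughly opposing the camera's forward
    # vector is facing the viewer (dot near -1), so keep it.
    dot = sum(direction[i] * cam_forward[i] for i in range(3))
    visible = dot < 0.5
    # Pick an LOD level from the distance: 0 = coarsest, each threshold
    # crossed adds one level of detail.
    lod = 0
    for threshold in lod_distances:
        if dist < threshold:
            lod += 1
    return visible, lod
```

For example, a player hovering 200 units above the near face would get that face at LOD 1 while the far face is culled entirely.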

I've been reading an extremely interesting article: http://tulrich.com/geekstuff/sig-notes.pdf It looks like a highly effective way to achieve the level of detail I want; however, it involves breaking the six segments up further into smaller chunks. Plus, is this even possible in Unity?

Essentially, all I have to do is subdivide the mesh and make the pieces display high detail. I'm a little confused about the methodology of LODing for this. I mean, would you just increase the vertex count for the mesh, or would you have to render entirely new noise maps?

You pre-render your geometry. Say each face segment is an 8x8 face map. You start with the entire side: simply make a single low-res 8x8 face. Then you jump to subdivision 1: you make 4 8x8 maps that each take up a quarter of the original map. Subdivision 2 is 16 8x8 maps, and so on. To handle this, you need some rules. 1: If you are in a subdivided area, never show the lower resolution from that area. 2: Everything is distance based. If you are right next to an area, the highest resolution is shown; the farther away you get, the more likely it is that a lower-res mesh is displayed. 3: Each side has an associated direction. Use Vector3.Dot to check that direction against the negative direction of the camera; if it is below zero, do not draw that mesh. Let's look at the PDF you linked. Say you are very far away from the planet: you would see the first picture. As you get closer to the world, this map splits into the second picture to display more detail (4 maps is 4 game objects). Now say you get closer to section 1 in that picture: it splits into 4 maps corresponding to the upper-left quadrant of picture 3, while the other 3 quadrants stay the same. Now move right, halfway to the next grid section: both top sections should be split. When you reach the center of the next section, the section you left would then revert back to a single section. (This is the essence of LODs.)
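The splitting rules above are essentially a distance-driven quadtree. A compact sketch in Python: a quad splits into four children while the player is within 1.5x its size (a made-up threshold) and the maximum depth hasn't been reached; the returned leaves are the quads you would actually draw:

```python
def collect_leaves(cx, cy, size, depth, player, max_depth=3):
    # cx, cy: quad center in face coordinates; size: quad edge length.
    # Split when the player is close relative to the quad's size, so
    # nearby terrain gets small high-detail quads and distant terrain
    # stays as one big low-detail quad.
    dist = ((player[0] - cx) ** 2 + (player[1] - cy) ** 2) ** 0.5
    if depth < max_depth and dist < size * 1.5:
        half, quads = size / 2, []
        for ox in (-half / 2, half / 2):
            for oy in (-half / 2, half / 2):
                quads += collect_leaves(cx + ox, cy + oy, half,
                                        depth + 1, player, max_depth)
        return quads
    return [(cx, cy, size, depth)]
```

Because children exactly tile their parent, the total covered area is always conserved, which is what keeps the surface crack-free at the coarse level (matching edge verts between neighbouring depths still has to be handled separately).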

I just stumbled across this project while searching for a solution like this, but after fiddling with the scripts and a cube made in Blender, I thought I'd better write and ask if you're willing to share a tutorial and the complete script to create the planets, since I haven't got the math skills to do the same as you guys. Well, hope you're willing to share. - Asger

I've been busy with studies recently (I'm only sixteen, and Grade 11 can be quite hectic), but I'm working on a dynamic LOD system, and it's coming along nicely. A quick run-down of how the generator works so far: it creates 8 different resolutions of the same seamless cubemap of noise (I'm still working on the seamless function), i.e. the planet can have up to 8 levels of detail. Initially, each segment has a vertex count of 32x. As the player approaches, the segment subdivides its mesh into smaller pieces, and each smaller piece increases in vertex count (a.k.a. detail) progressively as you get closer (32x, 64x, 96x... 256x, etc.). I'm still playing with the level of detail; I might increase the detail values from 8x to something MUCH higher, but I'll leave that for the tweaking stage. And of course, as you move away from each segment (or subdivided segment), the detail drops back down. As a segment divides and increases its vertex count, if you happen to get closer to, say, the upper-right quadrant of a certain subdivision, the generator first works out what resolution of noise to use, then updates the mesh of that quadrant based upon its location in the noise map. Essentially, it should work quite well for a smooth and seamless transition from space to planet. After that, I'm going to either hire someone to write some shaders for me, or knuckle down and write them myself. Stay tuned!

OK, so I'm a total noob and don't know a thing about programming (I'm an environment artist), but this is exactly what I need. Would one of you kindly help me out? Feel free to email me. Thanks. triggergamestudios@gmail.com