
colored speculars and bump map

Discussion in 'iOS and tvOS' started by bumba, Nov 10, 2008.

  1. bumba


    Joined:
    Oct 10, 2008
    Posts:
    358
    Does the engine support colored speculars? And which sort of bump maps (DOT3 or grayscale info)? Are bump maps good for performance, compared to using highly detailed models?
     
  2. the_motionblur


    Joined:
    Mar 4, 2008
    Posts:
    1,774
  3. Aras


    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    The iPhone's GPU is quite simple, in fact - very comparable to the original GeForce or RIVA TNT2. That is, two texture stages and some combine operations between them.

    Yes, the iPhone's GPU can do a DOT3 operation in the combiner. So in theory you can use object-space normal maps. But it's not fast (hey, it's a phone!)
     
  4. ReJ


    Unity Technologies

    Joined:
    Nov 1, 2008
    Posts:
    378
    It depends on the number of polygons you plan for your model. Also, vertex processing and rasterization run in parallel on the iPhone - there would be hardly any gain in reducing the number of vertices in the scene from, say, 7K to 2.5K and using normal maps instead.

    You should keep in mind that normal maps suffer a lot from PVRTC compression (read: useless). You should (almost always) avoid uncompressed textures due to the relatively small texture cache on the GPU and the relatively limited memory bandwidth of the phone.

    The iPhone GPU supports only object/world-space normal maps, so you can use them only on static meshes right now. We might add normal-map support for skinned meshes later on.

    And reminder: it's a phone after all!
     
  5. bumba


    Joined:
    Oct 10, 2008
    Posts:
    358
    Are normal maps and bump maps the same? ... So you advise against using grayscale bump maps?
     
  6. ReJ


    Unity Technologies

    Joined:
    Nov 1, 2008
    Posts:
    378
    Yes. "Normal map" is just a more specific term, meaning that normals are stored in the texture (not a grayscale heightmap) and that it will be used for a per-pixel DOT3 operation.
     
  7. bumba


    Joined:
    Oct 10, 2008
    Posts:
    358
    OK, thanks. But grayscale bump maps do not decrease performance, right?
     
  8. ReJ


    Unity Technologies

    Joined:
    Nov 1, 2008
    Posts:
    378
    Grayscale bump maps are not supported by the hardware. They will be converted to normal maps by Unity.
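    As a side note on what that conversion involves: the usual approach is to take central differences of the grayscale heights and pack the resulting normal into the 0..1 color range. The sketch below is an illustration of that idea only, not Unity's actual importer code, and the names are made up.

    Code (csharp):
    using UnityEngine;

    public static class BumpToNormal
    {
        // Convert one texel of a grayscale heightmap into a normal-map color.
        public static Color HeightToNormal(float[,] height, int x, int y, float strength)
        {
            int w = height.GetLength(0);
            int h = height.GetLength(1);
            // Slope of the height field in x and y (wrapping at the edges)
            float dx = height[(x + 1) % w, y] - height[(x - 1 + w) % w, y];
            float dy = height[x, (y + 1) % h] - height[x, (y - 1 + h) % h];
            Vector3 n = new Vector3(-dx * strength, -dy * strength, 1.0f).normalized;
            // Remap each component from -1..1 into the 0..1 color range
            return new Color(n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f, 1.0f);
        }
    }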
     
  9. bumba


    Joined:
    Oct 10, 2008
    Posts:
    358
    OK... I read on the forum that you should use bump maps to increase performance. But if they get converted to normal maps, that won't increase performance, right?
     
  10. jbud


    Joined:
    Jan 6, 2009
    Posts:
    49
    Hi Everyone

    I'm just starting off with Unity for iPhone and so far it's been awesome.

    To clarify on this topic: can I use a DOT3 operation to create a bump-map effect?

    Does anyone have an example of the shader code which does the DOT3 combine? I would be extremely grateful :)

    cheers
    Jonas
     
  11. Aras


    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    Here's some stuff I cooked up at the GameJam after the Unite conference: project folder

    There is some strange lighting flickering going on, though. I don't know why; I never investigated (but hey, I don't even have an iPhone!)
     
  12. Dreamora


    Joined:
    Apr 5, 2008
    Posts:
    26,601
    Are you sure that Dot3 is per pixel?
    I always thought it's per source texture pixel, so per texel on the model.
    The smaller the texture, the worse it looks.

    Also, it commonly requires vertex lights to be present, as Dot3 normal maps depend on vertex color data (unless Unity links them in through a second mesh to unlink them from the color data). Lights are a second thing you might want to avoid on their own if performance is your target; use lightmaps instead wherever possible.

    On the iPhone you can definitely skip the old "wisdom" that you can reduce polygons by using normal maps and get better performance. It's more likely that performance will further degrade.
     
  13. Aras


    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    It is a combiner operation. Texture combiners, just like pixel shaders, are calculated for each pixel rendered.
    Naturally. This is true for any form of normal mapping. And it's true for regular textures as well!

    Why? It does Dot3 per pixel. Most often this is done using the texture and a constant that represents the light direction. I see no vertex colors involved here.

    Dot3 does not use classical "lights". There is no vertex lighting. It does dot3 between a texture texel and a constant which is set from a script.

    Yes, you are correct that using Dot3 on the iPhone is not very practical. But not because of vertex colors or vertex lighting. I'd say it's because PVRTC-compressed normal maps look really bad, and using uncompressed textures is really slow.
     
  14. Dreamora


    Joined:
    Apr 5, 2008
    Posts:
    26,601
    Hmm, per pixel... wait, does that mean you do all that stuff on the iPhone CPU, not through GL texture blending ops??
    I assumed that this only happens for operations that are written in ShaderLab to be per pixel, and that stuff like simple multiplicative / additive combines is handled through hardware blending ops?!

    But yeah, doing it through the CPU removes the need to use the vertex pipeline, as you can inject any data you want.
     
  15. Aras


    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    I'm not sure what you mean by "all that stuff"...

    Did you look at what my Dot3 sample project does? It's like this:

    * On the GPU, for each pixel: dot3(texture,constant)
    * On the CPU, once each frame: set the constant

    There's no vertex lighting needed because, well, it's not used in the shader. It only uses the texture and the constant.
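    As an illustration of that split, here is a minimal sketch of the script side in C#. Assumptions: a directional light, an object-space normal map, and a shader exposing a color property named _DotLightDirection (as in the shader posted later in this thread); the class and field names are made up for illustration.

    Code (csharp):
    using UnityEngine;

    // Sets the combiner constant once per frame, as described above.
    public class SetDotLightDirection : MonoBehaviour
    {
        public Transform lightTransform;   // assumed to be a directional light

        void Update()
        {
            // Direction from the surface toward the light, expressed in object space
            Vector3 dir = transform.InverseTransformDirection(-lightTransform.forward).normalized;
            // Pack -1..1 into the 0..1 range a combiner constant color expects
            Color c = new Color(dir.x * 0.5f + 0.5f, dir.y * 0.5f + 0.5f, dir.z * 0.5f + 0.5f, 1.0f);
            GetComponent<Renderer>().material.SetColor("_DotLightDirection", c);
        }
    }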
     
  16. Dreamora


    Joined:
    Apr 5, 2008
    Posts:
    26,601
    In that case it actually is per texel then, not per pixel.

    Per pixel normally refers to screen pixels, and TnL can only do operations per source texture pixel, which would be per texel rendering-wise.
    Per-pixel operations, as done through shaders, do their work based on screen pixels, which is what per-pixel normal mapping / bump mapping and so on refers to.


    Or am I missing something in my reasoning here, so that I fail miserably to understand what you are trying to tell me?

    I am asking these likely annoying questions to understand where Unity puts this in the rasterization process, so I can estimate the impact it will have on the final result.
     
  17. Aras


    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    You're getting confused somewhere here.

    (Note: TnL - Transform and Lighting - is not involved with pixels or texels at all. It's the fixed-function brother of vertex shaders. I'm assuming you meant the fixed-function pixel pipeline, aka the multitexture cascade, aka texture combiners (in OpenGL speak), aka the texture stage pipeline (in D3D speak).)

    Combiner operations are just the younger brother of pixel shaders. There is no principal difference between them at all. They compute the color of the pixel. They are executed by the graphics card for each pixel that it renders. The only (real) difference between them is that pixel shaders have more capabilities.

    If someone really wanted to, they could describe the iPhone's GPU as having "pixel shaders with two instruction slots and one temporary register". See, I just made the iPhone support pixel shaders, yay!

    The GPU does no work per texture texel whatsoever.


    Unity does not do anything by itself here. It just hands the meshes, the textures and the shader setup to the GPU. The GPU processes the vertices (with TnL or vertex shaders) and rasterizes the pixels (with combiners or pixel shaders). On the iPhone, those are always TnL and combiners, respectively. But it's the same on any GPU, in fact. In any case, neither Unity nor the GPU does any work per texel.
     
  18. Dreamora


    Joined:
    Apr 5, 2008
    Posts:
    26,601
    Thank you for that in-depth explanation.

    I always assumed that things like additive blending etc. are done with the source textures noted in the ShaderLab material, that these blends happen on the GPU, and that the GPU then stretches the resulting temporary texture over the desired geometry. That would have made the blending per texel (unless I misunderstood what a texel is back when I read up on the DX7 fixed-function pipeline), so the resulting blend would be per texel, not per pixel, as it would happen independently of the rendered screen pixels.

    But it is great to hear that the combiner step happens after the texture has been mapped onto the geometry, so it can happen per pixel, which gives much smoother results than the other way round would.
     
  19. Aras


    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    That would be insanely slow and inefficient. Just think about it - each object in the scene would need to create temporary render textures so it can store the results of the blending operations. Tons of wasted memory, tons of render texture switches, tons of complexity for no good reason.
     
  20. dmorton


    Joined:
    Jan 14, 2009
    Posts:
    119
    I'm playing around with dot3, trying to figure it out.

    Is it the case that the normal of the object the texture belongs to doesn't come into play in the dot3 operation? I.e. that the dot3 combiner ONLY operates on a vector3 constant dot3'ed with the texture pixel as a vector3? This would mean that moving the constant light source is the only operation that affects the output.

    If that's the case, is there any way to get the object normal into play, e.g. by writing out the interpolated normal plus the normal map value in one pass, and then performing the dot3 operation in another pass? Could I just shove the normals into the colors array?

    Could I use a spheremap holding eye-relative normals encoded into RGB values, which are then dot3'ed with the light source - argh - this won't work either...

    Hmm.
     
  21. hula


    Joined:
    Jan 22, 2009
    Posts:
    78
    Hey there,
    I have a shader with a normal map that works with Unity Remote, but it is gone when I publish it. The gem shader goes too, actually. Has this happened to anyone else?

    Thanks!
    CP
     
  22. Eric5h5


    Volunteer Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    The iPhone doesn't do normal maps or any other pixel shaders. Unity Remote only streams video from the Unity editor. Set graphics emulation to "iPhone" to see what you'd actually get on an iPhone.

    --Eric
     
  23. hula


    Joined:
    Jan 22, 2009
    Posts:
    78
    Hmm... I definitely thought the iPhone supported normal maps?
    I just tried it in the iPhone simulator and the normal map shows. I just want to check: are you 100% sure?

    thanks,
    cp
     
  24. Eric5h5


    Volunteer Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    Yes, I am 100% sure. There's a possible method of doing normal mapping on the iPhone, but Unity doesn't support it and so far I haven't seen it in any other games either.

    --Eric
     
  25. Dreamora


    Joined:
    Apr 5, 2008
    Posts:
    26,601
    The iPhone supports Dot3 normal mapping.

    It does not, though, support bump mapping or any light-dependent type of normal mapping.

    Dot3 normal mapping depends on the normal colors found in the model / texture, depending on how your combiner is set up.
     
  26. dmorton


    Joined:
    Jan 14, 2009
    Posts:
    119
    Actually, you can do something very, very close to bump mapping in Unity. That is, bump mapping on dynamic moving meshes, such that the bump vectors are relative to the moving mesh's vertex worldspace normals (as opposed to normal mapping, in which the bump vectors are constant - if you rotate the mesh, the bump normals still point the same way).

    The Dot combiner can do (texture dot3 constant), but it can also do (texture dot3 primary), where primary is the vertex color.

    If you encode a transformed light direction vector into the vertex colors, that light direction will be interpolated and dot3'ed with the bump vector.

    The problem is that transforming the light direction vector involves a number of vector and matrix operations which are best performed in vertex-shader code (unsupported on the iPhone, for some reason) or written in assembler.

    Even so, the operation will be expensive - at best, your vertices will double or triple in cost.
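    As a rough illustration of that idea (not code from any posted project), the C# sketch below computes a per-vertex tangent-space light direction on the CPU and packs it into the mesh's color array; the shader would then use "combine texture dot3 primary". It assumes a directional light and a mesh imported with tangents, the names are made up, and - as noted above - doing this every frame on the CPU is expensive.

    Code (csharp):
    using UnityEngine;

    public class TangentSpaceLightColors : MonoBehaviour
    {
        public Transform lightTransform;   // assumed to be a directional light

        void LateUpdate()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Vector3[] normals = mesh.normals;
            Vector4[] tangents = mesh.tangents;
            Color[] colors = new Color[normals.Length];

            // Light direction in object space (pointing toward the light)
            Vector3 objLight = transform.InverseTransformDirection(-lightTransform.forward).normalized;

            for (int i = 0; i < normals.Length; i++)
            {
                Vector3 n = normals[i];
                Vector3 t = tangents[i];
                Vector3 b = Vector3.Cross(n, t) * tangents[i].w;   // bitangent

                // Express the light direction in this vertex's tangent space
                Vector3 l = new Vector3(Vector3.Dot(t, objLight),
                                        Vector3.Dot(b, objLight),
                                        Vector3.Dot(n, objLight)).normalized;

                // Pack -1..1 into the 0..1 color range so the combiner can read it
                colors[i] = new Color(l.x * 0.5f + 0.5f, l.y * 0.5f + 0.5f, l.z * 0.5f + 0.5f, 1.0f);
            }
            mesh.colors = colors;
        }
    }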
     
  27. davarus


    Joined:
    Mar 13, 2009
    Posts:
    2
    I just ran into this forum and thought I'd share some knowledge. I don't use Unity, but I am developing a quick game for the iPhone and have been playing around with DOT3 normal mapping. Here is a method for colored lighting with DOT3 normal mapping and a diffuse texture. Be warned: it is *SLOW*. This is because it requires one initial pass to seed the depth buffer and then one pass per colored light contributing to the model. It is also because it uses the blend function, which slows things down greatly on the actual iPhone. See the note on the blending voodoo below; it might be possible to avoid major pain if you only have one light illuminating one model. (You can have multiple lights, but only one will light a given model.)

    /* Setup voodoo blending operation.
    The blend function basically takes the incoming pixel color and multiplies it by the incoming alpha value, then simply adds this to the value already in the buffer. The depth function is set to GL_EQUAL, and you should already have rendered the scene to the depth buffer.

    NOTE: You can change GL_ONE to GL_ZERO if you only want one light to shine on the model. This might in theory speed things up significantly; I haven't tried. If you take that route, you can also skip seeding the depth buffer and set glDepthFunc to GL_LESS or GL_LEQUAL.
    */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_EQUAL);

    /* Setup the light vector.
    x,y,z is the vector (IN OBJECT SPACE!) to the light source
    */
    glColor4f(x,y,z,1.0);

    /* Texture Unit 0
    mod->tex_norm is the GLuint texture name bound to the normal-map texture. It should be in object space. Also, compressing the texture will make it look like total ass; even an uncompressed 16-bit texture can have noticeable artifacts.
    mod->uv is the array of texture coordinates. It is *HIGHLY* recommended not to use floats as is done here; use unsigned bytes if possible, or unsigned shorts if not.

    Texture unit 0 is set up to combine the primary color (previous step) with the normal-map texture using DOT3 in all four channels (red, green, blue, AND alpha - this is important for colored lights).
    */
    glActiveTexture(GL_TEXTURE0);
    glClientActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, mod->tex_norm);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, mod->uv);

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGBA);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);


    /* Texture Unit 1
    mod->tex_diff is the regular diffuse texture map binding for the model.
    mod->uv is the same coordinates as before. They are only duplicated here for completeness; you really should not send them twice, as this wastes bandwidth and would only be needed if you wanted different coordinates for the diffuse and normal maps (VERY unlikely, especially with object-space normal mapping). The note from texture unit 0 also applies: use unsigned bytes if possible.
    lite->diffuse is the color of the light multiplied by the attenuation factor. The attenuation factor can be calculated by doing a dot3 operation between the object's distance terms from the light and the following vector: [light constant attenuation factor, light linear attenuation factor, light quadratic attenuation factor] (the same values you'd send to glLightfv).

    This is where we do some voodoo. The texture unit, in RGB space, multiplies the constant by the diffuse texture, discarding the previous step. The alpha channel, however, is left at the default of combining the previous step with the texture unit's alpha. The texture's default alpha should be 1.0 unless you actually stored an alpha in the normal map (DON'T). So the RGB output is the diffuse texture map multiplied by the colored light, and the alpha is the actual intensity of the normal mapping. When rendered, the blend combines the alpha with the RGB, producing the correct result thanks to the previous glBlendFunc.
    */
    glActiveTexture(GL_TEXTURE1);
    glClientActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, mod->tex_diff);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, mod->uv);

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_CONSTANT);

    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, lite->diffuse);

    /* Do the deed
    Draw the actual stuff. You DO NOT and SHOULD NOT send color values or normal values per vertex.
    */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, mod->verts);
    glDrawElements(GL_TRIANGLES, mod->faces_num * 3, GL_UNSIGNED_SHORT, mod->faces);


    Some notes:

    As I said, this is painfully slow and I'm not sure of its practical value, but I list it like this because it's about as complete as you are going to get. Some things can be done to speed it up: remove the alpha-blending voodoo and have texture unit 1 multiply the diffuse RGB by the result of unit 0. This means you get no colored lighting. In addition, for that method to work you will also need to multiply the normalized object-space vector to the light by the attenuation factor. This is not 100% proper, but it does work and keeps your objects from always being fully bright from the light.

    If that wasn't slow enough for you, you can also do specular mapping. I've got two methods of doing that. In short, one method requires 3 passes but can be used as the seed pass. It involves rendering the dot3 normal map multiplied by itself using texture unit 1, and then by itself again using the SRC_ALPHA trick. Pass two is to render again using the same parameters and a blend of (GL_ZERO, GL_SRC_COLOR). The third and final specular pass is to render just like the diffuse dot3 pass outlined above, with a blend of (GL_ZERO, GL_SRC_COLOR). Be sure to run pass 0 with GL_LESS as the depth func, the next two with GL_EQUAL, and then all diffuse passes with GL_EQUAL. I've tested this and it works, but it is *VERY SLOW*. I'm about to try another method: use the built-in OpenGL lighting model to render just the specular of the object as a seed pass using GL_LESS, and then render the DOT3 on top of that as in the normal diffuse pass. That saves one pass, allows more control over the specular power, and is more accurate. I haven't tested that method yet. The first method only allows one light to cast a specular on an object; the second, in theory, might allow two.

    Finally, if for some reason you think you've got extra CPU time, you can do tangent-space normal mapping (which allows object animation) by calculating the vector to the light per vertex and sending it in via the color array. Doing that, you can even do per-vertex attenuation instead of per-object.

    I'm not sure of the practicality of any of this, as it's slow. DIRT slow. For several reasons, in no particular order: one, the normal map has to be in at least 16-bit uncompressed format. Two, the added passes (at least one per light). Three, using blending on the iPhone slows things down significantly. And the fourth I forget. (This post is getting long.)

    I realize all of this has nothing to do with Unity, but I felt like sharing somewhere and someone here might appreciate the info. Feel free to ask any questions - or better, if you have some suggestions or improvements, I'd love to hear them.
     
  28. davarus


    Joined:
    Mar 13, 2009
    Posts:
    2
    The vectors must be normalized, then multiplied by 0.5, and then 0.5 added to each component. This is because a normalized vector ranges from -1.0 to 1.0, while the color space is 0.0 to 1.0 (mapped to positive integers, either 0-255, 0-63, or 0-31).
     
  29. jdm


    Joined:
    Jan 3, 2009
    Posts:
    86
    Hey all,

    I'm trying to get dot3 normal mapping to work in my iPhone shader, and I'm getting errors whenever I try it in the editor with Graphics Emulation set to iPhone... I've tried the sample dot3 project posted, and in fact any shader that includes the line
    Code (csharp):
    combine texture dot3 constant
    gives me an error that 'no subshaders can run on this graphics card'. Can anyone who has gotten a dot3 combine to work (either the posted one or their own shader) paste their shader code here?

    This is the posted one I am trying (I also set up the script that sets the DotLightDirection)


    Code (csharp):
    Shader "Dot3" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _BumpMap ("Bump", 2D) = "white" {}
    }
    SubShader {
        Pass {
            SetTexture[_BumpMap] {
                constantColor [_DotLightDirection]
                combine texture dot3 constant
            }
            SetTexture[_MainTex] {
                combine texture * previous
            }
        }
    }
    }
    thanks!

    -jdm
     
  30. dmorton


    Joined:
    Jan 14, 2009
    Posts:
    119
    I have the same problem.

    It's an error not to have the SubShader tag in your shader file, but iPhone emulation can't handle them.

    Is there a way around this?
     
  31. Jonathan Czeck


    Joined:
    Mar 17, 2005
    Posts:
    1,713
    I believe the "flickering" is due to the dot3 operation being seemingly severely quantized - it looks like there are only 4-8 or so output values. If you look carefully at an extreme example, you can see some posterization artifacts/noise around the black areas.

    But I've got it looking wicked anyways. 8)

    The only problem has been getting the constant into the shader for each object. OnWillRenderObject and OnRenderObject don't seem to be doing the trick. :/

    Cheers,
    -Jon
     
  32. Jonathan Czeck


    Joined:
    Mar 17, 2005
    Posts:
    1,713
    The only thing to do is to not use emulation and just make your shaders iPhone-compatible in the first place, so you get closer results. The emulation doesn't really do much anyway.

    -Jon
     
  33. Jessy


    Joined:
    Jun 7, 2007
    Posts:
    7,325
    Where can I get information about what dot3 actually does?


    Edit - After some playing around with Aras's project:
    Okay, that's really simple. dot3 just takes the dot product between the RGB components of two colors.
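    For reference, the fixed-function combiner doesn't dot the raw 0..1 colors directly: per the OpenGL GL_DOT3_RGB definition, each channel is first expanded to -1..1, and the result is scaled by 4 and clamped. A small C# sketch of that math (an illustration only, not code from Aras's project):

    Code (csharp):
    using UnityEngine;

    public static class Dot3Math
    {
        // Approximate GL_DOT3_RGB combiner math: expand 0..1 channels to -1..1,
        // take the dot product, scale by 4, and clamp to 0..1.
        public static float Dot3Combine(Color a, Color b)
        {
            float d = 4.0f * ((a.r - 0.5f) * (b.r - 0.5f)
                            + (a.g - 0.5f) * (b.g - 0.5f)
                            + (a.b - 0.5f) * (b.b - 0.5f));
            return Mathf.Clamp01(d);   // the clamped result is replicated to the output channels
        }
    }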

    1. Why is it so hard to find that information on the internet?

    2. Why is it called dot3 instead of dot? Is it just to suggest that the alpha component is not taken into account?
     
  34. ReJ


    Unity Technologies

    Joined:
    Nov 1, 2008
    Posts:
    378
    Yes
     
  35. Jessy


    Joined:
    Jun 7, 2007
    Posts:
    7,325
    But, it works for the alpha channel also, by itself. So why not call it dot3or1? :p
     
  36. gateway69


    Joined:
    Feb 18, 2010
    Posts:
    94
    Anyone willing to share the iPhone normal map shader...?
     
  37. yltang


    Joined:
    Nov 20, 2010
    Posts:
    28

    This demo works fine,
    but I do not know how to generate the bump map file for this demo.
    The name is something like PNG-Garg1bodylo.NMF.nao.png;
    it seems different from the general bump map files we have seen.

    I would be very grateful if someone could teach me how to generate this bump map file.