Float has 'too much' precision

Discussion in 'Scripting' started by Genkidevelopment, Jan 31, 2015.

  1. Genkidevelopment

    Genkidevelopment

    Joined:
    Jan 2, 2015
    Posts:
    186
    Hi

    I am running lots of equations in a fixed update; by lots I mean about 100 at the moment, and I estimate it will be closer to 500 when I am done. Although I am currently experiencing no performance issues, I am worried that too many of the calculations are using a float and as such are calculating with far too much precision.

    33.7687654 could really be more like 33.77!!! I am unable to completely drop the decimals and use an integer, as I do need a certain amount of precision...

    Am I correct in thinking the extra decimals used by a float are a potential performance problem?

    If so, can anyone advise me of a more suitable variable type?

    If there is none, I have read that you can set (by code) the number of decimal places a float is rounded to. Would this actually help, or would it simply add more work after the usual float is calculated normally?
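    What I have read about looks something like this, rounding after the fact (so I suspect it is extra work rather than less):

    Code (CSharp):
        // Rounding a float to two decimal places after the calculation.
        // The calculation itself still runs at full 32-bit precision.
        float raw = 33.7687654f;
        float rounded = Mathf.Round(raw * 100f) / 100f; // 33.77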

    Bless, peace and thanks ;)
     
  2. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    No, the operations always have the exact same performance, regardless of the values involved.
     
  3. Genkidevelopment

    Genkidevelopment

    Joined:
    Jan 2, 2015
    Posts:
    186
    Ok, thank you for teaching me that... So is the performance cost always tied to the number of bits? I see that some variable types are 32-bit, 64-bit, etc.

    Thanks again
     
  4. Graham-Dunnett

    Graham-Dunnett

    Administrator

    Joined:
    Jun 2, 2009
    Posts:
    4,287
    No, operations on a 32-bit float and a 64-bit double can be performed by modern CPUs at the same speed.
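    If you want to convince yourself, here is a rough sketch of a timing test (illustrative only; the class name is mine, results vary by CPU, and the JIT may optimize the loops differently, so run it in a build rather than the editor):

    Code (CSharp):
        using System.Diagnostics;
        using UnityEngine;

        public class FloatVsDouble : MonoBehaviour
        {
            void Start()
            {
                const int N = 10000000;
                float f = 1.0f;
                double d = 1.0;

                // Same multiply, ten million times each.
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) f *= 1.0000001f;
                sw.Stop();
                UnityEngine.Debug.Log("float:  " + sw.ElapsedMilliseconds + " ms (" + f + ")");

                sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) d *= 1.0000001;
                sw.Stop();
                UnityEngine.Debug.Log("double: " + sw.ElapsedMilliseconds + " ms (" + d + ")");
            }
        }

    On a typical desktop CPU the two loops come out close. Where doubles do cost you in real code is usually memory bandwidth, since they take twice the space.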
     
  5. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,514
    I get the feeling this is a stupid question, but... if this is so, then why aren't doubles the default? Such as for transforms.
     
  6. Genkidevelopment

    Genkidevelopment

    Joined:
    Jan 2, 2015
    Posts:
    186
    Yeah, my logic has trouble understanding that the machine can perform (1.1 * 2.0) only as quickly as (1.143287 * 1.997645), as an example!
     
  7. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Maybe mobiles still cry.
     
    Kiwasi likes this.
  8. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Yah, 64-bit isn't totally dominant yet, I guess. Also, reading 32 bits from memory is probably still faster than reading 64?

    In BlitzMax, for example, math performed on 32-bit ints is definitely faster than on any other data type. It depends on what CPUs are optimized for, and whether the numbers are in registers or the memory cache, etc.

    If you are really concerned that floats are slower than ints, you could also use fixed-point math, e.g. 16 bits for the integer part and 16 bits for the fraction. Then divide by 65536 to round down to the integer value, as in the sketch below.
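    Something like this, as an untested sketch (all names are my own invention, 16.16 format):

    Code (CSharp):
        // 16.16 fixed point: high 16 bits = integer part, low 16 bits = fraction.
        public static class Fixed16
        {
            public const int Shift = 16;
            public const int One = 1 << Shift; // 65536

            public static int FromFloat(float x) { return (int)(x * One); }
            public static float ToFloat(int fx)  { return fx / (float)One; }

            // Widen to long so the intermediate product doesn't overflow.
            public static int Mul(int a, int b)  { return (int)(((long)a * b) >> Shift); }
            public static int Div(int a, int b)  { return (int)(((long)a << Shift) / b); }
        }

        // Usage: 33.77 * 2.0 done entirely in integer math.
        // int r = Fixed16.Mul(Fixed16.FromFloat(33.77f), Fixed16.FromFloat(2f));
        // Fixed16.ToFloat(r) is about 67.54; r / 65536 rounds down to the integer part (67).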
     
  9. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    I believe many graphics cards still like 32 bit floats as well.
     
  10. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,528
    32-bit is still the standard in game development primarily because the dev pipeline of tools hasn't all come over to 64-bit. And some of them probably won't without kicking and screaming, simply because they perform better at 32-bit.

    For example, models still primarily store 32-bit floats for their data. Going 64-bit would increase the storage size of the models.

    Graphics cards do still use 32-bit.

    Graphics in general fit very well into the 32-bit space, and it's easy to just maintain a consistent data type across all of that. Of course, as we need more and more memory, we're going to need a larger bit depth to address it. GPUs are nearing the 4 gig limit pretty soon, seeing as people are already SLI'ing 2 and even 4 cards with 1 to 2 gigs of memory each; a single card with the power of one of those setups isn't too far away. Getting through that gap will be interesting though, as the base memory footprint will immediately increase as the system word size doubles.
     
  11. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,514
    So, basically, 64-bit is expected to be the standard in a few years, once the appropriate hardware/software catches up, but it can't happen until it's more standardized across the board.