What is faster to convert float to int ? (int) randomFloat or Mathf.RoundToInt(randomFloat)

Discussion in 'Scripting' started by b4cksp4ce, Sep 16, 2014.

  1. b4cksp4ce

    b4cksp4ce

    Joined:
    Apr 13, 2011
    Posts:
    114
Hey!

    I was wondering if there's a difference between these two methods of converting a float to an int, and if there is, which one is faster?

    Thanks
     
  2. StarManta

    StarManta

    Joined:
    Oct 23, 2006
    Posts:
    8,773
    Are you doing them thousands upon thousands of times per frame? If not, they're going to be close enough to not matter.

    No one's likely to have this answer ready on the top of their head, and you can do the same tests that anyone else would. Make a script that iterates 100k times per frame doing the conversion with one method, then the other, and see if there is any difference in performance.
     
  3. secondbreakfast

    secondbreakfast

    Joined:
    Jan 5, 2013
    Posts:
    98
A cast is going to be faster, but it's negligible. Like a couple of nanoseconds. Rounding will probably add to the float and then do a cast, so it's more code. But it really doesn't make a bit of difference. You probably have much slower code in your project than this.
     
  4. KelsoMRK

    KelsoMRK

    Joined:
    Jul 18, 2010
    Posts:
    5,539
The more important question is what the intended outcome is. Casting to int will truncate the decimal portion of a float, whereas rounding will...round it. For example: casting 3.6f to int will result in 3, whereas rounding it will give you 4. Which do you want?
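A quick illustration of that difference (plain C#, no Unity dependency; Mathf.RoundToInt wraps System.Math.Round, so Math.Round is used here to stand in for it):

```csharp
using System;

class CastVsRound
{
    static void Main()
    {
        // Casting truncates (drops the decimal portion, toward zero);
        // rounding goes to the nearest integer.
        Console.WriteLine((int)3.6f);              // 3
        Console.WriteLine((int)Math.Round(3.6f));  // 4

        // The difference is most visible with negative values:
        Console.WriteLine((int)-3.6f);             // -3 (truncation toward zero)
        Console.WriteLine((int)Math.Round(-3.6f)); // -4
    }
}
```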
     
  5. b4cksp4ce

    b4cksp4ce

    Joined:
    Apr 13, 2011
    Posts:
    114
Okay, I knew how to test this, but I figured it was faster to post a thread and get an answer from someone who had already done it.

    I use this a lot, but not enough for it to matter, I think. It was just a question I was asking myself.

    I didn't know casting truncates; I thought it also rounded the float. Thanks for that input.

    Thanks for your quick responses.
     
  6. Cpt Chuckles

    Cpt Chuckles

    Joined:
    Dec 31, 2012
    Posts:
    86
    you can do rounding manually
    int muh_integer = (int) (muh_float + 0.5f);
     
  7. b4cksp4ce

    b4cksp4ce

    Joined:
    Apr 13, 2011
    Posts:
    114
    Indeed !
     
  8. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    what if muh_float is negative?

    Maybe try this one:
// Requires: using System.Runtime.CompilerServices;
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int RoundToInt(this float t) => t >= 0 ? (int)(t + 0.5f) : -(int)(0.5f - t);
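As a side note, this extension rounds midpoints away from zero, while Mathf.RoundToInt delegates to Math.Round, whose default is banker's rounding (midpoint to even), so the two disagree on exact .5 values. A small self-contained sketch of that behavior (plain C#, no Unity dependency):

```csharp
using System;
using System.Runtime.CompilerServices;

static class FloatExt
{
    // The extension from the post above: rounds midpoints away from zero.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int RoundToInt(this float t) => t >= 0 ? (int)(t + 0.5f) : -(int)(0.5f - t);
}

class MidpointDemo
{
    static void Main()
    {
        Console.WriteLine(2.5f.RoundToInt());      // 3  (half away from zero)
        Console.WriteLine((int)Math.Round(2.5f));  // 2  (half to even, like Mathf.RoundToInt)
        Console.WriteLine((-2.5f).RoundToInt());   // -3
        Console.WriteLine((int)Math.Round(-2.5f)); // -2
    }
}
```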
     
  9. halley

    halley

    Joined:
    Aug 26, 2013
    Posts:
    2,366
    Dude, this thread was dead and buried five years ago. It's all just a useless thought exercise in Premature Optimization.

    First, make the code work. Second, make the code work correctly. Third, make the code work fast. And when you are on step three, measure what is really your bottleneck and focus intently on nothing else, because everything else is a waste of time and effort. Opinions on performance without measurements are also a waste of time and effort.
     
    Nad_B, PraetorBlue, Ryiah and 2 others like this.
  10. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    I can understand and respect that you prefer slow, inefficient code. Everyone has their thing. Why would you want to be a triple-A developer anyway? Nothing wrong with throwing performance out of the window for now and letting someone else deal with it later.
    But I do not understand why you would criticize a bug fix.
When I wrote that response, my motivation was to ensure that no one looking through old topics would see the code, use it as is, and then have to deal with the unintuitive bugs that result from converting negative floats to ints.
     
    JasonBricco, BaraShiro and Antypodish like this.
  11. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,754
Other than being a necro, it is not useless. Really bad attitude.
    You don't know if readers are beginners or advanced, deep into a project, or in other circumstances, looking for any type of optimization.
    A question like the OP's is a good way to learn, besides running your own tests, and is worth asking.
    It gives an opportunity to discover other methods you weren't aware of before.
    The truth is, once a game is out, very few devs look into such optimizations, as it is often too late to go back.
     
    BaraShiro, HaruLee932001 and DavidSWu like this.
  12. SGM3

    SGM3

    Joined:
    Jun 10, 2013
    Posts:
    81
    Once you know which is better it is no longer premature optimization. It just becomes common practice. Much like bubble sort vs quick sort. Which is faster? Would you say someone that implemented quick sort is engaging in premature optimization? Nothing wrong with exploring computer science, but I suppose if you are employed to work on a project, and you spend a week doing research on optimal solutions, then there may arise a problem.
     
    SoftwareGeezers and Antypodish like this.
  13. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,965
    Great job rolling a one on your language comprehension check. Optimization is important but you need to understand what part of the code is worth optimizing and what is not. The OP mentioned that they use the operation a lot, but that they didn't think it mattered. This shows that they were just guessing.

    Always profile, both before and after, premature or otherwise, because what you think may improve the situation may not have a meaningful effect on your program, or worse it might have a negative impact, but you won't know without profiling.
     
    Last edited: May 26, 2019
    SGM3 and Antypodish like this.
  14. SGM3

    SGM3

    Joined:
    Jun 10, 2013
    Posts:
    81
Good stuff. Knew a guy who was all too quick to jump into assembly. If it's critical then maybe it's worth it, but why jump into a one-to-one dialogue with the CPU if it may not be needed?
     
  15. Zamaroht

    Zamaroht

    Joined:
    Nov 4, 2013
    Posts:
    23
    Hey, sorry to necro this.
    But I feel the accusations are unjust. In my scenario I stumbled upon this thread since I'm doing this operation thousands of times a frame (because of a crowd mechanic) and after profiling, the amount of time that Mathf.CeilToInt and .FloorToInt take is relevant.

I found that replacing Mathf.CeilToInt / .FloorToInt / .RoundToInt with my custom methods improved the performance of these calls by 50-70%.

    Careful: This may break with negative inputs. In my scenario I'm certain the values are always positive.

Code (CSharp):
    int FloorToInt(float val)
    {
        return (int)val; // cast truncates toward zero; equals floor only for val >= 0
    }

    int RoundToInt(float val)
    {
        return (int)(val + 0.5f); // positive values only
    }

    int CeilToInt(float val)
    {
        // Adding 1f unconditionally would be off by one for whole numbers
        // (e.g. 2f -> 3); compare against the truncated value instead.
        int i = (int)val;
        return val > i ? i + 1 : i;
    }
    I'm profiling this in the Editor, using a MacBook M1 Max 32/32.
     
  16. KelsoMRK

    KelsoMRK

    Joined:
    Jul 18, 2010
    Posts:
    5,539
The Unity implementations just cast the results from the System.Math library, which is implemented in the CLR, so it does make sense that doing the math yourself would be faster if you've got a high enough iteration count.

    https://github.com/Unity-Technologies/UnityCsReference/blob/master/Runtime/Export/Math/Mathf.cs
Code (csharp):
    // Returns the smallest integer greater to or equal to /f/.
    public static int CeilToInt(float f) { return (int)Math.Ceiling(f); }

    // Returns the largest integer smaller to or equal to /f/.
    public static int FloorToInt(float f) { return (int)Math.Floor(f); }

    // Returns /f/ rounded to the nearest integer.
    public static int RoundToInt(float f) { return (int)Math.Round(f); }
     
    BaraShiro and Zamaroht like this.
  17. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
    Disagree. As does Mike Acton

    "...any one who says premature optimisation right now can leave the room - that is the most abused quote of all time."

    Where's Mike Acton now? Directing development of DOTS at Unity. ;)

Optimising code down the line, after making it work, can leave you with developer-friendly code that hardware just can't run fast and that's impossible to make run fast after the fact. Understanding architecture and how computers process your code, and employing best practices all the way through, is the only sane way to make performant code. Any applicable knowledge towards that is a Good Thing; if it's not particularly impactful (casting versus rounding, as opposed to large-scale data structures and algorithms), you are free to eschew it for development reasons, but it's far better to know up front, no?
     
    Last edited: Nov 27, 2022
    JasonBricco likes this.
  18. StarManta

    StarManta

    Joined:
    Oct 23, 2006
    Posts:
    8,773
    Context is king. Are you developing a system that will be used by thousands of developers across thousands of projects, whose sole reason for existing is to execute highly optimized code? If so, then yes, you should pre-optimize, that's literally your job. Are you developing a puzzle game, whose entire performance is measured by human eyes and is running on hardware vastly overpowered to run it? You probably don't need to optimize your match-finding algorithm unless your profiling indicates you do.
     
  19. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,070
    @KelsoMRK
Since .NET 5 we have the official 32-bit MathF library at our disposal (introduced in .NET Core 2.0, which we couldn't use with Unity), and it is actually supposed to be used over Math and Mathf in 32-bit contexts. Some common 32-bit overloads, such as Abs and Sign, are still in Math, and these are likely simply aggregated by Mathf. However, I encourage everybody to switch to MathF when it comes to rounding, trigonometry, square roots, etc. As far as I can tell, Unity hasn't upgraded Mathf, and probably won't.

For anything more complex/advanced (Burst compiler, SIMD, matrices, shader-like syntax), use the Unity.Mathematics package.
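For illustration, a minimal sketch of the System.MathF usage being suggested, assuming a runtime where it's available (.NET Core 2.0+, .NET 5+, or Unity's .NET Standard 2.1 profile):

```csharp
using System;

class MathFDemo
{
    static void Main()
    {
        // MathF operates natively on 32-bit floats, avoiding the
        // float -> double -> float round trip that System.Math incurs.
        Console.WriteLine(MathF.Sqrt(2f));           // single-precision square root
        Console.WriteLine(MathF.Sin(MathF.PI / 2f)); // ~1
        Console.WriteLine((int)MathF.Round(2.5f));   // 2 (midpoint-to-even, like Math.Round)
    }
}
```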

    I find your advice very reasonable. Still, there is plenty of grey area. I fully expect such a "puzzle game" to be demolished by modern computers, so it should still be engineered in such a way that feels sufficient, not below that, so it's quite an elusive argument, to try and specify what "sufficient" means.

In other words, each project has several zones of code base quality, and of course there is always a tiny sweet-spot sliver sitting somewhere between the distasteful waste of dev time on one end and the grotesque waste of electrical energy that calls itself software on the other. Thus the argument scales accordingly. Nobody expects a match-3 game to run expensive algorithms in real time, but many people would be right to expect it to work on their fridge display, with all the bells and whistles. The expectations grow in inverse proportion to the hardware cost.

And quite frankly, of all the products I've seen, none of them actually suffer from premature optimization, but most of them suffer from being under-developed. That could be survivorship bias -- i.e. the too-optimized ones rarely ship. Therefore, though this is the kind of old-school advice everybody regurgitates, I'm not sure what the warning against it accomplishes.

    Let's consider two opposite scenarios:
    a) If the dev doesn't feel that the optimization is 'premature' and likes to work on things that don't matter, this advice won't help him ship anyways, because the dev will struggle to keep this going on full cycle. And b) if the dev knows better, he will aim for the sweet spot (and more or less fail due to market constraints) regardless.

    No harm is done either way (on average), and this job is hard enough as it is.

    With all that said, I don't believe in premature optimization as a thing. I believe only in a possibility that some people do not completely understand the parameters of the system and are unable or unwilling to manage dev time appropriately. It's these two things that usually contribute to someone chasing after the white rabbit. And then it always ends up being a learning process, and not actually a bad practice. Mostly because in the regular freelance work you don't have much time to do this regardless. Maybe this was a thing historically.

Finally, the reason this advice is so popular is that production managers use it to disarm and deflect what they see as issues in commercial (budget-based) production: 1) juniors actually chasing after the white rabbit, 2) seniors actually calling for legit refactoring. So they earn bonuses by "saving the company money" and then write books about it. That's all there is to it.

    It's an oxymoron btw. If it's an optimization, then it can never be premature. So it's probably not an optimization anyway. Something's off.
     
  20. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,070
    Btw, I love how this subject somehow polarizes people. It's crazy. I can see good arguments on both ends of the conversation ever since this was necroed.
     
  21. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,520
I find that agonizing about performance falls under The Programmer's Creed:

    We do these things not because they are easy, but because we thought they WOULD be easy.

    It's far too common to hear "OMG comparing strings is baaaaaaaaad!" and then they convert their super-easy-to-read program that uses strings into some weird GUID-based system that introduces 27 new potential build steps and possibilities for errors and bookkeeping mistakes and then come posting here asking why it isn't faster and now they have no idea why GUID 37489234ef33421f55 is not their health potion anymore.

    Since we have raised the dead, I'll post my optimization blurb:

    DO NOT OPTIMIZE "JUST BECAUSE..." If you don't have a problem, DO NOT OPTIMIZE!

    If you DO have a problem, there is only ONE way to find out. Always start by using the profiler:

    Window -> Analysis -> Profiler

    Failure to use the profiler first means you're just guessing, making a mess of your code for no good reason.

    Not only that but performance on platform A will likely be completely different than platform B. Test on the platform(s) that you care about, and test to the extent that it is worth your effort, and no more.

    https://forum.unity.com/threads/is-...ng-square-roots-in-2021.1111063/#post-7148770

    Remember that optimized code is ALWAYS harder to work with and more brittle, making subsequent feature development difficult or impossible, or incurring massive technical debt on future development.

    Notes on optimizing UnityEngine.UI setups:

    https://forum.unity.com/threads/how...form-data-into-an-array.1134520/#post-7289413

    At a minimum you want to clearly understand what performance issues you are having:

    - running too slowly?
    - loading too slowly?
    - using too much runtime memory?
    - final bundle too large?
    - too much network traffic?
    - something else?

    If you are unable to engage the profiler, then your next solution is gross guessing changes, such as "reimport all textures as 32x32 tiny textures" or "replace some complex 3D objects with cubes/capsules" to try and figure out what is bogging you down.

    Each experiment you do may give you intel about what is causing the performance issue that you identified. More importantly let you eliminate candidates for optimization. For instance if you swap out your biggest textures with 32x32 stamps and you STILL have a problem, you may be able to eliminate textures as an issue and move onto something else.

    This sort of speculative optimization assumes you're properly using source control so it takes one click to revert to the way your project was before if there is no improvement, while carefully making notes about what you have tried and more importantly what results it has had.
     
    KelsoMRK likes this.
  22. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
    And my two cents, it's never too early to think about optimisation when it makes sense to. Don't avoid optimisation just because you're early in development. Looking ahead for when you'll need to worry about performance and factoring that in ASAP is the best way to have trouble-free optimisation.

In short, the problem isn't premature optimisation but overly aggressive optimisation. There's nothing wrong with optimising early when it makes sense, but you don't need to optimise absolutely everything to its best version as you go.

    Some of the fundamentals in Unity are really slow and you need to design around them. If you just build your game with naive instantiation and garbage collection etc., you'll have a lot of work tidying all that up into pooled objects, yada yada, that could be avoided by understanding and applying best practices at the earliest opportunity. Refactoring code can be a long, complicated process, so you're better off avoiding it.
     
  23. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
    This isn't about whether you need to optimise or not, but when. If you identify early on that you'll have a slow system, it doesn't hurt to optimise early to get a faster option as opposed to rolling out the slow method and then having to refactor.

Acton's point is about when in the development cycle you need to be considering your performance, and that's not when the game is completed, running at 17 fps, and you need to make it faster. ;)
     
  24. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,070
    I agree with this.
    And I agree with this.
    Yet people can't agree.

    I think the fallacy stems from the fact that "optimization" is not the appropriate word. Optimization should always come after the necessary solution has been made. If you're optimizing too early, what are you optimizing? That's not an optimization, that's a design concept that hinges on some assumptions of the underlying system ("This will run slow if I don't do this", "This will bite me in the back later"). An optimization makes sure something that was proven to be suboptimal now works better ("This will run better if I change this", "This bit me, and needs to be changed").

    So on the emotional scale at least, I'd argue that regular optimizations come from 'disappointment' and 'courage' while the "premature" ones come from 'fear' and 'insecurity'. If such fears come from experience and good measures come out of it, we consider that to be a robust design, and maybe there is even an extensive pattern or composition that solidified in the industry. And if the fears were based on misunderstanding I'd insist that they weren't premature, but just immature.

    So there are many nuances to this topic, which is why people can't seem to agree, even though programming is quite a concrete science. The term 'premature optimization' is an archaism that does more harm than good, in my view.
     
    SoftwareGeezers and Antypodish like this.
  25. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
    Yes, it likely just needs to die as a term and be replaced with more accurate terms for performance-code development.
     
    orionsyndrome likes this.
  26. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,070
Honestly, people were always aiming for performance, historically even more so than today. It's just that with contemporary hardware you a) cannot possibly fathom all the "encapsulated" complexities, and most people b) can get away with mediocre solutions most of the time. I still remember Chris Sawyer making both Transport Tycoons, and RollerCoaster Tycoon, nearly entirely in pure assembly. Almost nobody wants to talk about this feat of human engineering, and not only was he able to ship, all of his games were a major market success and are influential even today.

    Not to mention that John Carmack himself was basically a king of premature optimization.

The phrase "premature optimization" itself goes back to Knuth in the 1970s, but it became the ubiquitous adage somewhere in the transitional period (2000-2010), where we had old-school programmers catching up with the modern paradigms and APIs, typically in C++. They were still thinking in the low-level ways of yesterday, but in modern, complex environments it's nearly impossible to get anything done with that mindset. And as I said, that was the big adage for middle management to streamline the development process and reduce the costs of development.

The hidden message is basically to fire or disempower anyone who exhibits the desire to think about engineering before acting on the design book and implement-implement-implement in an agile, sprint-like manner. A decade later, a completely different set of advice would try to calm the situation down, because we were collectively treading a very dangerous path. Hence Mike Acton in your post saying what he says, but there are many more people standing up against the tyranny of "prescribed programming" and insane management completely taking over, as if that kind of thing were good for coming up with capable products.

    Capable products are made by smart people, and that's a catch 22, because you can tell they're smart only after you see their field results, so if they go after any kind of dev process, premature or not, who the hell can tell it's premature if not them? And there is no management policy that can help with that. The best they can hope for is to weed out the C players in large teams, but that's a different problem altogether.
     
    SoftwareGeezers likes this.
  27. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
Truth is, the issue isn't premature optimisation but unnecessary optimisation. Once you have necessary optimisation, the sooner you get it in, the better. And necessary optimisation is optimisation proven to matter, which may be obvious from an algorithm early on, or only come up in profiling later.

    The above criticism of the OP as 'premature optimisation' is then, in truth, a matter of unnecessary optimisation, since there wasn't good evidence of a need to know the performance difference, especially when the behaviour changes based on conversion strategy.

    (As an aside, I never knew the cast truncated so I learnt something valuable!)
     
  28. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,070
True, but then you're just moving from one mode of categorization to another. Who is to judge what is necessary and what isn't, if not the dev/engineer? If it's obviously unnecessary then it's unnecessary, but what stops anyone from judging that nearly all meaningful work by a dev is unnecessary?

    If we judge by time spent or by features unimplemented, and that's all a non-dev can evaluate, then indeed, nearly all meaningful work is unnecessary. That algorithm which makes things run in O(logN)? Unnecessary. That graphical system that automates the Z order? Unnecessary. That editor which would help us save time in the long run? Unnecessary. That gloomy programmer with a good track record, who is constantly negative when he's told to start working on something that shows instead of always fiddling around with that invisible stuff? Unnecessary, better to have someone aloof but going along.

That's quite a dangerous path, and exactly what happened in the industry. The issue, frankly, lies in the fact that non-devs are after the time and money, and it's extremely hard to delineate when a programmer has struck a dead zone but is doing important work, and when he's just slacking but pretending the results are "just around the corner".

    Therefore, a policy.

Aside: All downcasts truncate to whatever the target type can hold. The language mandates that truncating conversions be explicit, while upcasting and compatible casting is allowed to be implicit! Unity doesn't actually obey this when you downcast from Vector3 to Vector2, because someone declared that cast as implicit. Btw, Math.Truncate is only useful when you actually want to keep the floating-point type.