Optimization... How does it work?

Discussion in 'General Discussion' started by Pandantly, Sep 25, 2015.

  1. Pandantly

    Pandantly

    Joined:
    Sep 23, 2015
    Posts:
    12
    As I am rather new to the world of developing video games, I often come across patches in beta/alpha games that read "Increased Optimization" or "Performance Update" or something similar. I have a decent understanding of what game optimization means (increasing performance/FPS), but what I don't know is the process developers go through to optimize their game.

    Although each game is different, in general, how does a developer go about optimizing his or her game? What are some of the most commonly used methods of increasing performance?

    Any comments are welcome! Thanks!
     
  2. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    Look at the profiler and try to improve parts that use the most time.

    --Eric
     
    schmosef, Ony, Kiwasi and 2 others like this.
  3. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,052
    What Eric said. Find your bottlenecks and improve them. Optimization is highly dependent on the game itself.
     
    Kiwasi and Ony like this.
  4. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,546
    1) Cache transforms
    2) Make sure all moving colliders have a rigid body
    3) Turn off colliders when you can. If an object only collides with the player, only enable it when it's in range of the player.
    4) Never use mesh colliders or default 2d polygon colliders. They are always overkill.
    5) Tweak the Fixed Update in the time options.
    6) Use object pools instead of Instantiate or Destroy (see the sketch below). They're not that hard and can make a HUGE difference.
    7) Culling.
    8) LOD.
    9) Watch out for loops.
    10) Don't call a function every frame if you don't need to.
    11) Asynchronous loading (loading on a separate thread/core, rather than stopping the game until something is loaded, which causes a "chug").
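    For 6), something along these lines is what I mean; just an illustrative sketch with made-up names, pay the Instantiate cost once up front and then reuse objects instead of calling Instantiate/Destroy at runtime:

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public class BulletPool : MonoBehaviour
    {
        public GameObject bulletPrefab;   // assigned in the Inspector
        public int poolSize = 50;

        readonly Queue<GameObject> pool = new Queue<GameObject>();

        void Awake()
        {
            // Pay the Instantiate cost once, up front.
            for (int i = 0; i < poolSize; i++)
            {
                GameObject bullet = Instantiate(bulletPrefab);
                bullet.SetActive(false);
                pool.Enqueue(bullet);
            }
        }

        public GameObject Spawn(Vector3 position, Quaternion rotation)
        {
            // Reuse a pooled bullet if one is available, otherwise grow the pool.
            GameObject bullet = pool.Count > 0 ? pool.Dequeue() : Instantiate(bulletPrefab);
            bullet.transform.position = position;
            bullet.transform.rotation = rotation;
            bullet.SetActive(true);
            return bullet;
        }

        public void Despawn(GameObject bullet)
        {
            // Instead of Destroy(), deactivate it and keep it for next time.
            bullet.SetActive(false);
            pool.Enqueue(bullet);
        }
    }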
     
    Rodolfo-Rubens likes this.
  5. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    1) Only if it actually would make a difference; in most cases it won't.
    2) Yes.
    3) Maybe; detecting range and disabling/enabling has overhead. Profile first, since it could make things worse.
    4) No, mesh colliders are typically used for static background geometry and can't really be replaced with primitive colliders. Same for polygon colliders, but even more so since the box collider, for example, is really a polygon collider in the shape of a box. Also convex mesh colliders are relatively fast and are reasonable to use when it's not feasible to use primitive colliders. Careful about using the words "never" and "always".
    5) Maybe, if you're using physics.
    6) Yes.
    7) Automatic, unless you're talking about occlusion culling, in which case potentially yes, if your scene would actually benefit.
    8) Maybe; profile first, and it depends on whether you're GPU- or CPU-bound. GPUs are fast at drawing lots of polygons so the overhead of LOD might outweigh just drawing stuff as-is.
    9-11) Yes.

    --Eric
     
    hippocoder and Deleted User like this.
  6. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,546
    Doing a distance check and comparing it to a float seems much cheaper than checking a collider and you could do it every 10 frames, or whatever. The only exception would be if it's a sphere collider since they're essentially distance checks anyhow.

    At any rate, I did this for my project and tripled my FPS. I've also recommended it to others and they say it helped them too.
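    Roughly something like this (purely an illustrative sketch, the names are made up): check the distance every N frames and toggle the collider, using sqrMagnitude so there's no square root.

    Code (CSharp):
    using UnityEngine;

    public class ProximityCollider : MonoBehaviour
    {
        public Transform player;            // assigned in the Inspector
        public float activationRange = 10f;
        public int framesBetweenChecks = 10;

        Collider cachedCollider;

        void Awake()
        {
            cachedCollider = GetComponent<Collider>();
        }

        void Update()
        {
            // Spread the cost out: only test the distance every N frames.
            if (Time.frameCount % framesBetweenChecks != 0)
                return;

            // sqrMagnitude avoids the square root of a full distance check.
            float sqrRange = activationRange * activationRange;
            bool inRange = (player.position - transform.position).sqrMagnitude < sqrRange;

            if (cachedCollider.enabled != inRange)
                cachedCollider.enabled = inRange;
        }
    }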

    Oh, I would make my own mesh rather than a primitive if need be. I meant in context of, say, a 10k poly mesh such as a car.

    And I'm not sure what you mean by "static background geometry"? Do you mean objects that don't move, such as cliff faces or buildings?

    EDIT: I just re-read that and I sound really arrogant. For the record, Eric knows WAY more than me, and I meant to be inquisitive, not challenging what he said.
     
    Last edited: Sep 25, 2015
  7. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Sometimes. PhysX uses some very clever tricks to reduce the cost of collider checking. And turning colliders on and off is not free.
     
  8. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,546
    Really? I thought that would be really cheap.
     
  9. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    My tips, in approximate order. Except for number one. Always profile first, last and in between. Each step gets more difficult, so you want to do the later steps on as little code as possible.

    Note these tips are about code optimisation, and only make sense if that is your bottleneck.
    1. Profile
    2. Fix any obvious issues (methods called thousands of times a frame, methods allocating MB of garbage, single-frame spikes)
    3. Profile
    4. Optimise your overall systems by junking anything that doesn't need to be in the game
    5. Profile
    6. Optimise expensive systems. (Reduce the tick frequency, change algorithms)
    7. Profile
    8. Optimise expensive algorithms. (Split processing over multiple frames, optimise data structures)
    9. Profile
    10. Optimise code (Remove expensive calls, inline functions, general micro optimisations)
    11. Profile
    12. Thread whatever is left (threading often doesn't increase performance, but it can make the frame rate appear smoother)
    13. Profile
    Combine those tips with generally not being dumb when you code and you should be fine.

    Tips to not be dumb:
    • Cache components
    • Don't throw away big data structures
    • Don't do stuff in Update that doesn't need to be done every frame (see the sketch below)
    • Think twice about frequently calling any method that's called out for performance in the docs.
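    A tiny sketch of the "cache components" and "not every frame" tips together (purely illustrative, the names are made up):

    Code (CSharp):
    using UnityEngine;

    [RequireComponent(typeof(Rigidbody))]
    public class EnemyBrain : MonoBehaviour
    {
        public float thinkInterval = 0.25f;   // run the expensive logic 4x per second

        Rigidbody cachedBody;
        float nextThinkTime;

        void Awake()
        {
            // Cache the component once instead of calling GetComponent every frame.
            cachedBody = GetComponent<Rigidbody>();
        }

        void Update()
        {
            // Only do the expensive work a few times per second, not every frame.
            if (Time.time < nextThinkTime)
                return;
            nextThinkTime = Time.time + thinkInterval;

            Think();
        }

        void Think()
        {
            // Stand-in for pathfinding / target selection / whatever is expensive.
            cachedBody.WakeUp();
        }
    }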
     
  10. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    It really depends on the exact set up. That's why profiling is so important. Distance checks can get expensive if you have a lot of objects and they are moving fast.

    But it's not a bad thing to try, just make sure it works. As @Eric5h5 mentioned, always and never are dangerous. Unity makes so many different types of games that there is seldom an optimisation strategy that works on all of them.

    For an example check out the Unite video on Mushroom 11. Object pooling primitive colliders of various different sizes won't make a difference to most games. But it was essential for Mushroom 11.
     
    Not_Sure likes this.
  11. Velo222

    Velo222

    Joined:
    Apr 29, 2012
    Posts:
    1,437
    Practically speaking, it's what people like Not_Sure, BoredMormon, and Eric5h5 are suggesting in terms of actual code and engine setup. Conceptually, I like to think of it like signing your signature in cursive: when you're done, most people go back and cross their t's and dot their i's. Optimization is basically going back and crossing your t's and dotting your i's. In general, it's usually about making your code cleaner and your game run faster.
     
    Ony likes this.
  12. Kasko

    Kasko

    Joined:
    Mar 21, 2014
    Posts:
    72
  13. MurDocINC

    MurDocINC

    Joined:
    Apr 12, 2013
    Posts:
    265
    Use atlas textures and keep uniform scale so Unity can batch things into fewer draw calls.
     
    BrandyStarbrite, Ony and Not_Sure like this.
  14. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,546
    Yup! Forgot that one.
     
  15. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,052
    I don't know that I would call those "optimizations", as you should be doing most of that stuff in the first place. They are just good practices.

    ^ This (the profile-driven process Kiwasi describes above) is optimization.

    In addition, you can go back to assets/content/art to optimize, especially if the spec has changed since the content was created. Sometimes it is easy to "over"-create content during the early days. Go back and make sure you don't have extra geo/pixels, overly dense keys in animations, unused animations, overly large textures, etc.
     
    Not_Sure, Ony and Kiwasi like this.
  16. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Thanks! Glad I got it right. Most of my optimisation experience comes from large-scale chemical factories rather than code. In fact my current job is entirely about optimisation.

    There is a term used a lot in process optimisation called 'low hanging fruit'. It's basically a reminder to focus on the easiest optimisations first. No point making a unit op 5% faster if you can just skip it altogether.

    Understanding bottlenecks is also important. Improving a constrained unit op by 3% will give you more of a production boost than doubling the capacity of an under-utilised asset.

    Bottlenecks are also notorious for moving as you work through the optimisation process. Fortunately, profiling games in Unity is cheap; you can profile after every optimisation step. I wish I could run a full-scale profile of my factory in anything less than a month.
     
    Ryiah and Ony like this.
  17. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    If your deep profiling picks up lots of Vector operations, or methods that use them, you can unroll these into inline calculations, e.g.

    Code (CSharp):
    VectorA = VectorB + VectorC;
    This triggers a function call and can be unrolled to:

    Code (CSharp):
    VectorA.x = VectorB.x + VectorC.x;
    VectorA.y = VectorB.y + VectorC.y;
    VectorA.z = VectorB.z + VectorC.z; // not needed for 2D
    Note that Vector, Matrix, and Quaternion operations can all be candidates for this type of optimisation.

    But also profile on the target hardware to compare performance, as at least WebGL is due to be enhanced with SIMD.js (Single Instruction, Multiple Data), which could be used to speed up these operations within Unity.
     
    ZJP and Kiwasi like this.
  18. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    This is one micro optimisation that can be super powerful in the right situations. It does make your code harder to read and maintain. I've used it a couple of times to squeeze more juice out of CFD.

    But like most other micro optimisations, if you aren't making a crazy heavy amount of calls it can be a waste of time.
     
  19. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Also remember that the profiler slows the code down and adds overhead, so you can get false positives (or should that be negatives?). Make sure you compare the Self column against the Time column; Self is the time spent in the method itself, excluding the calls it makes to other methods.

    Also make sure you use both Deep and Normal profiling. Normal adds less overhead and gives a better indication of the problem areas, while Deep allows you to dig further and helps you work out what is causing the bottleneck.
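    If the built-in rows aren't granular enough, you can also wrap suspect code in your own profiler samples so they show up as their own rows (a rough sketch with a made-up script name; in newer Unity versions the Profiler class lives under UnityEngine.Profiling):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Profiling;   // older versions expose Profiler directly under UnityEngine

    public class PathfindingRunner : MonoBehaviour
    {
        void Update()
        {
            // Custom samples appear as their own rows in the CPU profiler,
            // so you can see this block's cost without deep profiling everything.
            Profiler.BeginSample("Pathfinding");
            RunPathfinding();
            Profiler.EndSample();
        }

        void RunPathfinding()
        {
            // expensive work goes here
        }
    }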
     
  20. BornGodsGame

    BornGodsGame

    Joined:
    Jun 28, 2014
    Posts:
    587
    There are generally two categories of optimization:
    1. Avoiding the really bad things while building your game. There are some big, obvious no-no's that you need to know, and they should never make it into your game in the first place. For scripting this is things like putting unnecessary stuff in the Update function, or using FindObject when there are better choices (see the sketch below). For worldbuilding, it's stuff like not having a rigidbody on things you plan to move, having unnecessary light sources, etc.

    2. Profiling and finding problems during testing or near the end of your project - Maybe you have an area of your map that has FPS issues..
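    As a quick sketch of the scripting no-no from point 1 (GameManager here is just a made-up stand-in for whatever scene object your game keeps looking up):

    Code (CSharp):
    using UnityEngine;

    // Stand-in for whatever scene object you keep looking up.
    public class GameManager : MonoBehaviour
    {
        public int Score;
    }

    public class ScoreDisplay : MonoBehaviour
    {
        GameManager gameManager;
        int lastScore;

        void Start()
        {
            // Find it once here. Calling FindObjectOfType inside Update
            // would search the whole scene every single frame.
            gameManager = FindObjectOfType<GameManager>();
        }

        void Update()
        {
            if (gameManager == null)
                return;

            // Use the cached reference instead of searching again.
            if (gameManager.Score != lastScore)
            {
                lastScore = gameManager.Score;
                // ...update the UI text here...
            }
        }
    }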

    While you are building your game you should always keep all these things in mind, but the #2 issues are usually just small tweaks that you do not have to worry about UNTIL you have a problem. There is no need to micro-optimize parts of your game that do not cause issues while you are building the game. Wait until closer to the end to see if there is even a problem.

    I'll give you an example. Everyone says character models should be 10k tris or less as a ballpark figure. Let's say I have a really awesome creature I want to put into my game that has 100k tris... way more than what is recommended. So by most people's thinking I should probably have an artist redo the model and reduce the tris... and I should probably have LODs... right? Well, what happens if in my game that creature is inside a big empty cave with only one light source, few particle effects, and not much really going on? When I play my game with the 100k tri creature I am easily getting 100fps while in that cave... and so if I had optimized, all I would have done is waste time and money. I would have fixed a problem that didn't exist.

    As people have mentioned in this thread, there are usually some performance costs involved in some optimizations. In a way it is two steps forward, one step back. Plus there is just the issue of your time; optimization takes time... so don't bother doing it unless you have a problem. I see a lot of people doing optimization work that isn't needed... preventive medicine if you will... but that costs time and it is generally the least fun part of making a game.

    Most important: learn the big no-nos and never let them get into your game. Then make your game.
     
    Ryiah, Deleted User and Kiwasi like this.
  21. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Another optimization issue in Unity is its garbage collection. You have no control over this sub-system, and it can kick in at any time, triggering performance stalls. But you can look out for memory allocations in the profiler and try to reduce them.

    Also, as mentioned previously, you can recycle frequently used but disposable objects (e.g. enemies, bullets, effects) using a pool to store unused elements and reuse them when needed. This saves the creation/instantiation cost and the memory churn from destroying elements.
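    One common source of those allocations is building new collections every frame; a small sketch of reusing preallocated buffers instead (the script and numbers are made up):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public class NearbyEnemyScanner : MonoBehaviour
    {
        public float radius = 15f;

        // Reused every frame instead of allocating a new list (and new garbage) each time.
        readonly List<Transform> nearbyEnemies = new List<Transform>(64);
        readonly Collider[] hitBuffer = new Collider[64];

        void Update()
        {
            nearbyEnemies.Clear();

            // The NonAlloc overload writes into our preallocated array,
            // so it doesn't generate garbage the way Physics.OverlapSphere does.
            int hits = Physics.OverlapSphereNonAlloc(transform.position, radius, hitBuffer);
            for (int i = 0; i < hits; i++)
            {
                nearbyEnemies.Add(hitBuffer[i].transform);
            }
        }
    }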
     
    Not_Sure and Kiwasi like this.
  22. Deleted User

    Deleted User

    Guest

    Got to admit, for 3D (PC and console scaling) a lot has been missed here. I could write books on this, but here's a short sample, and this isn't engine-specific either:

    • Efficient level design, maximising occlusion culling by modular blocking. The more you see, the more you render.
    • Material complexity and types of rendering. If using a DR like Emil Persson's implementation of layered DPG, then G-buffer size can be huge (yes, I know, multi-pass). Interlacing has too many drawbacks, and using a DR and FR mix can cause issues when discrepancies occur between render pipelines. These limitations also apply to TDR, so we have some other techniques: FR+, which is rendered to a depth buffer and then uses the same tiled culling system as TDR. Good up to 2048 lights (I believe) and doesn't suffer the same issues with transparency or MSAA as DR techniques do. There's also clustered DR, which uses hierarchical culling based on an octree cell system; I don't know much about it for the moment, although it's very interesting! Point: choose your rendering solution well!
    • Don't always assume given systems are the best solution, make good use of both instancing and batching as an example.
    • Following on from the above, pick your API. With the release of new API's like DX12 draw calls may become less of an issue. Although I still wouldn't recommend being sloppy :).
    • Lighting is probably one of THE major causes of performance issues (I include shadows in this). Don't be afraid to look into other techniques: CSMs (cascaded shadow maps) are invariably heavy, and lightmapping is generally a better solution if your scene remains static. Apart from occlusion culling, this is where you'll find most gains, although in large dynamic worlds this can become counterproductive as you may accumulate a massive amount of lightmaps to process. There are other dynamic lighting and shadow techniques, as well as GI solutions, like ray-traced distance shadows for one; it may not always be a beautiful solution, but if completely dark shadows ain't your thing, IBL and LPV might be what you need.
    • Following on with material complexity, try to keep the number of transcendental math functions down and specify the precision of floating-point operations.
    • Understand the profiler. Finding the core processes which may affect "wait" times, e.g. on the GPU (memory, triangle/vertex/pixel processing) and CPU (too many DCs, transfer, shadow casters, object visibility count, etc.), is key. That's a whole topic in itself.
    • Watch out for hiccups in texture streaming / async loading meshes. It can make a scene appear slower than it actually is.
    • Post processing, especially things like SSR (if you use it). Don't forget to cycle and test.
    • Use texture compression where you can along with MIP's.
    • Research tips and tricks like using skyboxes for long distance geometry, you could render long extents to a scene capture.

    Edit: @Carve_Online, got to agree with that; don't optimise for the sake of optimising. You're wasting time and can increase production times dramatically. If performance becomes an issue then put in the additional work to fix it (for your desired min specs).
     
    Last edited by a moderator: Sep 26, 2015
    Ryiah and Kiwasi like this.
  23. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    This very much illustrates my point earlier. Optimisations depend very much on the game. Your tips would be a waste of time in my 2D simulation heavy games. Specific techniques I use would be useless in your graphics heavy, open world, 3D type game.

    (The general hints about profiling and early optimisation are useful everywhere, but specifics vary widely).
     
    Ryiah and Deleted User like this.
  24. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Agreed. Find the bottleneck first.
     
    Ony and Kiwasi like this.
  25. Deleted User

    Deleted User

    Guest

    Exactly, it's highly platform/type/device specific. For example, mobile vs. PC: some recommendations I came across said less than 700 DCs and less than 500K triangles.

    For my min specs, multiply the triangles by about 20 and then we're starting to boogie. The only LODs I even slightly care about are for foliage, and the kicker isn't triangles, it's the G-buffer cost of alpha cutout instanced thousands of times. Take the leaves off the trees and mad performance ensues, even without LODs...

    But anyway, might come in handy for people :)..
     
    Ryiah and Kiwasi like this.
  26. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    A bit more meta, but you might find it useful to look at data-driven development. The idea is to batch and sort your data before quickly processing it in a tight memory block (e.g. array/list); the aim is to have minimal conditional/branching logic.

    This works well with how CPUs actually operate and is recommended by most processor manufacturers.

    Note that to use this approach you are going to have to design your code and data structures to work in this fashion. So you could have Manager type classes that work over multiple game entities, sorting and processing their data in a data driven manner.

    As opposed to the component/object style of programming you tend to fall into using Unity.

    Check online; there is lots written on this style of programming.
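    A toy sketch of what that can look like in Unity (a hypothetical manager; a real one would also need to render or otherwise consume the results):

    Code (CSharp):
    using UnityEngine;

    // One component updates many entities stored in flat arrays,
    // rather than each entity having its own Update().
    public class BoidManager : MonoBehaviour
    {
        public int count = 1000;

        Vector3[] positions;
        Vector3[] velocities;

        void Awake()
        {
            positions = new Vector3[count];
            velocities = new Vector3[count];
            for (int i = 0; i < count; i++)
            {
                positions[i] = Random.insideUnitSphere * 50f;
                velocities[i] = Random.insideUnitSphere;
            }
        }

        void Update()
        {
            float dt = Time.deltaTime;

            // One tight, branch-free loop over contiguous data is friendly to the
            // CPU cache and easy to split across frames or threads later.
            for (int i = 0; i < count; i++)
            {
                positions[i] += velocities[i] * dt;
            }
        }
    }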
     
  27. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    I looked into DOD for Pond Wars 3D. The computational complexity of the game increases by a couple of orders of magnitude when you add in the extra dimension. Hence it was hammering the CPU on physics.

    Like most optimisations, DOD isn't all or nothing. The wave simulation would have used it (probably hidden away in its own thread). The boats and cannon fire would have worked on a more traditional component-based architecture.
     
  28. JohnnyA

    JohnnyA

    Joined:
    Apr 9, 2010
    Posts:
    5,041
    Optimisation is like ejaculation, youngsters are always doing it prematurely :)

    Edit: (Hope that's not too... ahhh... 'colourful' for the forums)
     
    Dantus, BornGodsGame and Kiwasi like this.
  29. Pandantly

    Pandantly

    Joined:
    Sep 23, 2015
    Posts:
    12
    Thanks you guys for all your responses! I just wanted to understand how it worked, and now I've got my answers! Thanks so much!
     
    Kiwasi likes this.
  30. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Oh please, that metaphor is so inaccurate; with Unity's profiler you can start optimising as soon as you have a playable prototype.
     
  31. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Youngster.
     
  32. JohnnyA

    JohnnyA

    Joined:
    Apr 9, 2010
    Posts:
    5,041
    a) It was a joke.

    But ...

    b) The whole point is that you probably shouldn't do that. You should optimise when it is needed, not when it's possible. This applies both to what you optimise (why optimise CPU if you are GPU bound?) and to when you optimise. What's the point of optimising a system in your first playable prototype? There's a good chance it will be scrapped or rewritten anyway.

    Of course some optimisations are truly beneficial; knowing which is which takes experience, which gives the grain of truth to the joke.

    * This is from the perspective of "get S*** done", as an academic activity many find optimisation very enjoyable.


    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"
    - Donald Knuth
     
  33. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I find it good to run the profiler just to help find and remove the debug code I add as I go.
     
  34. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,052
    That's a simple optimization. Set your game in a desert. ;)
     
    Ryiah, Kiwasi and Deleted User like this.
  35. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    I believe that was step 4.

    I typically just do a find-in-files if I miss a Debug.Log. My general habit is to remove debug code as soon as the debugging is done. Otherwise it just clutters up the next debugging attempt.
     
    zombiegorilla likes this.
  36. JohnnyA

    JohnnyA

    Joined:
    Apr 9, 2010
    Posts:
    5,041
    For sure, it's an art, not a science :)
     
  37. Deleted User

    Deleted User

    Guest

    Shhhh! We can't let people know... or the fabric of time might shatter!

    But seriously some of THE sloppiest code and art have come from indies who became millionaires / AAA.

    Sure, game-breaking bugs/crashes aren't acceptable, and you're not fooling anyone by using a cube to represent a rock... no, even if you put a rock texture on it, it's still a cube (no, a sphere is still a sphere too :)).

    If you can meet your min specs, spending time improving your game as a whole is always the best course of action. Nobody will see how pretty your code is in a closed game / nobody cares in the slightest as long as it runs well and does what it's supposed to.

    Time is always against you, focus on the right things.
     
    Last edited by a moderator: Sep 27, 2015
    aer0ace, Ryiah and Kiwasi like this.
  38. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    And that's why I always see simple games lagging even on my iPhone 6 when they should be able to run flawlessly on an iPhone 4. You can improve some complex algorithm later, but you should always write optimized code; 99% of the time you won't go back to it unless you are forced to, and many little bits of unoptimized code add up fast.


    Oh god, I'm starting to be happy I'm an optimization maniac who spends a lot of time planning even the simplest things in advance.
     
  39. BornGodsGame

    BornGodsGame

    Joined:
    Jun 28, 2014
    Posts:
    587
    Unless the problems are affecting your play-testing, why bother? For anything except very quick games, more than likely you are going to be fussing with stuff for a long time. You could use the profiler 25% of the way through a project, see a problem, spend time fixing that problem, and then at the 50% point do something else that would have fixed that problem anyway.

    Also, the problems you think you have early on may not be the problems you have in the end. A person does LODs for 500-tri rocks... a lot of them... and spends more than a day just on LODs for some stupid rocks. Then near the end of his project he is CPU bound... and one solution is to remove LOD components from unnecessary things.

    Other than something that prevents playtesting, there are few reasons why you should optimize early, and many reasons why you should optimize late. The biggest of which is that optimization takes time.
     
  40. JohnnyA

    JohnnyA

    Joined:
    Apr 9, 2010
    Posts:
    5,041
    I see your point but I don't really think it falls out like that. Companies either care about your battery life and optimisation for lower end devices or they don't. Those that do care can optimise early or late. What I've found is that deferring optimisation until it matters, the 'Lean' approach, is usually significantly cheaper.

    I acknowledge there is some danger that those that are somewhat on the fence might plan to optimise late and then not do it in order to meet deadlines. It's a risk I'm willing to take :)

    Why not? That just seems like a process or discipline problem. Others probably have just as much problem with the discipline of always writing optimised code. My aim is to be willing to rewrite any function, class, or system, at any time... but only if there is a real need to do so.

    Optimisation is one of the things I've spent quite a bit of time doing, both as a freelancer in Unity and in my day job architecting and developing high performance transactional systems. Small incremental issues have seldom been a problem I have run in to.

    By the way I'm not saying you shouldn't adopt simple techniques and best-practices that have little cost associated with them but at the very least you need to understand why you are doing something and what the real costs and benefits are.

    Anyway, those are my thoughts; none of this is black and white. The best approach will surely vary with the people, the project, the goals, time, etc.
     
    Ryiah and zombiegorilla like this.
  41. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Pond Wars was like this. The game ran on target platforms at the desired frame rate. So I never did any performance optimization on it. Why should I bother?

    I did end up doing a bunch of stuff to optimize developer time, like writing editor scripts to auto hook up the spring joints. Hooking up 200 spring joints by hand would have made my game load a couple of milliseconds faster, but it would have cost me hours of development time. More if I ever tweaked parameters.
     
  42. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    Well, you can make it use less battery. With mobile, it's not just about running fast enough. People tend to be unhappy when games drain their battery in half an hour.

    --Eric
     
  43. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    True.

    I should have mentioned my build target was PC.
     
  44. BornGodsGame

    BornGodsGame

    Joined:
    Jun 28, 2014
    Posts:
    587
    I think the disconnect between the two sides in this cool debate is this statement: 'not doing stupid things is not optimization' (in my opinion). Removing FindObject from Update functions is not optimization... it is just fixing a stupid mistake. There are things you should never do, so not doing them isn't really optimization :)
     
    zombiegorilla likes this.
  45. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Are the premature optimisers releasing anything? If not, perhaps they should rethink.
     
  46. BornGodsGame

    BornGodsGame

    Joined:
    Jun 28, 2014
    Posts:
    587
    I do 5 LODs for the cubes I use for prototyping.

    But I swear, it never happened to me before...
     
  47. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,657
    I broadly agree with the people saying that you shouldn't optimise until you have a problem. However, a lot of people also don't look for problems at the right time; they wait 'until the game is slow' instead of looking to see if individual components of the game are slow.

    If you start out by implementing rendering, and get 80FPS, you might think that's OK because it's well above your target framerate of 60FPS. But 80FPS means you're spending ~12.5ms just on rendering, and a target framerate of 60FPS means you have a total of 16.6ms/frame to spend - is it really going to be possible to squeeze everything else you haven't written yet into ~4ms?

    Pick a target framerate when you start, convert it to a ms/frame value, and then break that value down into a budget for each major subsystem - rendering, physics, AI, player movement, etc. Then, regularly pull up the profiler and check how you're doing against those numbers. It's like a performance take on unit testing.

    As soon as you're blowing your frame-time budgets, optimisation is no longer premature.
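    The budgeting arithmetic itself is trivial; an illustrative sketch with made-up numbers (not a recommended split):

    Code (CSharp):
    using UnityEngine;

    public static class FrameBudget
    {
        public static void Print()
        {
            float targetFps = 60f;
            float frameBudgetMs = 1000f / targetFps;   // ~16.7 ms per frame in total

            // Example split across subsystems (the numbers are game-specific guesses):
            float renderingMs = 8f;
            float physicsMs = 3f;
            float aiMs = 2f;
            float gameplayMs = 2f;
            float headroomMs = frameBudgetMs - (renderingMs + physicsMs + aiMs + gameplayMs);

            // From the example above: rendering alone at 80 FPS costs 1000/80 = 12.5 ms,
            // which leaves only ~4 ms of a 16.7 ms budget for everything else.
            Debug.Log(string.Format("Frame budget {0:F1} ms, headroom {1:F1} ms",
                frameBudgetMs, headroomMs));
        }
    }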
     
    Kiwasi, Ryiah, Arowx and 1 other person like this.
  48. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Can we quantify what we mean by premature optimizers? I think people may have different concepts for the same turn of phrase.

    For me it implies someone who does not use the profiler correctly, or who writes code that is more complex to maintain than it needs to be, in effect wasting valuable time where it is not needed.

    On the other hand, who doesn't use the profiler when working on a project? It helps highlight frame rate, and it adds a load to the game, so if you can run the game in Unity with the profiler going and still hit your target frame rate, things are going well.

    Mind you I sometimes try and get my games to run at the target frame rate in deep profile mode, just to ensure they run well on lower end hardware.
     
  49. BornGodsGame

    BornGodsGame

    Joined:
    Jun 28, 2014
    Posts:
    587
    Premature optimizer: someone who spends time optimizing something that, at launch of the game, wouldn't have mattered, either because it is no longer in the game or because another part of the game is the bottleneck and would have been even without the optimization.
     
    Kiwasi and hippocoder like this.
  50. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yes. Optimisation is OK; I do it when it matters, as I have a lot of experience. But I wouldn't bother optimising things that in theory could be slow, only things that are actually having a party on my budget. It's something you get a feel for, but above all, finishing something becomes much harder when you slip into a mindset where everything needs to be perfect or optimised. That is my definition of premature optimisation (it's a waste of your time).