General research on computer performance

Discussion in 'Editor & General Support' started by SoraK05, Jul 3, 2014.

  1. SoraK05

    Joined: Jul 3, 2014
    Posts: 3
    I wrote this e-mail and was advised to post it on the forum.
    The research gives a general overview of what to expect from performance (for gaming).


    Hello.
    I decided to do some general research on computer efficiency, also applicable to gaming, to get an overview of the general calculation requirements for games and of alternative techniques and methods. I have all the research results here; they may assist in producing more optimized results, and include a generally optimal approach (for space/time/voltage).

    I decided to send this e-mail, with a link to the research in the zip (as a PDF), to some companies/individuals involved in gaming, to perhaps assist general research on computer and gaming optimization, and to help with the expectation of higher resolutions (and effects) on lower-spec computers.

    https://www.mediafire.com/?wgx3t0g05cxwb2j

    Thanks.
     
  2. SoraK05

    Joined: Jul 3, 2014
    Posts: 3
    This has been updated with more detail in another PDF inside, demonstrating the general methods.

    More research could be done on specific polygon optimization (e.g. 6 number counts, using 3 bits at a minimum for a polygon/animation data form covering the possible angles, and optimizing from those possible angles), on effects, and on other elements. However, part of the write-up assumes pre-rendering, and that part is less relevant since existing tools can do similar things to reach the same result.



    EDIT:
    The idea is that there is a proportion between the CPU requirement and storage. You can have minimal CPU and maximum storage (everything in raw final stream form, 'preprocessed'), or the highest CPU and the least storage (the highest compression possible for the data type, 'real-time').
    Where preprocessing is done, it increases the storage requirement (which is more readily available) and lowers the CPU requirement. There is also 'general string compression', separate from any data-specific arrangement, which can reduce the size where there is a lot of repetition, as preprocessing of data will produce, to achieve a balance of CPU and storage.
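    As a minimal sketch of that proportion (the effect function, frame size and names are illustrative assumptions, not from the write-up): the 'real-time' path recomputes every pixel each frame, while the 'preprocessed' path pays the cost once and afterwards only reads back what was stored.

    ```python
    import math

    WIDTH, HEIGHT = 320, 180  # illustrative frame size

    def effect(x, y):
        # Some per-pixel effect; 'real-time' pays this cost on every frame.
        return int(127 * (1 + math.sin(x * 0.1) * math.cos(y * 0.1)))

    # Real-time: least storage, highest per-frame CPU cost.
    def render_realtime():
        return [effect(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

    # Preprocessed: the CPU cost is paid once; afterwards each frame is only a read.
    PRECOMPUTED = render_realtime()   # in practice this would live on storage

    def render_preprocessed():
        return PRECOMPUTED
    ```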

    It is possible to preprocess an entire game and its assets, compress the result heavily (not like 7z, but something much more comprehensive, described in the write-up), and have all data extracted and streamed with minimal processing rather than calculated in real time in 3D, with all effects (lighting, physics, possible animations and so on) applied during the preprocessing. That would give a full 60 fps 1080p game without a GPU, on a relatively low-clock CPU with little RAM (if any at all). It would require faster transfer speeds, such as USB 3.0, and proportionately more storage (though within a reasonable limit). It would allow an intense, advanced dev process, and the result would be light, use little power (suited to something portable), and allow 'full graphics' on lower CPU/RAM hardware.
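    A minimal sketch of the 'decode + stream' playback that paragraph implies, assuming a made-up container (a 4-byte length prefix per frame followed by zlib-compressed raw pixels; the compression described in the write-up would be different and much heavier):

    ```python
    import struct
    import zlib

    def stream_frames(path):
        # Each frame is read from storage and decompressed; nothing is computed.
        with open(path, "rb") as f:
            while True:
                header = f.read(4)
                if len(header) < 4:
                    break
                size = struct.unpack("<I", header)[0]
                yield zlib.decompress(f.read(size))  # raw pixels, ready to present

    # for pixels in stream_frames("level1_1080p.stream"):  # hypothetical file
    #     present(pixels)                                   # hypothetical display call
    ```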


    It is suggested to shift more of the load to storage and reduce the CPU load in general, considering that a higher CPU clock costs more power overall while storage is readily available (and access to it keeps getting faster). A custom CPU plus a preprocessed method and compression could in general, with enough storage, deliver 4K resolution at 60 fps with effects today, where the highest-end computer struggles to hold 30 fps in a game without very heavy effects. It could allow 'any 1080p game possible', with all effects (movement/colour) preprocessed. :) It also allows instant loading, since nothing has to be placed in RAM; everything is decoding + streaming.

    I didn't mention optimizing real-time addressing (like a polygon/voxel), since modern tools can be adjusted to preprocess; when I get around to it I can include it. That, together with some preprocessing, can achieve more of a balance at a higher clock, reduce some of the storage, and still allow 'any 1080p game possible'.


    One more thing to add to the write-up for now: for offset addressing into preprocessed data, the data can be split into addressable chunks and the engine can determine which chunks to take. This increases the file a little for the addressing, but it lets lower resolutions reuse the same data while streaming only the required chunks, and likewise areas where only parts of the data are used rather than the whole (including specific offsetting where applicable to the game and the camera possibilities).
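    A minimal sketch of that chunk addressing, assuming a hypothetical file layout (a count, then an index of offset/length pairs, then the chunk data); the engine reads only the chunks a given resolution or view needs:

    ```python
    import struct

    def read_chunks(path, wanted_ids):
        # Read the index, then seek to and read only the requested chunks.
        with open(path, "rb") as f:
            count = struct.unpack("<I", f.read(4))[0]
            index = [struct.unpack("<II", f.read(8)) for _ in range(count)]
            out = {}
            for cid in wanted_ids:
                offset, length = index[cid]
                f.seek(offset)
                out[cid] = f.read(length)   # only the required data is streamed
            return out

    # e.g. a 720p view might only need a subset of what was written for 1080p:
    # chunks = read_chunks("scene.pre", wanted_ids=[0, 2, 5])   # hypothetical file
    ```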

    I have no plan for multiple posts; I am only clarifying here, and will update the general research on real-time calculation in the future.
     

    Attached Files:

    Last edited: Jul 9, 2014
  3. SoraK05

    Joined: Jul 3, 2014
    Posts: 3
    I've done some more writing. While I don't yet have specific research on polygons (using fewer of them, considering the placement of texture data, and being specific about the structure of each angle of the polygons), I have another optimization suggestion.

    Rotation is generally considered high on the list of intensive processing, which is part of why a GPU is there to accommodate it and the address identification. The process of using polygons and a texture to locate data and build an image for a frame can be made less intensive by creating multiple pre-rotated copies of the textures, with polygon data to suit those pre-rotated textures. A similar approach applies to dynamic lighting/shadows where rotation is used, and to other effects: some data can be pre-rotated, as opposed to the method mentioned in previous posts of preprocessing all angles and compressing them together for their repetition.


    Whether a single texture is duplicated and rotated in RAM and kept in sync with the polygon data for the range of angles, so the corresponding copy is used with less processing, or the copies are made in advance, compressed, and decompressed on the fly for the appropriate texture (recommended, for less space and to compensate for cutting down the rotation calculation by using multiple pre-rotated versions), this cuts down the resources required and allows more effects. It is suggested where the GPU has spare RAM available for that particular game scene, for a lower GPU clock requirement.
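    A minimal sketch of the pre-rotation idea (the angle step, file name and Pillow usage are illustrative assumptions): the copies are baked once, and at draw time the nearest pre-baked angle is picked instead of rotating anything.

    ```python
    from PIL import Image  # assuming Pillow is available for the illustration

    ANGLE_STEP = 15  # pre-rotate in 15-degree steps; finer steps cost more storage

    def prebake_rotations(path):
        # Build the pre-rotated copies once, ahead of time (or offline).
        base = Image.open(path)
        return {a: base.rotate(a, expand=True) for a in range(0, 360, ANGLE_STEP)}

    def pick_prerotated(variants, wanted_angle):
        # At draw time, snap to the nearest baked angle instead of rotating.
        nearest = int(round(wanted_angle / ANGLE_STEP)) * ANGLE_STEP % 360
        return variants[nearest]

    # variants = prebake_rotations("crate.png")   # hypothetical texture
    # tex = pick_prerotated(variants, 47.0)       # no rotation math per frame
    ```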

    Also, in general, using polygon sets that suit a specific resolution, with multiple sets corresponding to multiple resolutions, is recommended: access is quicker when rendering an image for the frame, and the data matches what is actually necessary for the frame's dimensions. The texture image doesn't necessarily need to be resized for this, or to exist in multiple versions, since a lower polygon data entry can look at specific pixels of the texture as it is, keeping the total calculations/entries proportional to the dimensions it is meant for.
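    A minimal sketch of picking a resolution-matched polygon set (the file names and thresholds are made up for the illustration); note the texture itself is shared and left unscaled:

    ```python
    # One polygon set per target frame height, all sampling the same texture.
    MESH_SETS = {
        720:  "model_low.mesh",   # fewest polygons
        1080: "model_mid.mesh",
        2160: "model_hi.mesh",    # most polygons
    }

    def pick_mesh(frame_height):
        # Use the smallest set that still covers the output resolution.
        for height in sorted(MESH_SETS):
            if frame_height <= height:
                return MESH_SETS[height]
        return MESH_SETS[max(MESH_SETS)]

    # pick_mesh(720) -> "model_low.mesh"; the full-size texture is reused as-is.
    ```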


    I also wrote this recently, for anyone interested, as a basic computer tech write-up, 19 pages.
     

    Attached Files: