Unity UI: What is better for consoles/HUDs, an in-world canvas or render to texture?

Discussion in 'UGUI & TextMesh Pro' started by Arowx, Apr 16, 2017.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    So I'm working on a game with a console display, and I was initially thinking of using a world-space UI, but would a render to texture be more performant?

    • With a world-space canvas, the Canvas keeps generating draw calls every frame.
    • With a render to texture, if the information is static or updates less often than the framerate, you can trigger a render only when the UI changes (as sketched below), although pointer interaction with the UI becomes more complex.
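    A minimal sketch of the on-demand approach, assuming a separate UI camera that renders the canvas into a RenderTexture (all names here are illustrative):
    Code (CSharp):
    using UnityEngine;

    // Hypothetical helper: keep the UI camera disabled and render it manually
    // only when the UI content has actually changed.
    public class OnDemandUiRenderer : MonoBehaviour
    {
        public Camera uiCamera;          // renders only the UI layer
        public RenderTexture uiTexture;  // applied to the console screen material

        bool dirty = true;

        void Start()
        {
            uiCamera.targetTexture = uiTexture;
            uiCamera.enabled = false;    // stop automatic per-frame rendering
        }

        public void MarkDirty() => dirty = true;  // call whenever the UI changes

        void LateUpdate()
        {
            if (!dirty) return;
            uiCamera.Render();           // one-off render into the texture
            dirty = false;
        }
    }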
    So what approach do you take and why?
     
  2. Selaphiel

    Selaphiel

    Joined:
    Jan 31, 2014
    Posts:
    23
    I am working in VR, so I have to use a world-space canvas: there is interaction, and animation of that interaction, and the UI is curved and changes with camera angle. Doing all of this with a texture and updating it continually would be performance-heavy, I think. On the other hand, I am seeing a texture/text shimmering effect that would go away if I just used a render texture and updated it from time to time.
     
  3. SiliconDroid

    SiliconDroid

    Joined:
    Feb 20, 2017
    Posts:
    302
    I recently wrote my own render-to-texture GUI library using pooled texture brushes. The whole GUI (3 screens) lives in one 512^2 read/write-enabled, uncompressed RGBA texture with no mipmaps and bilinear filtering.
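    A minimal sketch of that kind of setup, assuming the brush atlas is a read/write-enabled Texture2D (class and member names are illustrative, not the actual library):
    Code (CSharp):
    using UnityEngine;

    // Sketch: one uncompressed RGBA GUI texture, no mipmaps, bilinear filtering,
    // updated by copying pre-drawn "brushes" out of a sprite atlas.
    public class BrushGui : MonoBehaviour
    {
        public Texture2D brushAtlas;   // read/write-enabled atlas of pre-drawn brushes
        Texture2D guiTexture;

        void Awake()
        {
            guiTexture = new Texture2D(512, 512, TextureFormat.RGBA32, false);
            guiTexture.filterMode = FilterMode.Bilinear;
            GetComponent<Renderer>().material.mainTexture = guiTexture;
        }

        // Copy one pre-drawn brush into the GUI texture when a control changes.
        public void BlitBrush(int srcX, int srcY, int dstX, int dstY, int w, int h)
        {
            Color[] pixels = brushAtlas.GetPixels(srcX, srcY, w, h);
            guiTexture.SetPixels(dstX, dstY, w, h, pixels);
            guiTexture.Apply(false);   // upload to GPU; false = no mipmap rebuild
        }
    }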


    Single material/texture being 1 draw call is the most performant, certainly on mobile where draw calls can seriously maim or even KILL!

    Of course, when the texture is static it flies as fast as any textured mesh would, but I can also refresh the whole texture every frame, even in mobile VR, if needed. In practice, you only need to update the parts of the texture that change when GUI state changes. I think that, under the hood, Unity is optimizing the GPU blit very well.

    Obviously, don't go calling SetPixel willy-nilly to build controls during game flow; have everything pre-drawn into brushes during init. I actually use just one 256^2 sprite atlas for the entire GUI, split into 256 brushes; it's the old IBM code page 437 extended-ASCII set with a few tweaks for rounded button corners.

    Pointer interaction is done by transforming the collider hit point into texture UV space using the handy Unity API RaycastHit.textureCoord; this property made the job 'easy' even when the GUI is mapped over a non-planar mesh.
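    A minimal sketch of that picking approach (names are illustrative; note that RaycastHit.textureCoord only returns valid UVs when hitting a MeshCollider):
    Code (CSharp):
    using UnityEngine;

    // Sketch: convert a pointer ray hit on the GUI surface into texel coordinates.
    public class GuiPointer : MonoBehaviour
    {
        public Camera viewCamera;

        void Update()
        {
            Ray ray = viewCamera.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                Vector2 uv = hit.textureCoord;   // 0..1 across the mapped texture
                int px = (int)(uv.x * 512);      // texel coordinates in the GUI texture
                int py = (int)(uv.y * 512);
                // ...hit-test (px, py) against control rectangles here...
            }
        }
    }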

    But of course you're going to have to write yourself a texture-based GUI library. C#'s reflection capabilities help in making a general solution. For example, I create controls like below, passing in a C# object reference and the string name of the handler method I want the control to call back; it allows for a nicely encapsulated GUI lib that can be used simply and cleanly from game code. An example of creating a GUI screen with my lib:
    Code (CSharp):
    b.cGuiRoot.o.oControls.CreateStart(lib_gui_root.SCREEN.MIDDLE);

    b.cGuiRoot.o.oControls.CreateTitle("GRAPHICS SETTINGS");
    b.cGuiRoot.o.oControls.CreateBoolean("COCKPIT GLASS ", 1, 2, main.v.cMechPlayer.GetCockpitGlass(), this, "OnGui_CockpitGlass");
    b.cGuiRoot.o.oControls.CreateBoolean("ATMOSPHERE FOG", 1, 5, RenderSettings.fog, this, "OnGui_Fog");
    b.cGuiRoot.o.oControls.CreateButton("EXIT 2 MAIN MENU", 1, 8, this, "OnGui_ExitToMainMenu");
    b.cGuiRoot.o.oControls.CreateParagraph("GFX_INFO", "GRAPHICS INFO HERE...", 30, 1, 13);

    b.cGuiRoot.o.oControls.CreateEnd();
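    For reference, a minimal sketch of how such a reflection-based callback could be wired up (hypothetical names, not the poster's actual code; assumes a parameterless handler method):
    Code (CSharp):
    using System.Reflection;

    // Sketch: resolve a handler by name once, then invoke it when the control fires.
    public class GuiButton
    {
        readonly object target;        // e.g. the 'this' passed into CreateButton
        readonly MethodInfo handler;   // looked up from the handler-name string

        public GuiButton(object target, string handlerName)
        {
            this.target = target;
            handler = target.GetType().GetMethod(handlerName,
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        }

        // Called by the GUI lib when the control is activated.
        public void Fire() => handler?.Invoke(target, null);
    }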

    So yeah... I found it worthwhile to write a little GUI lib (~200 KB of C#) for the sake of optimum performance in mobile VR.
     
    Last edited: Apr 20, 2017