
Ideas for an improved design of the native Unity VR Integration.

Discussion in 'AR/VR (XR) Discussion' started by jashan, Jul 22, 2015.

  1. jashan

    jashan

    Joined:
    Mar 9, 2007
    Posts:
    3,307
    Unity Feedback Entry for this

    I'm posting this because I really love Unity, I really love UT, and I really love VR. And I totally appreciate that Unity embraces VR. Unfortunately, the way Unity currently embraces VR has real limitations and, in my opinion, breaks some of the "traditional ways" of doing things in Unity - and that concerns me a lot. These aren't only small issues; in some parts it's a completely wrong direction (from my point of view), while in other areas it's actually a totally awesome direction, of course.

    During the Unity 5.1 alpha, I posted similar things, but after working for a while with Unity native VR and SteamVR, things became much clearer to me ... and given that 5.2 should bring Sony Project Morpheus support (a target platform I'm very interested in), I feel it's kind of urgent to speak up again.

    As far as I can see, the problem is actually very simple: someone apparently said "make Unity VR something that's just a tick of a checkbox". From a marketing perspective this sure sounds quite awesome, but from a VR developer's point of view it's ... well ... let's say "weird" ;-)

    Developing for VR means finding solutions to hard problems that no one has properly solved yet. It means finding innovative approaches to input and to locomotion. It means you can't use stock audio ambiences because they'll break immersion (if you're using binaural audio - and if you're not using binaural audio, you're missing half the experience). It means a lot of other things - even stuff like finding new ways of creating gameplay videos. Getting the camera to render properly to the headset was actually solved years ago (admittedly with a few limitations, but that's not what this posting is about).

    Don't get me wrong: There's really important "under-the-hood" stuff that I don't want to have to mess with as a Unity developer and I totally appreciate Unity doing these things without anyone even seeing it in the editor: Highly optimised rendering magic, IPD-distance stuff, figuring out the correct FoV for a camera for a specific headset. I think you're doing an awesome job there and I appreciate that work a lot! That's low level stuff - if I was interested in that kind of stuff, I'd either develop a game engine or apply for a job at UT ;-)

    BUT: Assuming that a user wants to have all cameras rendered to the headset, and all cameras automatically motion tracked by the headset is something that the tech people at UT should have immediately stopped. Seriously.

    One of the most beautiful things about Unity is its component-based architecture. A scene is just a bunch of game objects. Add components to those game objects to give them specific functionalities. Like, some game objects could have a Camera component which will handle rendering something to the screen. Each component has properties that give you a way to customise those functionalities. Like, the field of view of the camera, or whether the camera renders to a render texture.

    Making a checkbox in the player settings change the behaviour of every camera may have been kind of appropriate for stereo rendering (but that's a feature I'm not interested in, so I just don't care). For VR, though, it's a really unfortunate idea with very unpleasant consequences (even if we can have RenderTexture cameras that are "magically not affected" ... but maybe I WANT the RenderTexture to use tracking and all that other stuff - do not take that decision away from me, okay!?)

    This is where in my opinion the current design of Unity VR is broken - and I really hope you'll fix it while this is still pretty new.

    A headset is an additional display, very much like AirPlay (just different). If I want the Camera to render not to the game window but to some other display, I either want an additional component that interacts with the Camera component, or - since we already have a "Render to Texture" property on the Camera component - an additional property, like "Render Target". This could be set to "Render to Texture" (and if that's what I select, I get the RenderTexture slot). The default would be "Game Window".

    For VR, it could be "HMD".

    With an approach like that, the question of "how should Unity know what to render to the headset?" would be solved in a nice, flexible way that's consistent with how things are done in Unity.
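    To make the proposal concrete, here's a minimal sketch of what such a "Render Target" property could look like. Everything here is hypothetical - none of these names exist in Unity's actual API; it's only meant to illustrate the suggested design:

    ```csharp
    // Hypothetical enum for the proposed "Render Target" camera property.
    public enum RenderTarget
    {
        GameWindow, // default: behave exactly like today's Camera
        Texture,    // selecting this exposes the existing RenderTexture slot
        HMD         // render this camera to the headset
    }

    // Imagined usage, e.g. from an editor script or at runtime:
    // myCamera.renderTarget = RenderTarget.HMD;
    ```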

    Next thing: tracking. Of course, a camera that renders to the headset really does need tracking. So if I select "Render Target = HMD", I do want Unity to tell me in big red letters: "If you render this camera to HMD, you must add an InputTracking component or you'll make your player really sick." If you want to be really awesome, add a button "Add InputTracking component" which does this for me.

    Now, in most VR hardware setups that deserve the name, the headset will not be the only thing that's tracked. There will be controllers; there might be full body tracking. So an "InputTracking" component must know what to track. If the InputTracking component is attached to a camera that has "Render Target" set to "HMD", the tracking source will usually be "HMD" as well, so if I set it to anything else, I'd once again appreciate a warning.
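    A minimal sketch of the proposed InputTracking component, under the same caveat that this is an imagined API, not Unity's real one:

    ```csharp
    using UnityEngine;

    // Hypothetical tracking sources; real setups might add body trackers etc.
    public enum TrackingSource { HMD, LeftController, RightController }

    // Hypothetical component: applies the pose of one tracked device
    // to the GameObject it is attached to.
    public class InputTracking : MonoBehaviour
    {
        public TrackingSource source = TrackingSource.HMD;

        void LateUpdate()
        {
            // Imagined low-level calls - this is where Unity's existing
            // "under-the-hood" tracking code would plug in:
            // transform.localPosition = VRDevice.GetLocalPosition(source);
            // transform.localRotation = VRDevice.GetLocalRotation(source);
        }
    }
    ```

    The editor warning described above would then be a simple check: a Camera with "Render Target = HMD" but no InputTracking component (or one whose source isn't "HMD") on the same GameObject.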

    But "a warning" is something significantly different from taking power and flexibility away from users for no good reason. That's what concerns me. There are also professionals using Unity - if I wanted total safety and no flexibility, I guess I'd try Game Maker instead.

    There are, of course, performance concerns one might have. But that's a temporary issue - I'm almost certain that by 2017, framerate panic for VR will be history, and the design of the VR integration shouldn't have to change then. And if I have a machine with two high-end graphics cards and want to do a fancy VR installation that renders a lot of awesome stuff to a huge screen that has nothing to do with what the VR player is seeing in the HMD (because it'll actually make people sick to watch that on a big screen when they're not making those head movements themselves and instead have them forced upon them via a big screen), I don't want Unity to get in my way.

    It's pretty awesome that Unity VR has a way to mirror what is rendered to the HMD into the game window without a noticeable performance impact. One possible way to provide a "Unity style" interface to this feature would be another property, "Render Source". The default would be "World", of course - but it could also be any other "Render Target". So for this specific case, I'd select "HMD" as Render Source (which would mean "the stuff that gets displayed inside the headset", maybe with a few options like "distorted two-lens view", "distorted left-eye view" and "undistorted view", each with different potential performance implications).

    I might also select "Texture", which would give me an easy way to pipe RenderTextures into other cameras without having to create a plane to put the texture on. It could also be "Screen2" if Unity properly supported rendering to multiple screens (which I believe it currently doesn't ... or if it does, it's hidden so well that Unity devs don't find it - kind of like AirPlay support, I guess).
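    Continuing the sketch, the "Render Source" property might look like this - again, purely hypothetical names that only illustrate the options described above:

    ```csharp
    // Hypothetical enum for the proposed "Render Source" camera property.
    public enum RenderSource
    {
        World,          // default: render the scene as usual
        HMDComposited,  // "distorted two-lens view" as shown in the headset
        HMDLeftEye,     // "distorted left-eye view"
        HMDUndistorted, // "undistorted view"
        Texture,        // pipe a RenderTexture straight into this camera
        Screen2         // a second physical display, if/when supported
    }
    ```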

    You should talk to the Valve people, because their SteamVR plugin gets almost all of this right. It's fun to use, and it lets me work the way I'm used to working in Unity. It's very close to what Unity native VR hopefully will be in Unity 5.3. It does have a few things I'm not happy about - but that's really minor stuff, most of which I could even fix myself by simply changing their code (I can do that because it's not hidden somewhere in the depths of the Unity source code).

    Here's a little video I captured from the game window (and you'll notice that I'm usually only showing what the player sees on the headset in a small area of the screen because I actually find it disorientating, sometimes nauseating to see what's shown in the headset on a monitor - it's just not meant for that target device):


    There's also some pretty interesting combined 2D/VR GUI stuff that I'll start working on soon, which can easily be achieved with SteamVR but wasn't even considered in the native Unity VR integration. More on that when I have stuff I can show to illustrate it.
     
    ZenUnity, Omnifinity, DrBlort and 2 others like this.
  2. Thomas-Pasieka

    Thomas-Pasieka

    Joined:
    Sep 19, 2005
    Posts:
    2,174
    Jashan raises some interesting points, and I would love to hear what some of the Unity devs are thinking on that matter.
     
    DrBlort and jashan like this.
  3. Pawige

    Pawige

    Joined:
    Nov 26, 2012
    Posts:
    40
    Amen! I find the current implementation of VR far too restrictive. I'm messing with a blend of optical and gyro tracking, and having no access to how the tracking is applied to the transform is really killing me. It was very easy in the old Oculus plugin to mess around in the UpdateAnchors method.
     
    jashan and Thomas-Pasieka like this.
  4. Hannibalov

    Hannibalov

    Joined:
    Aug 8, 2012
    Posts:
    6
    Agreed, the current design must change. In particular, I totally agree with @Pawige: this magic around tracking doesn't make things easy. If anything, it makes it harder to explore new interactions. I hope they change their mind.
     
    jashan likes this.