Vive VR SDK cameras don't render to RenderTexture correctly

Discussion in '5.4 Beta' started by SamBaker, Jul 21, 2016.

  1. SamBaker

    Joined: Jun 13, 2013
    Posts: 4
    The native Vive/Oculus support looks great in Unity 5.4 - unfortunately there's a bug in the Vive support. I just submitted Issue #816464 regarding this (details below).

    Since I need to render different content (and camera overlays) to each eye, a fix for this is critical for determining whether we can use the Vive. Doing this with the SteamVR plugin and its custom camera rendering isn't straightforward. If other devs have found a workaround with current versions of Unity 5.4, please post here. My own workaround gets part of the way there but isn't good enough; I imagine a better one could be applied via script.

    1) Scene description

    My scene has:

    CameraLeft and CameraRight render the main content to the device using culling masks (native VR support).

    OverlayCameraLeft and OverlayCameraRight are disabled cameras configured to render other content in a different layer. A script attaches them to the InputTracking transforms for VRNode.LeftEye and VRNode.RightEye, since rendering to a render texture (or rendering via script) stops the VRNode transforms from being applied automatically at render time.

    The left and right eye main cameras have a script that (rough sketch below):
    - creates a temporary render texture
    - renders the appropriate overlay camera to it using Camera.Render()
    - combines it with the camera's output in the OverlayShader (a simple blend)
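
    Roughly, the script does something like this (the class name, field names, and the _OverlayTex shader property here are simplified placeholders, not my exact code):

    using UnityEngine;
    using UnityEngine.VR;

    [RequireComponent(typeof(Camera))]
    public class EyeOverlay : MonoBehaviour
    {
        public Camera overlayCamera;   // disabled camera rendering the overlay layer
        public VRNode eyeNode;         // VRNode.LeftEye or VRNode.RightEye
        public Material overlayBlend;  // material using the blend shader

        void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            // Keep the overlay camera at the tracked eye pose; cameras that
            // render to a RenderTexture don't get the VRNode transform applied.
            overlayCamera.transform.localPosition = InputTracking.GetLocalPosition(eyeNode);
            overlayCamera.transform.localRotation = InputTracking.GetLocalRotation(eyeNode);

            // Render the overlay layer into a temporary texture.
            RenderTexture overlay = RenderTexture.GetTemporary(src.width, src.height, 24);
            overlayCamera.targetTexture = overlay;
            overlayCamera.Render();
            overlayCamera.targetTexture = null;

            // Blend the overlay over the main camera's output.
            // "_OverlayTex" is a hypothetical property name in the blend shader.
            overlayBlend.SetTexture("_OverlayTex", overlay);
            Graphics.Blit(src, dest, overlayBlend);
            RenderTexture.ReleaseTemporary(overlay);
        }
    }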

    2) Expected behavior

    It should appear more or less as if there were only one pair of cameras, with the geometry from both layers in the correct positions relative to each other. The world in the game view should look as it does in the scene view.

    3) Actual behavior

    Using Oculus SDK or OpenVR SDK with an Oculus headset works fine, but...

    Using a Vive with the OpenVR SDK introduces some kind of extra camera transform on the overlay cameras rendered to a render texture, which results in the overlay shapes being offset incorrectly.

    I found a partial fix that applies an additional y rotation to the eye transforms of the overlay cameras. Debug lines in my build show the two overlay cameras now diverging significantly, yet this almost fixes the problem. That additional transform should not be necessary.
     
  2. SamBaker

    Joined: Jun 13, 2013
    Posts: 4
    [UPDATED] I have more info on this. Previously I thought that camera.Render() was not applying lens distortion, but it is. The eye separation may be applied differently, though.

    The display output is generated in OnRenderImage for my main cameras. The problem is that when using camera.Render() to render the overlay, the camera position is different from the src input to OnRenderImage. On Oculus, the lens distortion appears to be applied after OnRenderImage and everything works fine.

    See the attached images: the first is the src parameter to OnRenderImage, the second is the output from camera.Render(), and the third is the result of combining them with Graphics.Blit(). You can see that the content is rendered differently for the overlay and the main camera. Using Oculus this problem does not occur; the three images all line up correctly.

    I've tried rendering my left eye overlay camera at the position/orientation of VRNode.LeftEye and at the position/orientation of VRNode.CenterEye but neither works.

    Any help would be appreciated - I'm hoping there's some compensation that can be applied via script as a workaround.
     

    Attached Files:

    Last edited: Jul 22, 2016
  3. SamBaker

    Joined: Jun 13, 2013
    Posts: 4
    Last comment for now - I've tried everything with the overlay camera: copying the transform from the main camera immediately before rendering, and copying the stereoTargetEye parameter from the main camera before rendering. Nothing works. I've also tried rendering the overlay with camera.Render() in the script's Update method instead of OnRenderImage; the circular mask clipping disappears then, but the geometry is rendered in exactly the same location.
     
  4. thep3000

    Unity Technologies

    Joined: Aug 9, 2013
    Posts: 400
    I replied to the issue you submitted, but I'll paste here in case others run into this:

    I think the issue is that you're setting

    overlayCamera.fieldOfView = vrCamera.fieldOfView;

    on the camera that is rendering to texture. The problem here is that OpenVR's native implementation actually renders using an asymmetric projection matrix - this means it can't be described simply by a field of view. (Note that when we render in Oculus mode we pad the projection matrix to be symmetric, at Oculus' request, for backwards compatibility. This trades some performance for extra pixels that we need to throw out, but Unity's shaders are generally more compatible that way, and so are cases like this.)

    The only workaround I can think of right now is to obtain the stereo projection matrices from the SteamVR SDK directly. SteamVR will give you the projection as left/right/top/bottom extents for both eyes, and you can calculate the projection matrix with the math here: http://www.songho.ca/opengl/gl_projectionmatrix.html

    If you then set the projection matrix individually for each eye instead of the field of view, you should be able to get it to line up just like Oculus does.
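
    Something like this untested sketch shows the idea (EyeFrustum is just a placeholder name, and it assumes the SteamVR plugin's Valve.VR bindings; verify OpenVR's top/bottom sign convention against your own output):

    using UnityEngine;
    using Valve.VR;

    public static class EyeFrustum
    {
        public static Matrix4x4 Build(EVREye eye, float near, float far)
        {
            // Raw values are the frustum edge coordinates at unit distance (tangents).
            float tanL = 0f, tanR = 0f, tanT = 0f, tanB = 0f;
            OpenVR.System.GetProjectionRaw(eye, ref tanL, ref tanR, ref tanT, ref tanB);

            // Scale to the near plane to get glFrustum-style l, r, b, t,
            // then build the standard asymmetric projection from the linked page.
            float l = tanL * near, r = tanR * near;
            float b = tanB * near, t = tanT * near;

            Matrix4x4 m = Matrix4x4.zero;
            m.m00 = 2f * near / (r - l);  m.m02 = (r + l) / (r - l);
            m.m11 = 2f * near / (t - b);  m.m12 = (t + b) / (t - b);
            m.m22 = -(far + near) / (far - near);
            m.m23 = -2f * far * near / (far - near);
            m.m32 = -1f;
            return m;
        }
    }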

    Note that even in the Oculus case you are a frame latent; perhaps that's not important for your use case, but you may want to try setting the transforms just before you render instead of in Update.
     
  5. SamBaker

    SamBaker

    Joined:
    Jun 13, 2013
    Posts:
    4
    I looked into this some more and thep3000 is right - it was just a projection matrix problem. The solution is simple: when you render to a render texture, set your camera's projection matrix to the Vive's projection matrix. Query the device, and if a Vive is detected, assign the projection matrix as shown in this thread:

    https://steamcommunity.com/app/358720/discussions/0/405694031550581171/#c405694031552884526

    The only thing I did differently was using the Valve.VR namespace and then calling:

    OpenVR.System.GetProjectionMatrix(vrEye, mainCamera.nearClipPlane, mainCamera.farClipPlane, EGraphicsAPIConvention.API_DirectX)
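
    Roughly, converting the returned HmdMatrix44_t to a Unity Matrix4x4 looks like this (the helper name is just a placeholder; the m0..m15 field names are from the plugin's openvr_api.cs bindings):

    using UnityEngine;
    using Valve.VR;

    public static class ViveProjection
    {
        // Queries OpenVR's per-eye projection and converts it to a Unity matrix.
        public static Matrix4x4 GetEyeProjection(EVREye eye, Camera cam)
        {
            HmdMatrix44_t p = OpenVR.System.GetProjectionMatrix(
                eye, cam.nearClipPlane, cam.farClipPlane,
                EGraphicsAPIConvention.API_DirectX);

            // HmdMatrix44_t is a row-major 4x4 flattened to fields m0..m15.
            Matrix4x4 m = new Matrix4x4();
            m.m00 = p.m0;  m.m01 = p.m1;  m.m02 = p.m2;  m.m03 = p.m3;
            m.m10 = p.m4;  m.m11 = p.m5;  m.m12 = p.m6;  m.m13 = p.m7;
            m.m20 = p.m8;  m.m21 = p.m9;  m.m22 = p.m10; m.m23 = p.m11;
            m.m30 = p.m12; m.m31 = p.m13; m.m32 = p.m14; m.m33 = p.m15;
            return m;
        }
    }

    // Usage, just before rendering the left-eye overlay:
    // overlayCamera.projectionMatrix = ViveProjection.GetEyeProjection(EVREye.Eye_Left, mainCamera);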
     
    thep3000 likes this.