
Apply camera as texture to spatial map

Discussion in 'VR' started by Gentatsu, Nov 2, 2016.

  1. Gentatsu

    Gentatsu

    Joined:
    Oct 21, 2016
    Posts:
    6
I'd like to apply the camera's capture results to the spatial map captured by the SpatialMapping manager. Basically, this:


    I'd like to do that in real-time. I send my mesh data to a server, where I then visualise it. I want to be able to just do that with textures. I saw the photo blending thing, and that seemed like it would be quite similar, but I'm not really sure how I'd get from that to this. Any ideas?

    Thanks!
     
    KarlosZafra likes this.
  2. tellutwurp

    tellutwurp

    Joined:
    Nov 4, 2016
    Posts:
    1
    Let me know when you figure it out.
     
  3. botgreet

    botgreet

    Joined:
    Dec 26, 2015
    Posts:
    6
I've been experimenting a little with this, but not yet fully successfully: freezing updates and shrinking the mesh, then using a Projector from Standard Assets > Effects to try to display camera images. I used the mobile diffuse shader on the mapping mesh, which does pick up the projector colors fine, but I haven't figured out the math to reproject from the spot where the photo was taken: what parameters I need to change, or whether I'm mistaken that it can be made semi-realistic (I'd expect distortion, just not sure how major). I've been trying to do it statically for one view/wall, but I'm not sure yet whether this can work or whether I need to find another way to combine the mesh and photos from the HoloLens. So, similar boat. I was trying to work it out from this example, which projects a movie texture onto a mesh instead of stills: http://answers.unity3d.com/questions/8997/projecting-a-movie-texture.html
     
  4. botgreet

    botgreet

    Joined:
    Dec 26, 2015
    Posts:
    6
I don't think he's using a projector in the video; it looks like quads with images placed at the camera angles and then fused or overlaid. So: use the camera API to get the original image, then resize the captured image on a quad to match the area that was captured in the photo, and align it within the room view.
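
    A rough sketch of that quad approach (hypothetical names, not taken from the video): after a photo is captured, spawn a quad at the camera's pose and size it to the camera frustum at a chosen distance, so the image roughly covers the area it captured.

    Code (CSharp):
    using UnityEngine;

    // Hypothetical helper illustrating the quad-placement idea described above.
    public class PhotoQuadPlacer : MonoBehaviour {
      public GameObject quadPrefab; // a quad with an unlit textured material

      public void PlacePhoto(Texture2D photo, Camera photoCam, float distance) {
        // Place the quad in front of the camera pose at capture time.
        GameObject quad = Instantiate(quadPrefab,
            photoCam.transform.position + photoCam.transform.forward * distance,
            photoCam.transform.rotation);
        // Height of the view frustum at 'distance', from the vertical field of view.
        float h = 2f * distance * Mathf.Tan(photoCam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        quad.transform.localScale = new Vector3(h * photoCam.aspect, h, 1f);
        quad.GetComponent<Renderer>().material.mainTexture = photo;
      }
    }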
     
  5. Unity_Wesley

    Unity_Wesley

    Unity Technologies

    Joined:
    Sep 17, 2015
    Posts:
    558
    With the spatial mesh there are no UVs, so applying a texture or picture directly is not possible. A way around this is to use a shader that calculates everything needed based on the spatial mesh, then applies a texture through that shader's calculations.

    The technique is called tri-planar mapping.

    Video Link


    @BrandonFogerty can provide more information if needed.
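
    For reference, a minimal tri-planar sampling function might look something like this (a sketch with made-up names, not Unity's exact implementation): the texture is projected along each world axis and the three samples are blended by the world normal, so the mesh needs no UVs at all.

    Code (CSharp):
    // Tri-planar sampling sketch (HLSL fragment helper, hypothetical names).
    fixed4 TriplanarSample(sampler2D tex, float3 worldPos, float3 worldNormal, float tiling)
    {
        float3 blend = abs(worldNormal);
        blend /= (blend.x + blend.y + blend.z);         // weights sum to 1
        fixed4 xTex = tex2D(tex, worldPos.yz * tiling); // projected along X
        fixed4 yTex = tex2D(tex, worldPos.xz * tiling); // projected along Y
        fixed4 zTex = tex2D(tex, worldPos.xy * tiling); // projected along Z
        return xTex * blend.x + yTex * blend.y + zTex * blend.z;
    }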
     
  6. zalo10

    zalo10

    Joined:
    Dec 5, 2012
    Posts:
    21
    You can reproject your camera image onto a mesh either by setting the mesh's texture to the camera feed and setting the vertex UVs using a script or by reprojecting inside of the pixel shader.

    The former technique can be extended towards the construction of a persistent texture atlas.
    The latter technique has pixel-perfect accuracy, but can only ever work for what is currently visible to the user's webcam.

    Attach this to the mesh you'd like to have reprojected UVs:
    Code (CSharp):
    using UnityEngine;
    public class MeshReproject : MonoBehaviour {
      public Camera ProjectionPerspective;

      // Half-extents of the projection window at the near plane.
      public float width = 0.2f;
      public float height = 0.2f;

      Mesh mesh;
      Vector3[] vertices;
      Vector2[] uvs;

      void Start() {
        ProjectionPerspective.enabled = false; // used only for its math, never rendered
        mesh = GetComponent<MeshFilter>().mesh;
        mesh.MarkDynamic();
      }

      void Update() {
        ProjectionPerspective.projectionMatrix = ManualProjectionMatrix(-width, width, -height, height, ProjectionPerspective.nearClipPlane, ProjectionPerspective.farClipPlane);

        // Project each world-space vertex into the camera's viewport and use that as its UV.
        vertices = mesh.vertices;
        uvs = new Vector2[vertices.Length];
        Vector2 ScreenPosition;
        for (int i = 0; i < uvs.Length; i++) {
          ScreenPosition = ProjectionPerspective.WorldToViewportPoint(transform.TransformPoint(vertices[i]));
          uvs[i].Set(ScreenPosition.x, ScreenPosition.y);
        }

        mesh.uv = uvs;
      }

      // Standard off-axis perspective projection matrix.
      static Matrix4x4 ManualProjectionMatrix(float left, float right, float bottom, float top, float near, float far) {
        float x = 2.0F * near / (right - left);
        float y = 2.0F * near / (top - bottom);
        float a = (right + left) / (right - left);
        float b = (top + bottom) / (top - bottom);
        float c = -(far + near) / (far - near);
        float d = -(2.0F * far * near) / (far - near);
        float e = -1.0F;
        Matrix4x4 m = new Matrix4x4();
        m[0, 0] = x; m[0, 1] = 0; m[0, 2] = a; m[0, 3] = 0;
        m[1, 0] = 0; m[1, 1] = y; m[1, 2] = b; m[1, 3] = 0;
        m[2, 0] = 0; m[2, 1] = 0; m[2, 2] = c; m[2, 3] = d;
        m[3, 0] = 0; m[3, 1] = 0; m[3, 2] = e; m[3, 3] = 0;
        return m;
      }
    }
    First set the Camera Projection Matrix variable in C#:
    Code (CSharp):
    void LateUpdate() {
      Shader.SetGlobalMatrix("_camProjection", FrontCam.projectionMatrix * FrontCam.worldToCameraMatrix);
    }
    Then calculate the UVs inside of the mesh's material's shader:
    Code (CSharp):
    uniform float4x4 _camProjection;

    float2 WorldToViewport(float4x4 camVP, float3 worldPoint)
    {
        float2 result;
        result.x = camVP[0][0] * worldPoint.x + camVP[0][1] * worldPoint.y + camVP[0][2] * worldPoint.z + camVP[0][3];
        result.y = camVP[1][0] * worldPoint.x + camVP[1][1] * worldPoint.y + camVP[1][2] * worldPoint.z + camVP[1][3];
        float num = camVP[3][0] * worldPoint.x + camVP[3][1] * worldPoint.y + camVP[3][2] * worldPoint.z + camVP[3][3];
        num = 1.0 / num; // perspective divide
        result.x *= num;
        result.y *= num;

        // Remap from clip space [-1, 1] to viewport/UV space [0, 1].
        result.x = (result.x * 0.5 + 0.5);
        result.y = (result.y * 0.5 + 0.5);

        return result;
    }
    Once you have this, you can sample your texture using:
    Code (CSharp):
    uv = WorldToViewport(_camProjection, i.worldPos);
    (Make sure worldPos is being passed in through i)
     
  7. Gentatsu

    Gentatsu

    Joined:
    Oct 21, 2016
    Posts:
    6
    I'm not sure how I'd map individual images to each mesh. I tried doing it when a surface is received, grabbing the current camera image/texture and applying it as above, but it just gives me a pseudo-skybox view with some mildly warped meshes following the camera texture.
     
  8. Gentatsu

    Gentatsu

    Joined:
    Oct 21, 2016
    Posts:
    6
    I managed to get the GPU reprojection shader working! Thank you! But yes, as you said, it only works for what the camera currently sees: everything else repeats/clamps until you move to it, and previously textured areas lose their texture. I tried snapshotting it and setting it that way, but that doesn't really work either.
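
    One thing that might help with the snapshot idea (just a sketch, with assumed names like FrontCam and webcamTexture): instead of one global _camProjection, give each surface its own material, and at snapshot time freeze both the current camera frame and the matrix into that material, so each surface keeps projecting from where its photo was taken.

    Code (CSharp):
    using UnityEngine;

    // Sketch: per-surface snapshot of the camera frame and projection matrix.
    public class SurfaceSnapshot : MonoBehaviour {
      public Camera FrontCam;       // proxy camera matching the webcam's pose/intrinsics
      public Texture webcamTexture; // live camera feed

      public void Snapshot(Renderer surface) {
        // Copy the live frame so later frames don't overwrite it.
        RenderTexture rt = RenderTexture.GetTemporary(1280, 720);
        Graphics.Blit(webcamTexture, rt);
        Texture2D frozen = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
        RenderTexture.active = rt;
        frozen.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        frozen.Apply();
        RenderTexture.active = null;
        RenderTexture.ReleaseTemporary(rt);

        // Per-material override: this surface keeps the matrix from capture time.
        Material m = surface.material; // instantiates a per-renderer material copy
        m.mainTexture = frozen;
        m.SetMatrix("_camProjection", FrontCam.projectionMatrix * FrontCam.worldToCameraMatrix);
      }
    }

    The shader would then read _camProjection per material rather than from the single global set in LateUpdate, since per-material values take precedence over globals.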