
Why are we not seeing more AR apps made in Unity?

Discussion in 'AR/VR (XR) Discussion' started by wingrider888, Sep 22, 2016.

  1. wingrider888

    wingrider888

    Joined:
    Oct 27, 2010
    Posts:
    38
    Hey Guys,

    Wanted to hear from all the developers here: why are we not seeing great markerless AR apps and games made with Unity? Is the lack of a really powerful SDK the main reason, or are there other reasons?

    I wanted to get a sense of what the community felt.


    PS - Sorry for posting a question on AR in the VR part of the forums. But there wasn't a separate AR section.

    Cheers!
     
  2. hassank

    hassank

    Joined:
    Nov 18, 2015
    Posts:
    48
    There have been lots of AR apps with Unity even before Pokemon Go, but none were nearly as big of a hit. There are a few answers to your questions, but the top factors in my mind are as follows:
    • Design Questions: it's hard to design AR experiences. Interaction techniques and mechanics are still being explored. It's a great pioneering opportunity, but many experiences end up feeling gimmicky. We can save ourselves time by reading up on the vast amount of research that has been done in the space.
    • Tech Limitations: will the app run on phones or an HMD? Both have tech limitations - for example, it's difficult to interpret the real world with cameras, but the tech is rapidly improving! Check out the new phones coming out, or RealSense from Intel.
    • Market: Before, the above factors kept the market slim. That's rapidly changing now.
    I'm speaking from experience of building a few Hololens prototypes. The toolset is surprisingly great (thanks Unity!) and Microsoft has a helpful set of tutorials: https://developer.microsoft.com/en-us/windows/holographic/academy.
     
  3. WendelinReich

    WendelinReich

    Joined:
    Dec 22, 2011
    Posts:
    228
    Great answer @hassank. I'll add a few points.

    You asked specifically about markerless AR @wingrider888. That's virtually impossible to do on devices with a single rear-facing RGB camera. Markerless AR simply needs dedicated hardware. Right now you have a few overpriced HMDs (HoloLens etc) and then there's Google Tango, where the first consumer-friendly phone (Lenovo Phab 2 Pro) is supposed to appear before X-mas.

    I've been working with Tango for over a year now, and IMO it's the way to go. The SDK is excellent and not hard to get into - in the future, I'd expect this to be as easy as the one-button VR support Unity now has.

    But Unity really isn't the limiting factor here. It's first and foremost a hardware/device issue, and second a huge design problem (no one really knows how people want to play games in AR).

    Cheers /Wendi
     
    hassank likes this.
  4. WendelinReich

    WendelinReich

    Joined:
    Dec 22, 2011
    Posts:
    228
    PS Unity moderators, I think it's time you set up a dedicated AR section! ;-)
     
  5. hassank

    hassank

    Joined:
    Nov 18, 2015
    Posts:
    48
  6. wingrider888

    wingrider888

    Joined:
    Oct 27, 2010
    Posts:
    38
    @hassank @WendelinReich Thanks for the elaborate answers!

    @WendelinReich I also really believe that Google Tango-type devices, which have depth or even stereo sensors, are the most accurate way to estimate depth and enable truly accurate markerless augmented reality.

    But that being said, we have internally had a fair bit of success in using some heavily modified SLAM algos to make markerless work across iOS and even Android devices. We have also got some ways of approximating scale, doing collision, occlusion etc.

    Of course we never claim it to be as accurate as depth sensors (mathematically we can't :) ), but I think it gives us good-enough results for multiple use cases. This could be useful considering affordable, mass-produced mobile depth sensors / AR devices are still a bit away in the future :)
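
    For anyone curious what this looks like on the Unity side, here is a minimal, purely illustrative sketch (not the actual pipeline described above). It assumes a hypothetical ISlamTracker component exposing a camera pose and a ground-plane estimate, the kind of output a monocular SLAM system can provide, and uses it to drive the AR camera and keep content stuck to the floor:

    ```csharp
    // Purely illustrative sketch, not the poster's actual pipeline.
    // ISlamTracker is a hypothetical interface standing in for whatever
    // modified SLAM library provides the camera pose and a ground-plane estimate.
    using UnityEngine;

    public interface ISlamTracker
    {
        bool TryGetCameraPose(out Vector3 position, out Quaternion rotation);
        bool TryGetGroundPlane(out Plane groundPlane); // world-space plane, approximate scale
    }

    public class SlamAnchor : MonoBehaviour
    {
        public MonoBehaviour trackerBehaviour; // any component implementing ISlamTracker
        public Transform arCamera;             // Unity camera rendered over the video feed
        public Transform content;              // virtual object to keep on the floor

        ISlamTracker tracker;

        void Start()
        {
            tracker = trackerBehaviour as ISlamTracker;
        }

        void LateUpdate()
        {
            if (tracker == null) return;

            // Drive the Unity camera from the SLAM pose so the rendered content
            // stays registered with the live camera image.
            if (tracker.TryGetCameraPose(out Vector3 pos, out Quaternion rot))
                arCamera.SetPositionAndRotation(pos, rot);

            // Project the content onto the estimated ground plane. With a single
            // RGB camera the plane's metric scale is only approximate, which is
            // why depth sensors still win on accuracy.
            if (tracker.TryGetGroundPlane(out Plane ground))
            {
                Vector3 p = content.position;
                content.position = p - ground.GetDistanceToPoint(p) * ground.normal;
            }
        }
    }
    ```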

    Let me know your thoughts, I will share some demos in the coming days as well :)
     
  7. hassank

    hassank

    Joined:
    Nov 18, 2015
    Posts:
    48
    @wingrider888 I'd love to see demos and it's great to hear you've had some success with SLAM algs. That's more of a testament to the great work of you and your team than a trend we can expect from every team though! As you mentioned, it takes modifications and expertise so we'll see more production when there are more accessible and affordable out-of-the-box solutions.

    It'd be great if someone could chime in on the Meta dev experience (http://buy.metavision.com/products/meta2) since the HMD is 3x cheaper than a Hololens. I'm a huge Hololens fan but would like to know more about what's available with a lower budget.
     
    JoeStrout likes this.
  8. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Dragging up this old thread because I, too, was wondering about Metavision's product. I was surprised that this post was the only mention of the company or the Meta 2 in the forum.

    From what I've been able to find, the HoloLens is definitely the more advanced product. Its environment mapping and its accuracy in "locking" objects to the physical environment are rock solid, whereas reviewers have stated the Meta 2 can't keep up with fast head movement. Things sort of drift, then snap back into place when you stop moving. And of course, there is the big one -- the Meta 2 is tethered to a desktop computer (I gather Metavision does hope to wind up with a stand-alone product... some day).

    One interesting variation in the products is that HoloLens is designed to support interactions at a distance (MS recommends 2-5m) whereas Meta is solely interested in arm's-length (up to 0.5m) interaction. I'm not sold on the short range design though -- it's limiting and I suspect eye strain will be a problem. Avoiding eye strain is why MS has a minimum recommended distance -- at 2-5m your eyes are roughly parallel and relatively relaxed for correct focus.
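
    Rough numbers to back that up (assuming a typical interpupillary distance of about 63 mm): the vergence angle when fixating at distance L is roughly 2·atan(31.5 mm / L), which comes out to about 1.8° at 2 m but around 7.2° at 0.5 m. So the eyes really are close to parallel at the HoloLens's recommended range, while arm's-length work keeps them noticeably converged.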

    $3K for a HoloLens dev kit, a product that is not really shipping and is already effectively obsolete in terms of future revisions, is tough to swallow. Despite the limitations, $1K for Meta 2 is in the "maybe" category for us.
     
    JoeStrout likes this.
  9. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,670
    For better or worse, it seems Meta has focused on apps in which you reach out, grab something, and manipulate it with your hands. So arm's-length seems appropriate for this use case, and a little environment mapping lag in exchange for a lower cost also seems reasonable given that you'll be looking at your hands, not spinning around a room.
     
  10. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Consider, for example, several people collaborating on something "on" the table in front of them. Every time you walk around that table, the object is going to be bouncing around until you stop moving. From the reviews I read, it sounded like the problem was pretty serious -- not "spinning around the room," but simply looking to the side too quickly. In any case, Meta apparently acknowledges it's a problem they're working on.
     
  11. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,670
    I'm glad they're working on it. I noticed the bounce when playing around with a unit. Once I stopped moving and was focused on manipulating the AR object in front of me, it really wasn't an issue. But you're right that, in its current incarnation, it's not ideal for room scale AR.
     
    JoeStrout likes this.
  12. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    I ran across this article about the Meta 2: What the Meta 2 Means for AR Developers.

    I'm pretty intrigued by the hardware. The tether is an obvious limitation, of course. And if the images aren't rock-solid, that's definitely a concern. But the 90° FOV is awesome.

    I find it really exciting to see the different half-baked AR technologies coming out right now:
    • Hololens with its stable and persistent world mapping (objects stay where you put them, all over the house), but too expensive and with a tiny FOV
    • Meta 2 with its giant FOV, but tethered, maybe not all that stable, and maybe not persistent (not sure about that one)
    • iOS ARKit with its great stability using visual perception, but nonpersistent, and of course only viewable through the tiny tiny window of a handheld phone
    It's not at all hard to see where this is going — in a few years we'll have devices that put it all together with a large FOV, persistent and stable full-world mapping, and rock-solid AR images you can see anywhere (all with a lightweight headset that hopefully doesn't make you look like a total dork). Oh, and it'll cost under $500. Unless Apple makes it — which seems likely — in which case it'll cost $750, but it'll be awesome.

    Oh yeah, and whatever it is, you'll be able to develop for it in Unity. That much has become pretty much a constant of all the contenders. Hooray! :)
     
    WendelinReich likes this.
  13. WendelinReich

    WendelinReich

    Joined:
    Dec 22, 2011
    Posts:
    228
    Just stumbled over this post of mine from last September:

    Boy have I been proven wrong by both Facebook's and Apple's recent tech! :p

    But then again, so has Google. I just hope their Tango software stack is powerful enough to be usable on phones with normal RGB cams, because otherwise Tango is fighting an uphill battle...
     
    JoeStrout likes this.
  14. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Indeed. I was also looking at the Mira Prism today. It's a very clever low-cost solution — basically Cardboard but for AR. I can't imagine the head tracking is all that solid, though.

    I thought it might just be possible to use ARKit with it. But the rear-facing camera (which is the only one ARKit will currently use) would be looking mainly at your forehead/hair. Not much help there. It looks like they're relying mainly on markers, plus maybe some gyro-based head tracking when no markers are available (which is sure to float a bit).
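
    For what it's worth, a marker-plus-gyro fallback like that can be sketched in a few lines of Unity C#. This is just an assumed illustration (TryGetMarkerPose is a placeholder for whatever marker tracker is in use): while a marker is visible the camera takes the tracked pose, and once the marker is lost the camera keeps rotating by the change in Input.gyro.attitude since the last fix, which is exactly the part that tends to float.

    ```csharp
    // Hedged sketch of the "markers + gyro when no markers are visible" idea.
    // TryGetMarkerPose is a hypothetical placeholder for a real marker tracker.
    using UnityEngine;

    public class MarkerGyroTracking : MonoBehaviour
    {
        public Transform arCamera;

        Quaternion lastMarkerRotation = Quaternion.identity;
        Quaternion gyroAtLastMarker = Quaternion.identity;

        void Start()
        {
            Input.gyro.enabled = true; // gyro is off by default on mobile
        }

        // Convert the device gyro attitude into Unity's left-handed, y-up frame.
        static Quaternion GyroToUnity(Quaternion q)
        {
            return Quaternion.Euler(90f, 0f, 0f) * new Quaternion(q.x, q.y, -q.z, -q.w);
        }

        void Update()
        {
            Quaternion gyroNow = GyroToUnity(Input.gyro.attitude);

            if (TryGetMarkerPose(out Vector3 pos, out Quaternion rot))
            {
                // Marker visible: trust the marker pose and remember the gyro
                // reading so only the *change* in orientation is applied later.
                arCamera.SetPositionAndRotation(pos, rot);
                lastMarkerRotation = rot;
                gyroAtLastMarker = gyroNow;
            }
            else
            {
                // Marker lost: apply the relative gyro rotation since the last
                // marker fix. Orientation only, and it will drift ("float") a bit.
                arCamera.rotation = lastMarkerRotation *
                    (Quaternion.Inverse(gyroAtLastMarker) * gyroNow);
            }
        }

        // Placeholder: replace with the marker SDK actually in use (Vuforia, etc.).
        bool TryGetMarkerPose(out Vector3 position, out Quaternion rotation)
        {
            position = Vector3.zero;
            rotation = Quaternion.identity;
            return false;
        }
    }
    ```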

    So now it's mostly a matter of putting all the parts together... get camera(s) in the right place(s) to do markerless visual tracking and SLAM, plus screens that provide a wide FOV, on something that has the power of (or maybe actually is) a smartphone, and we'll really be in business!
     
    WendelinReich likes this.