Google’s standalone VR headset, announced yesterday at I/O, was both obvious and unexpected. People have been talking about a self-contained Google headset for over a year, and speculating (quite confidently) that the company would eventually use Tango augmented reality tracking in lieu of outside cameras. But Google head of VR Clay Bavor didn’t just announce the existence of new hardware — he promised a reference design and two commercial versions, made by HTC and Lenovo, by the end of the year.
That’s an aggressive timeline, suggesting that Google’s already got something in pretty good shape. Despite this, Bavor didn’t show much on stage besides a line drawing of the planned hardware. After the keynote, though, I was part of a small group that got to check out an early prototype — and while it’s too early to say whether Google can pull off a fully self-contained VR headset, the core technology really does work.
Before we started our demos, Bavor cautioned that we were trying a year-old prototype using a previous-generation chipset, with higher latency than we’d see on the upcoming reference design from Qualcomm or the final headsets from Lenovo and HTC. (The reference design will use Qualcomm’s new Snapdragon 835 chipset.)
We weren’t allowed to take photos or video, and the black plastic headset was bulky and bare-bones, although it fit a lot more comfortably than some other prototypes I’ve tried. The weight was better distributed than on Daydream or Gear VR headsets, where all the electronics are packed into a phone on the front. While the field of view and screen resolution seemed roughly on par with the HTC Vive and Oculus Rift, finished versions may well use different components, so it’s premature to judge it there.
Anyone hoping Google will follow Microsoft with inside-out tracked motion controllers will have to wait a while. The first generation of Google’s standalone headsets will use an ordinary Daydream controller, which offers only limited motion tracking. This could change in the future, but during our demo, we didn’t even get to use the remote — our experiences were restricted to walking and looking.
The key element we were there to try was the WorldSense tracking system, which is indeed based on Tango. Unlike Tango, though, the current iteration of WorldSense doesn’t include a depth-sensing camera. It relies primarily on front-facing cameras that detect edges in the environment and use them as reference points, so the headset can tell how far you’ve walked in real space and translate that into virtual motion.
In the first of two demos, wearers could walk around a virtual ocean floor, complete with wandering jellyfish and a circling sea turtle. The second put them in a large Imperial hangar from Rogue One, complete with a dour K-2SO. Google doesn’t give a hard limit for how much space one of its headsets can track, but I was stationed on a rug with a roughly one-meter radius for both; if I strayed outside it, the world was set to fade out.
Tracking is supposed to improve the more you use the headset in a specific place, as WorldSense builds a better spatial image of it. But Bavor promised that these prototypes had been set to start each demo fresh, as though they had never seen the room before. (As he joked at one point, Google is showing us its standalone headset in the worst shape we’ll ever see it.)
Inside the experience, motion didn’t feel quite as crisp as it would in an externally tracked Rift or Vive. My vision didn’t swim the way it sometimes does with inside-out headsets, but I felt as though the world was drifting with me, just a little bit, especially when I crouched to look at the ground. That said, it felt more natural than my time with Intel’s Project Alloy, Qualcomm’s own Snapdragon-based headset, or assorted other inside-out tracking systems. It’s the first inside-out experience I’ve had that rivaled Oculus’ Santa Cruz prototype. (I haven’t checked out Microsoft’s self-tracked VR headsets yet, unfortunately.) The world jumped out of place once, but only because I literally cupped my hand over the sensor strip.
I can’t say how well the prototype I saw works on any objective scale. I tried it in a room with good lighting and lots of edges and textures, including a cross-hatched pattern on the rug below me. In dimmer environments with less definition, it may not perform as well. On the other hand, some imperfections might be due to the older hardware and higher latency. Both demos had me moving around a limited space with minimal interactivity, not exploring a full-fledged virtual world. But the Rogue One setting was highly detailed, radically optimized (according to Google) through a rendering system named Seurat.
Regardless, it’s impressive to see inside-out tracking that doesn’t feel obviously compromised. In an industry where hardware can languish in the prototype stage for years, WorldSense could plausibly be good enough to put in the real commercial headsets Google has promised, and that could completely change the way people experience mobile VR. If Google sticks to its timeline, hopefully we’ll actually see them in a few months.