On the road for VR: Oculus Connect, Hollywood

After some initial uncertainty, and accidentally raising a stink on reddit, I did manage to attend Oculus Connect last weekend after all. I guess this is what a birthday bash looks like when the feted is backed by Facebook and gets to invite 1200 of his closest friends… and yours truly! It was nice to run into old acquaintances, meet new VR geeks, and it is still an extremely weird feeling to be approached by people who introduce themselves as “fans.” There were talks and panels, but I skipped most of those to take in demos and mingle instead; after all, I can watch a talk on YouTube from home just fine. Oh, and there was also new mobile VR hardware to check out, and a big surprise. Let’s talk VR hardware.

Leap Motion

I got to try a demo with a Leap Motion controller attached to a Rift DK2, and as much as I would have liked for this to work, it still didn’t. The demo was supposed to let me grab and move virtual furniture in a virtual house, which is a genuinely useful VR application, provided the user interface is precise and reliable. This one was neither, and I’m sure that was not the demo developer’s fault, but the device’s. The hand tracking was so bad, in fact, that the demo was completely unusable. Managing to grab an object with my hand was as predictable as winning the lottery, and once I had grabbed something, there was no correspondence between how I wanted to move the object and how the system interpreted my movement. The developer blamed the failures on the high amount of ambient IR radiation in the room (from all the Rift DK2s and their tracking LEDs), and my colleagues and I have been victims of similar interference ourselves (like having a Tesla coil demo several rooms down while trying to use a Razer Hydra at Maker Faire two years ago), but this was beyond the pale. Since the Leap Motion was facing a wall, and the IR LEDs of the attached DK2 and all the others were outside its field of view, I really don’t know what it was picking up. The developer claimed that the device works fine in the controlled environment of his lab, and I have to take his word for it. Maybe it’s not quite ready for prime time yet.

Tactical Haptics Reactive Grip

It’s always nice to check in on Will Provancher, and the newest prototype of his Reactive Grip controller didn’t disappoint. Only in his first demo, where a spring was attached directly to the palm of the hand model, was there a noticeable disconnect between the felt force/torque and the visual representation. I suggested adding simple handles to the graphics, which I think would alleviate most of that mismatch. The prototype I tried was attached to a pair of Razer Hydra handles as usual, but I’ve seen pictures of another prototype matched with dual PS Move controllers. Now that’s something I’d like to try.

Figure 1: Me trying Tactical Haptics’ Reactive Grip controller, and looking goofy in the process. Thanks, Will, for not warning me that you’d be taking pictures. 🙂

Samsung Gear VR

Figure 2: A spec sheet for Samsung’s Gear VR.

I haven’t talked much about mobile VR on here yet (apart from my brief review of Durovis Dive, as seen at the 2014 SVVR Expo). The reason is that I haven’t been convinced of its viability so far. Everything I had tested up to last weekend suffered from sub-par optics, poor orientational tracking, and significant lag. I felt that the current state of mobile “VR” was a gimmick, something along the lines of “check out how cool my phone is!”, but not something you would use when nobody’s looking. All that changed. Gear VR has the silky-smooth orientational tracking that comes from a custom inertial sensor, and the low end-to-end latency that comes from having John Carmack fine-tune the OS and the graphics and display pipeline like a Stradivarius.

I was able to try four Gear VR demos: two stereoscopic panoramic videos, a third-person video game, and a “virtual cinema” movie player. I tried the panoramic videos first: the first placed me inside a yurt of some sort, sitting among a family eating dinner, and the second put me on a stage, among the performers of a Cirque du Soleil-type operation. I wasn’t that impressed with either one. While the videos were high resolution, it didn’t feel like I was really in those places; I still felt like I was watching a movie on a spherical screen. Yes, there was depth to the video, but the lateral and depth dimensions felt mismatched, and there was significant distortion in the world when looking around. I am kicking myself for not trying to break the video by tilting my head (as I describe in this previous post). That’s what happens when someone is nagging me from off-screen to get on with the demos. 🙂

The third-person game demo, on the other hand, impressed me very much. There was no noticeable lag or distortion when looking around, and I could totally see myself playing that game for extended periods of time. I found the gaze-based aiming method for the player character’s bow and arrow a little jittery, but that was a very minor thing.

The virtual theater demo was very good as well. So good, in fact, that I had a very odd experience with it. According to Oculus, they put a virtual audience into the virtual theater so that the viewer would not feel as alone; in this case, the audience were the four penguins from the Madagascar franchise. Someone thought it would add to immersion to have the penguins react to the viewer’s gaze by turning around and waving when looked at. Only problem: they would uncannily detect my gaze even if I just glanced at the backs of their heads momentarily. Since penguins normally don’t have eyes in the backs of their heads, that was downright creepy and disturbing, which is in some ways a good sign. Just dial that down a notch in the next iteration.

The big thing, of course, is the lack of positional tracking. But for the panoramic video viewer, positional tracking would have broken everything, and in the virtual theater it was not necessary because the focus of attention, the movie screen, was relatively far away and therefore only minimally affected by motion parallax. In the third-person game, on the other hand, the lack of positional tracking reduced the functionality of the game. Since the game environment was scaled to toy size and presented relatively up close, I really wanted to be able to lean over to look around corners, or move in to get a closer view of a game object. The big marketing challenge for Samsung/Oculus will be to distinguish the not-quite-VR experience of the Gear VR from the full experience of whatever the Rift’s consumer version will be. If they do a poor job at that, it might backfire. To be honest, I would have preferred it if they had called the thing Gear 3D instead of Gear VR.

Verdict: Great optics, great field-of-view, great screen (with noticeable black smear, but I didn’t care much), no noticeable distortion in CG demos, silky-smooth orientational tracking, no noticeable lag, positional tracking somewhat missed. Samsung/Oculus just obliterated the competition. Please try hard to make it clear via marketing that this is not the same VR as available via the Rift et al. Ship it, done.

Crescent Bay Prototype

Figure 3: Oculus’ Crescent Bay prototype.

What a surprise. I was dead certain that Oculus would unveil an Oculus-branded VR input device, to quell the howls of developers. (I fundamentally disagree with the calls for a standard VR input device, but that’s a whole ‘nother story.) Instead, they dropped a prototype for a first consumer version of the Rift. And what a prototype it is.

I hadn’t slept the night from Thursday to Friday because I finally got the DK2 working in Vrui (sans positional tracking) Friday morning at 2am, needed to leave for the airport at 7:45am, and still had to ponder severe judder (which I’ve fixed since then) and pack my suitcase. As a result, I slept in until 9am Saturday morning and ditched Brendan Iribe’s keynote (who wants to listen to the only non-technical guy in the line-up anyway?). While I was hanging out in the demo area having breakfast, another developer ran over and urged me to sign up for a demo slot ASAP. I had no idea why he was telling me that, as I hadn’t brought along any demos to show, so I just said yeah, sure, and carried on with my bagel. Only when I checked in on reddit later did I see the flurry of threads about the prototype announcement. Oh cripes, I’m at the conference and I still missed the most important bit of news of the day! Luckily for me, I barely managed to grab one of the last demo slots, at 8:30pm.

Let me not beat around the bush, and come right out and say it: this thing is good. In terms of VR experience, it is very close to a CAVE; even the Rift DK2, which has the necessary ingredient of positional tracking, wasn’t. Due to the larger tracking volume and generally improved optical tracking, the CB creates a holographic display space of about 5′×5′×6′. As long as the viewer stays inside this space, virtual objects appear rock-solid. If you put your hand on a virtual object, you can almost feel it, and if you move around while keeping your hand still, the virtual object will still appear right underneath it (you can’t actually see your hand, but that’s not necessary for this effect; your body’s proprioceptive sense takes care of it). This is the crucial capability that enables effective 3D interaction (if it were combined with a properly-tracked 6-DOF input device, of course), and the reason why the CAVE is so effective (this ancient video shows it in action).

Why don’t I say it’s identical to or better than a CAVE, given that the CB’s screen is much brighter and has somewhat higher resolution and higher contrast? It’s because the CAVE is still larger, with a holographic display space of 10′×10′×8′ in our case, because it does have properly-tracked 6-DOF input devices, because you see your own body in relation to the virtual objects, and because I still believe it can be used for longer periods of time with less discomfort. But it’s getting close enough to seriously consider head-mounted VR an alternative.

Unfortunately, only a very small number of people have experienced a working CAVE (emphasis on “working”), and I have not had the chance to personally try the only better-known VR experience that I could compare this to, Valve’s famous VR room. I can only report that several others who saw the CB demo rated it on par with Valve’s offering, which is in line with what I expected.

With this out of the way, let’s talk about some details, as much as I know them. My demo attendant was very much on top of his job; I didn’t get to handle, or get a close look at, the CB at all. But we do know that the lenses are large and more drop-shaped than circular, and the Rev. Kyle’s sheer force of personality allowed him to find out that they are two-layer Fresnel lenses. This would help explain the quality of the image, which was in focus all across the large field of view, and it is even possible that the two lens layers form an achromatic doublet, which would simplify lens distortion correction and time warp. I have been disappointed with the DK2’s lenses, and these are an enormous improvement. Here’s a fun Palmer Luckey quote from about a year ago, in response to “why not Fresnel lenses for the Rift”:

“Because [Fresnel lenses] kill contrast, add a variety of annoying artifacts, and don’t actually save all that much weight. They don’t help with form factor, either; Fresnels cannot come close to matching the focal length/magnification of other optics tech.”

If the Rev. Kyle’s observation is correct, then it’s nice to know that even Oculus are just making it up as they go along. Can someone please confirm or refute?

Behind the lenses, the screen is great, and its resolution is high enough that pixels and the screen-door effect are no longer an issue. It might still not be high enough to use the Rift as a desktop monitor replacement, but that’s really too much to ask for at this point in time. Because the screen is an OLED, it still suffers from the temporal ghosting effect known as “black smear,” where objects on a complementary-colored background leave a shadow of sorts behind as they move. The effect was subtle, however, much less noticeable than in the Gear VR, and didn’t bother me. It is possible that the new driver software contains a correction filter that adds a small fraction of the difference between a pixel’s previous and current color values to the new value, to compensate for slow pixel switching times. This is a similar idea to the upcoming cross-talk reduction filter in Vrui, which does the same thing between the left and right eye images on an active or passive stereo screen. Whether the CB indeed does this, or whether it’s simply an improved screen, is pure speculation on my part, obviously.
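
To make that idea concrete, here is a minimal sketch of what such an “overdrive” filter could look like, written as a plain C++ per-pixel loop (a real implementation would live in a shader); the gain constant and function name are purely illustrative, and this is emphatically not Oculus’ or Vrui’s actual code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical OLED "overdrive" filter: push each pixel slightly past its
// target value, in proportion to how far it has to travel from the previous
// frame, to compensate for slow pixel transitions (the cause of black smear).
// The gain is a guess; a real panel would need a measured response curve.
void overdriveFrame(const std::vector<float>& previous, // last frame, one [0,1] value per channel
                    std::vector<float>& current,        // new frame, modified in place
                    float gain = 0.15f)                 // illustrative overdrive strength
{
    for(std::size_t i = 0; i < current.size(); ++i)
    {
        float delta = current[i] - previous[i];              // how far this pixel must switch
        float driven = current[i] + gain * delta;            // overshoot in the direction of change
        current[i] = std::min(1.0f, std::max(0.0f, driven)); // clamp to the displayable range
    }
}
```

The cross-talk reduction filter mentioned above has the same structure, with the previous frame replaced by the other eye’s image and the gain replaced by a measured leakage coefficient.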

Then there is the reduced weight, of course. I’d guess that CB weighs about half as much as DK2, which really helps to keep it securely placed even under fast head movements, and reduces long-term discomfort. What I didn’t like so much was the padding material: it felt more like a rubber lip than a foam strip as in DK1 and DK2, and every person coming out of the demo room had a nice red goggle welt on their face, even after only 10 minutes. That might have been intentional, of course, to prevent people from going in twice. I’m going to trademark the phrase “Crescent Bay Face.”

I don’t have deep insights into the improved tracking besides the obvious: the addition of tracking LEDs to the back of the head strap, for full 360° positional tracking from a single camera. The back LEDs are clearly a backup solution, with the focus still being on frontal tracking: there are simply more LEDs on the headset itself, and in a more varied 3D distribution, and that has a big impact on tracking quality. What impressed me was that there was no noticeable hand-over between front tracking and back tracking. Because the front and back LEDs are only connected by an elastic strap, and their positions relative to each other depend on the user’s head size and shape, it is not trivial to calculate a consistent head position when switching from one set to the other. The tracking software must auto-calibrate itself while observing the front and back LEDs simultaneously, during those times when the user faces sideways. Unfortunately, I didn’t test this hypothesis by obscuring the back LEDs until fully turned around, but no matter what, getting that to work smoothly can’t have been easy.
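
To illustrate the principle, and only the principle (I obviously have no knowledge of Oculus’ actual tracking code), here is a sketch of the transform bookkeeping involved, using Eigen; all the names are made up.

```cpp
#include <Eigen/Geometry>

// Poses estimated by the tracking camera: the front LED cluster is rigidly
// attached to the display and defines the headset frame; the back LED cluster
// sits on the elastic strap, so its offset depends on the user's head.
using Pose = Eigen::Isometry3d; // maps points from a local frame into camera space

// While both clusters are visible (user facing sideways), estimate the
// strap-dependent transform from the back cluster's frame to the headset frame.
Pose calibrateBackCluster(const Pose& frontInCamera, const Pose& backInCamera)
{
    return frontInCamera.inverse() * backInCamera;
}

// Later, when only the back LEDs are visible, recover the headset pose from
// the back cluster's pose and the previously calibrated relative transform.
Pose headPoseFromBack(const Pose& backInCamera, const Pose& backToHead)
{
    return backInCamera * backToHead.inverse();
}
```

The composition itself is the easy part; the hard part is doing that calibration robustly while the strap flexes and the individual pose estimates are noisy, which is presumably where the real engineering effort went.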

The combination of more LEDs, including back LEDs, and larger field-of-view in the tracking camera led to very stable tracking over a large area, and, as I mentioned above, is probably the main cause of improved VR experience over DK2. In the controlled environment of the demo setup, it just worked.

One paragraph about latency, or the lack thereof. With time warp technology, it is now relatively straightforward to reduce apparent head motion-to-photon latency to very low, even sub-frame, levels (via front buffer rendering and chasing the beam). What is still not easy is reducing object update-to-photon latency. What I mean by that is that time warp can take a frame rendered for one perspective and re-render it for a slightly different perspective, based on just-in-time head tracking data. What time warp cannot do is change the contents of that frame: it cannot move objects or update animations, because those changes would require a completely new rendering pass. So the fact that one demo showed an object in the environment reacting to user head position and orientation, without noticeable lag, was highly impressive. Granted, the 3D environment in that demo was rather simple in 2014 terms, and who knows what kind of CPU and GPU horsepower Oculus crammed into those demo machines, but still.
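
For illustration, here is the rotational core of time warp as I understand it, reduced to a toy C++/Eigen function; the conventions (head-to-world quaternions, gather-style lookup) and the names are mine, not Oculus SDK code.

```cpp
#include <Eigen/Geometry>

// Time warp corrects for late head *rotation* only: it computes the rotation
// that takes a view direction in the eye frame at scan-out time back into the
// eye frame the image was rendered for, so a warp pass can look up each output
// pixel in the already-finished frame. Nothing here touches the scene itself.
// Assumption: the quaternions give head orientation in world space.
Eigen::Matrix3d timeWarpLookupRotation(const Eigen::Quaterniond& headAtRenderTime,
                                       const Eigen::Quaterniond& headAtScanoutTime)
{
    // new eye frame -> world -> rendered eye frame:
    Eigen::Quaterniond delta = headAtRenderTime.conjugate() * headAtScanoutTime;
    return delta.toRotationMatrix();
}
```

The point of the sketch is what is missing: the scene never appears in it, which is exactly why animated or interactive objects cannot be corrected this way and need a full re-render.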

Last but not least, I’m a very visually focused person, so audio in VR has never been the number one priority for me, but it was nice to see that the CB has built-in headphones (and maybe even a microphone, according to rumors!), and what appeared to be a solid spatial audio infrastructure. I could not tell whether it was binaural or “mere” stereo spatial sound, but I felt the locations of the sound sources matched the visual cues, and I don’t have any complaints about the headphones’ quality. They were an open design, and I did hear people in the other demo booths oohing and aahing, but I prefer to be able to hear something from my real surroundings even while in VR, so I was fine with that. Built-in sound input/output would be very beneficial for my own tele-presence work, at least.

My only concern is that the demos Oculus showed (I’m trying very hard not to spoil anything) were obviously hand-optimized and fine-tuned to run smoothly at the higher resolution and frame rate of the CB (reportedly 90Hz or even higher). There was almost no interactivity (besides a tiny bit in one demo that I’d argue wouldn’t cause unpredictable rendering time spikes), and it is therefore possible that the demo developers reduced latency further than would normally be possible by pushing the render loop to the end of the video frame, as I describe in this previous article. That would be a cheat of sorts, because that kind of optimization is not really applicable to truly interactive applications. In other words, it is not clear yet that the Crescent Bay would perform quite as brilliantly outside the incredibly tightly controlled demo environment in which it was shown.
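
To make the suspected “cheat” concrete, here is a minimal sketch of the idea, under the assumption of a known, nearly constant worst-case render time; the callback and timing parameters are placeholders, not any real SDK interface.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// If rendering is short and highly predictable, an application can sleep
// through most of the video frame, sample the head tracker as late as
// possible, and still finish just before scan-out, minimizing latency.
// The moment rendering ever takes longer than the assumed worst case, the
// frame is missed entirely, which is why this only works in controlled demos.
void lowLatencyFrameLoop(std::chrono::steady_clock::time_point nextVsync,
                         std::chrono::microseconds vsyncPeriod,        // ~11.1ms at 90Hz
                         std::chrono::microseconds worstCaseRender,    // measured offline
                         const std::function<void()>& sampleAndRender) // read tracker, then draw
{
    for(;;)
    {
        // Wake up just early enough for rendering to finish right before scan-out:
        std::this_thread::sleep_until(nextVsync - worstCaseRender);

        sampleAndRender();        // use the freshest possible head pose

        nextVsync += vsyncPeriod; // schedule the next frame
    }
}
```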

Verdict: The Crescent Bay prototype is a very good fully-functional VR headset, or as I would put it, holographic display environment. It is ready for serious use. If Oculus decided to slap a “consumer version 1” sticker on it tomorrow, I would have no problem with that (besides the total lack of software, duh). Ship it, done.

Now let me indulge in another bit of wild speculation. Here’s a very odd thing I noticed about the CB, something that I haven’t heard mentioned by anybody else. Anyway, here goes. My IPD (60mm) is significantly below the population average and the Rift’s default setting of 63.5mm. This means that when I put on a Rift with default settings, the apparent scale of the environment and the objects within it is too small by about 30%, give or take. This is most obvious when standing on a virtual floor. Head tracking gives the software the exact height of my eyes above the real floor, and at least my VR software translates that 1:1 into the displayed images. But with the default IPD, I can clearly see the virtual floor located roughly at the level of my knees. I do not need to see my feet to notice this; my body’s proprioceptive sense tells me almost exactly where my feet should be, but are not. If I set my software’s IPD to 60mm, the floor appears exactly where it should be.

Now here’s the kicker: during the CB demo, the virtual floor appeared exactly where it should have been. I never told the demo attendant my IPD, and we never ran through any kind of calibration procedure. So what gives? There are several possible explanations. For one, Oculus might have found some magic bullet that removes the need for IPD calibration via optical means, by using special lenses with special properties. But that can’t be it: differences in IPD affect parallax in the left/right images, and those cannot be replicated by post-processing, optical or digital. If my right eye is on the right side of a polygon and my left eye is on the left, there is no post-processing in the world that can warp one view into the other. The only other explanation, besides sheer coincidence, is that Oculus implemented an automatic calibration method on the sly. Maybe the tracking camera took pictures of my face while I was looking at it before putting on the headset and calculated my IPD from that, or maybe, just maybe, there are eye tracking cameras in the CB. Now that would be quite the ace to have up their sleeve. The suspense is killing me.

One final thing: several people commented on the lack of simulator sickness after their CB demo, as if Oculus had somehow solved the problem for good. Not so. They removed one cause of simulator sickness, perspective mismatches, via excellent positional tracking and (maybe?) automatic calibration and (maybe maybe?) dynamic vergence updates via eye tracking. They circumvented the other big one by presenting static VR experiences, where the viewer is free to walk around inside the physical holographic display space, but where the physical space does not move with respect to the virtual space (i.e., where there is no software-induced locomotion). There were two demos where the viewer was moved through space, and in both cases the motion was slow and of constant velocity, reducing vestibular cue mismatches. But still, the moment I started moving in the first demo that did so, I did feel it in my stomach, and it didn’t feel good. Don’t heave a sigh of relief yet; simulator sickness is still a thing.

Survios

Now here was a major bummer. I managed to get myself invited to Survios’ demo and after-party on Saturday evening, and I was really looking forward to that (the demo part, that is). I am currently building a new VR lab on the UC Davis campus, and it is similar in aim to what Survios are doing: large-area, multi-user, shared head-mounted VR. I have been following Survios since early in the Project Holodeck days, and very much wanted to try their system and compare notes with them. But it wasn’t to be. We arrived a little late due to my late CB demo slot, I was number 20+ in line for a demo, and after only five people or so had run through, the entire system went black for lack of battery power. And that was it for the night. One would think $4 million in venture capital funding would pay for more than two battery packs.

Closing Thoughts

If all of Oculus Connect had been nothing but the unveiling of, and demos with, the Crescent Bay prototype, I would still have considered it a worthwhile event. With everything else thrown in, it was so much more. My only regret is that I didn’t manage to corner one of Oculus’ triumvirate to convince them to help me develop a Vrui driver for the Rift DK2 by disclosing those parts of the low-level USB protocol that shouldn’t need to be trade secrets, so I guess I’ll have to take that request to the Oculus developer forums and hope for the best.

21 thoughts on “On the road for VR: Oculus Connect, Hollywood”

  1. If the rubber on the Oculus was conductive rubber, would it be possible that they had a small array of EEG-like sensors embedded and used the signals produced by the eye muscles to calculate eye position?

    • I guess it’s possible to track eye movements that way during use, but they’d still need to somehow detect your face’s overall shape and the resting position of your eyes beforehand, in order to infer vergence etc. from muscle activity later.

      • Wouldn’t an array like that be capable of triangulating/trilaterating the position of the muscles in relation to the array, and from that infer the position of the eyeballs? And wouldn’t it be easy to calibrate on-the-fly by just watching for the highest values in each direction?

        Actually, isn’t the rest position for the eyes the one that makes the least amount of noise?

        • Electrooculography is great for telling you saccade duration and velocity, and can give you a rough estimate of direction, but it really isn’t suitable for measuring eye position, certainly not over long timeframes (it’s that nasty integration of error that also bites IMUs). And that’s in a lab environment, with medical skin-contact sensors positioned directly over the muscles. Now add trying to do it with a low-conductance sensor (impregnated rubber), far away from the desired muscle groups, with no reliable fit?
          If Oculus are using EOG, then I will eat a particularly large hat.

          • Hm…

            It can’t triangulate the position of the muscles in relation to the sensor array using the measured intensity and/or timing of the changes?

            And why would there be integration error? Aren’t the readings in absolute values and proportional to how much the muscle is tensed?

            From what I remember from a couple of years ago when I tried neurofeedback therapy, blinking or looking away would make all the readings, even the ones from the back of the head, spazz out; it doesn’t sound like it would be that hard to get readings from just around the eye region.

  2. (Oops, I forgot to mark the new comments notification checkbox; doing it now, sorry for this useless additional comment.)

  3. For your wild speculation, don’t you think that it could be an illusion, like an optical illusion?
    Your feet touch the ground, so the ground is at this level, no matter what our eyes see…
    (Sorry for my English, it’s not my native language… ;))

    • It could be, but I typically use apparent floor position as an indicator of the calibration quality of a VR system, and as a guide during calibration. I find it quite differentiating; it seems that our body is fine-tuned to know the position of our feet without looking at them. It would make evolutionary sense, to support walking while looking forward.

      In short, in this particular case I don’t think it’s an illusion. Might still be coincidence, though.

  4. So disappointed about the Leap demo. I have one and rarely use it; I’ll download the demos to debunk the interference claim. You should add support for the Leap in Vrui? Regarding the comment above, I don’t know if the conductive rubber would work well enough at a price point we could bear (but they do have 2+ billion to get it there). I’ve worked with the Emotiv EPOC, a real-time EEG computer peripheral; it has a facial recognition mode (which is OK), and the consumer version is only $350. And they’re at the forefront of low-cost EEG computer interfaces. I would love to see support for the EPOC as well. The floor position could be calibrated by a simple tiny IR laser range finder on the CB: when you look down, it’s calibrated. But fingers crossed for eye tracking. I can’t wait to see your new room setup. If you need a cross-country remote tester, feel free, lol.

    • It’s my understanding that the muscles are much “louder” than the brain, so it probably isn’t as expensive to make something that just listens to muscles instead of a device that tries to pick up brainwaves.

      • The most important thing is that it’s going to have to be a good implementation. The facial sensing on the EPOC is OK but not good enough, and the cheapest model is $300-350. So I’m going to venture that the R&D, plus the hardware integration and materials, would add at least an extra $150 to the end product. Now you, me, and the rest of our kind (nerds and techies) are going to have a hard time finding any reason not to buy it; they could double the price and most of us still would. But that extra sensor hardware would add about 50% to the price, and the general public no likie. Thinking about it, though, if they did implement it, they could market it as a stress reliever, etc. It would be able to sense when you have eye strain, do eye tracking of sorts (it’s not super accurate on the EPOC), and tailor the environment to balance your mind, just to name a few possibilities. All of which would add to immersion and allow you to stay in VR longer. Eventually it will get there, but I don’t think on the first go-round; then again, with 2 billion dollars, John Carmack, and the ability to literally reach every Facebook member, a.k.a. everyone in the world, anything could happen. Can you imagine the zero-day consumer release of the Rift? There are going to be ads everywhere; it will be talked about for decades to come, or until it’s as common as a Band-Aid (Band-Aid is a brand, not an adhesive bandage).

  5. Interesting as always. With or without positional tracking, DK2 in Vrui is still very good news; nice work, man! The anti-ghosting mechanism also sounds quite exciting. Let me know if you need test pilots, or if you plan on releasing the new Vrui any time soon!

    • It’s probably one of those things that work OK in a controlled lab environment, but break down once you try them in the wild. In this particular case I’m not sure what caused the problems. Having a raw view of the Leap’s camera feeds would probably be a useful debugging feature for situations like these: if you knew what the device was actually seeing, you might be able to counteract it.

      • The problem was no doubt caused by quartz halogen lights in the ceiling – they are common in conference centers, whereas labs in offices and schools almost invariably have fluorescent lights which emit much less IR.
        I tried the original version of castAR in 2013, and the tracking was perfect – until the conference room ceiling lights directly overhead were turned on so the Tesla coil people next door could pack up.
        This is not a problem for home or office use, but people doing conferences or demos should be aware of it.

