My friend Serban got his Oculus Rift dev kit in the mail today, and he called me over to check it out. I will hold back a thorough evaluation until I get the Rift supported natively in my own VR software, so that I can run a direct head-to-head comparison with my other HMDs, and also my screen-based holographic display systems (the head-tracked 3D TVs, and of course the CAVE), using the same applications. Specifically, I will use the Quake III Arena viewer to test the level of “presence” provided by the Rift; as I mentioned in my previous post, there are some very specific physiological effects brought out by that old chestnut. My other HMDs are severely lacking in that department, and I hope the Rift will push it close to the level of the CAVE. But here are some early impressions.
The Oculus team went all out on production values. The dev kit comes in a custom-fitted black plastic carrying case (see Figure 1), and the unit itself looks much more refined than I would have expected of a dev kit. It’s black matte plastic all the way around, from the headset itself to the controller box.
This is, of course, the most important bit. And I’m very relieved that the Oculus Rift is as good as I had hoped. It’s surprisingly light, and the “ski goggle” design, which had me slightly worried, actually works. One unexpected benefit of the design is that it’s possible to put on and take off the unit without having to deal with the head straps, just by holding it up to one’s face, and still get the optimal view. That’s very important while messing around or debugging software; the HMDs I have (“ruggedized” eMagin Z800 Visor and Sony HMZ-T1) are way too cumbersome to put on or take off. Yet another benefit of the design is that it really only fits in one particular spot, so it requires much less mucking around to position right and get a good view of the screens. When using the straps, the display sits tight. Even shaking my head didn’t dislodge it, or shift it out of the optimal viewing position. Given that these are soft elastic straps and not the head vises found in many other HMDs, that’s very good. I’m not saying the Rift is exactly pleasant to wear, but compared to it, my Z800 is a headcrab.
On to the optics. I am utterly impressed by the optical properties of the lenses, especially considering how strong they are. Once the display sits properly (and it’s easy to seat), the entire screen area is in focus and clear. This is very different from my Z800, where I’ve yet to find a position where the screens are entirely in focus, and even from the HMZ-T1 with its better optics. There is very little chromatic aberration; I only saw some color fringes when I started looking for them. Given that the Rift’s field of view is more than twice that of the Z800 and HMZ-T1, it’s an amazing feat.
Compared to the lenses’ total win, the screen itself is a bit more dodgy. And it’s not just the resolution. For comparison: the Z800 has 800×600 pixels per eye, the HMZ-T1 has 1280×720 pixels per eye. The Oculus Rift dev kit has 640×800 pixels per eye (the consumer version is supposed to get 960×1080 pixels per eye). Now, the Rift has significantly more solid angle real estate over which these pixels are spread, so it is comparatively low-res, but that didn’t really bother me. No, the Rift’s problem is that there are small black borders around each pixel, which feels like looking through a screen door attached to one’s face all the time. I found that quite distracting and annoying, and I hope it will get fixed.
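The resolution tradeoff becomes clearer as angular pixel density: pixels per degree of field of view, not raw pixel counts. A quick back-of-the-envelope calculation, with the caveat that the horizontal FOV figures below are rough assumptions based on public spec sheets, not anything I measured:

```python
# Back-of-the-envelope angular pixel density. Horizontal per-eye pixel
# counts are from the text above; the horizontal FOV figures are rough
# assumptions from public spec sheets, not measurements.
def pixels_per_degree(h_pixels, h_fov_deg):
    """Average horizontal pixels per degree of field of view."""
    return h_pixels / h_fov_deg

hmds = {
    "eMagin Z800":  (800, 32.0),   # ~40 deg diagonal, ~32 deg horizontal (assumed)
    "Sony HMZ-T1":  (1280, 39.0),  # ~45 deg diagonal, ~39 deg horizontal (assumed)
    "Rift dev kit": (640, 90.0),   # ~110 deg diagonal, ~90 deg horizontal (assumed)
}

for name, (px, fov) in hmds.items():
    print(f"{name}: {pixels_per_degree(px, fov):.1f} px/deg")
```

Even with generous assumptions, the Rift spreads far fewer pixels over each degree of view than either of the older HMDs, which is exactly why the inter-pixel gaps become visible as a screen-door pattern.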
The built-in orientational head tracker is another solid showing. It feels very precise, and I did not notice any drift. There were reports of yaw drift, and that the included magnetic compass was not part of the tracker’s sensor fusion algorithm, but I didn’t notice anything bad. I did notice that, when I was turning my head quickly left and right, the horizon tilted quite a bit during those motions, but it was only slightly distracting. This might be caused by the sensor itself, or by the current iteration of the sensor driver.
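To illustrate why including the magnetometer in sensor fusion matters for yaw: a gyro alone integrates its bias into ever-growing yaw error, while an absolute heading reference can bleed that drift off. The sketch below is a generic textbook complementary filter, not the Rift SDK’s actual fusion code:

```python
# Generic complementary filter for yaw: integrate the gyro for
# responsiveness, and pull slowly toward the magnetometer heading to
# cancel drift. This is NOT the Rift SDK's actual fusion algorithm,
# just a common scheme illustrating the idea.
def fuse_yaw(yaw, gyro_z, mag_heading, dt, k=0.02):
    """One filter step; angles in degrees, gyro_z in deg/s."""
    yaw += gyro_z * dt                                   # fast path: integrate rate
    error = (mag_heading - yaw + 180.0) % 360.0 - 180.0  # shortest angular difference
    return yaw + k * error                               # slow path: bleed off drift

# A biased gyro (0.5 deg/s of drift) while the head is actually still:
yaw = 0.0
for _ in range(1000):  # 10 s at 100 Hz
    yaw = fuse_yaw(yaw, gyro_z=0.5, mag_heading=0.0, dt=0.01)
print(f"yaw after 10 s: {yaw:.2f} deg (raw integration would show 5.00 deg)")
```

Without the magnetometer term the same bias would accumulate without bound; with it, the error settles at a small steady-state offset instead.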
These early observations are based on the demo software that comes with the dev kit, specifically the Tuscany demo (both the Unity3D and native SDK version), and the very simple — yet ultimately most interesting — “tiny room” demo.
Before going on, a warning. When we first fired up the Tuscany demo from within Unity3D, I was ready to toss the Rift into the garbage and call it a day. For some reason, the scene player did not properly open its rendering window on the Rift’s screen (the fullscreen button only maximized the window, but didn’t remove the decoration), and as a result the field of view and aspect ratios didn’t line up properly. The entire scene was wobbling like so much Jell-O, and the architecture came straight from the nightmares of Dr. Caligari. It was an express train to vomittown. Fortunately, some googling and twiddling with settings got this sorted out, and then everything was fine. Still, not a pleasant out-of-box experience.
First stop, calibration. This is a big issue for any HMD, as miscalibration is (in my humble opinion) a larger cause for motion sickness than lag. I cannot judge overall calibration yet; these demos are not “hands on” enough to really evaluate that. I’ll have to wait until I can run the Nanotech Construction Kit, or the Virtual Clay Modeler. What I can judge is lens distortion correction, and on that the developers did a bang-up job. Straight lines in the virtual world look perfectly straight (this is where the “tiny room” demo with its high-frequency and linear artificial textures came in handy).
The Rift (or rather its SDK) does lens correction via post-processing. First, the virtual world is rendered into a “virtual” camera image, which is then resampled using a simple radial undistortion formula based on a quadratic polynomial. The fundamental problem with this approach is that it has to resample a 1280×800 pixel image into another 1280×800 pixel image, which requires very good reconstruction filters to pull off. The SDK’s fragment shader simply uses bilinear filtering, which leads to a distinct blurriness in the image, and doesn’t seem to play well with mipmapping either (evidenced by “sparkliness” and visible seams in oblique textures). The SDK code shows that there are plans to increase the virtual camera’s resolution for poor-man’s full-scene antialiasing, but all related code is commented out at the moment. For comparison, we turned off lens distortion correction, and the resulting views seemed significantly crisper (albeit distorted, duh). Interestingly, lens distortion was a lot less pronounced than I had expected, given the Rift’s wide field of view and the big talk of higher foveal resolution — still, turning off lens correction is not something you’d ever want to do; it was just a test.
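The resampling step above can be sketched in a few lines: for each output pixel, a radial polynomial maps its normalized coordinate back into the rendered image, and the image is sampled bilinearly at that fractional position. The coefficients below are illustrative placeholders, not the SDK’s actual values:

```python
# Sketch of post-process radial undistortion: map each output pixel to a
# source coordinate via a polynomial in r^2, then sample the rendered
# image bilinearly there. Coefficients are illustrative, not the Rift
# SDK's actual distortion parameters.
def undistort_coord(x, y, k0=1.0, k1=0.22, k2=0.24):
    """Map normalized output coords in [-1,1] to source coords."""
    r2 = x * x + y * y
    scale = k0 + k1 * r2 + k2 * r2 * r2   # radial scale factor
    return x * scale, y * scale

def bilinear(img, u, v):
    """Sample a 2D list `img` at fractional pixel coords (u, v)."""
    h, w = len(img), len(img[0])
    u = min(max(u, 0.0), w - 1.001)
    v = min(max(v, 0.0), h - 1.001)
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# The center maps to itself; edge pixels sample from outside the unit
# square, so the periphery is magnified and resampled off-grid:
print(undistort_coord(0.0, 0.0), undistort_coord(1.0, 0.0))
```

The blurriness follows directly from this: almost every output pixel lands between four source pixels, and a bilinear average of four samples is a weak reconstruction filter for a same-resolution resampling.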
Another nice calibration-related side effect of the great wide-angle lenses is that it’s easy to tell whether the display sits right when inside an application. Even small shifts from the ideal position lead to very noticeable radial distortion in the views; this should make it easier to train users to put on the display correctly. There is nothing worse than miscalibration that’s too subtle to detect consciously, but still strong enough to cause nausea. The final SDK could include a “splash screen” of sorts that displays a grid on both eyes, and asks the viewer to slightly shift the unit until all grid lines look straight. That should work just fine.
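Such a splash screen would be trivial to generate; a hypothetical sketch of the test pattern (my own illustration, not anything in the SDK):

```python
# Hypothetical calibration "splash screen" pattern: a regular grid.
# If the unit sits wrong, these straight lines appear curved through
# the lens; the user shifts the display until they look straight.
# My own illustration, not part of the Oculus SDK.
def grid_pattern(width, height, spacing=40):
    """Return a 2D list: 1 where a grid line falls, else 0."""
    return [[1 if (x % spacing == 0 or y % spacing == 0) else 0
             for x in range(width)]
            for y in range(height)]

img = grid_pattern(8, 8, spacing=4)
for row in img:
    print("".join("#" if p else "." for p in row))
```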
On to latency. The first thing I noticed in the Tuscany demo was too much motion blur. We dug through the code, but could not find out what exactly is responsible for it. It is possible that the display screen can’t switch pixels fast enough, meaning that it’s a hardware effect that can’t be addressed, but wildly moving windows around while the Rift was mirroring the desktop didn’t appear to blur as much. So we believe it’s an intentional effect, buried somewhere in the SDK code where we haven’t looked yet. It’s possible that there is a recursive low-pass filter “hidden” in the lens distortion correction shader, enabled by some other bit of code globally enabling alpha blending with constant opacity. Must investigate further. (By the way: major kudos on providing the SDK source.) Update: It appears I was wrong, and hunting for motion blur code in the SDK was a wild goose chase. I forgot — and that’s embarrassing — that the Z800’s display is based on OLED, with 100x-1000x faster response times than LCD. So the motion blur that surprised me might just be the nature of the LCD beast. But let me make this clear: while I noticed the motion blur, and — had it been intentional — would have dialed it back, it was in no way a show stopper.
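For readers wondering what “alpha blending with constant opacity” has to do with motion blur: blending each new frame over the previous output with a fixed opacity is exactly a recursive (IIR) low-pass filter. A few lines of my own illustration, not code from the SDK:

```python
# Blending each new frame over the previous output with constant
# opacity a gives a recursive low-pass filter:
#   out[n] = a * frame[n] + (1 - a) * out[n-1]
# One plausible way trailing blur could arise "for free" in a
# post-processing pass; my illustration, not the SDK's code.
def blend_sequence(frames, a=0.5):
    """Accumulate per-frame values as constant-opacity blending would."""
    out = frames[0]
    trace = [out]
    for f in frames[1:]:
        out = a * f + (1 - a) * out
        trace.append(out)
    return trace

# A pixel that is white for one frame, then black: the blended output
# decays over several frames instead of dropping immediately.
print(blend_sequence([1.0, 0.0, 0.0, 0.0], a=0.5))  # -> [1.0, 0.5, 0.25, 0.125]
```

That exponential tail is precisely what a ghosting trail behind a moving edge would look like, which is why constant-alpha blending was a plausible suspect before the LCD response time explanation surfaced.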
But apart from that, the display felt “snappy.” I would say total latency is comparable to the HMD-based system I’m showing in my latest video, maybe a tad higher. But that could just be the motion blur. I won’t speculate more until I can run a fair test, but I will say that the motion prediction time delta in the tracking driver code appears to be set to 50ms.
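Motion prediction of this kind is typically plain dead reckoning: extrapolate the current orientation along the measured angular velocity by a fixed time delta. The 50ms figure is from the tracking driver code mentioned above; the function itself is a generic sketch, not the SDK’s implementation:

```python
# Simple dead-reckoning orientation prediction, as tracking drivers
# commonly do: extrapolate the current orientation along the measured
# angular velocity by a fixed time delta. The 50 ms default mirrors the
# figure seen in the tracking driver code; the function is a generic
# sketch, not the SDK's implementation.
def predict_yaw(yaw_deg, angular_vel_deg_s, dt=0.05):
    """Extrapolate yaw `dt` seconds ahead at a constant angular rate."""
    return yaw_deg + angular_vel_deg_s * dt

# Turning the head at 200 deg/s, prediction leads by 10 degrees:
print(predict_yaw(30.0, 200.0))  # -> 40.0
```

A 10-degree lead at brisk head speeds is substantial, which is why the prediction delta is worth knowing when reasoning about perceived latency.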
A word about the SDK in general. It contains everything needed to get software running on the Oculus Rift (that’s the plan, obviously), but I’m skeptical it will lead to software that’s portable to other display systems (obviously, that’s the plan, too). For example, the camera model embodied by the SDK does not generalize to projection-based environments. Doesn’t really surprise me, and doesn’t matter to me either — once the Rift works with Vrui I won’t look back — but it’s still noteworthy. Game developers who want to support the Rift and something else will have to work hard.
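To make the camera-model point concrete: an HMD camera is a frustum rigidly attached to the head, whereas a CAVE wall is a fixed rectangle in space, so the frustum must be recomputed off-axis every frame as the tracked eye moves. The standard construction, sketched below in my own code (not Vrui’s or the Oculus SDK’s), projects the screen edges onto the near plane:

```python
# Off-axis frustum for a fixed screen, the camera model a
# projection-based display needs. The screen is an axis-aligned
# rectangle in the z=0 plane; the tracked eye sits at (ex, ey, ez)
# with ez > 0. Returns glFrustum-style (left, right, bottom, top)
# at the near plane. My sketch of the standard construction.
def offaxis_frustum(ex, ey, ez, screen_left, screen_right,
                    screen_bottom, screen_top, near=0.1):
    """Frustum extents at the near plane for a screen at z=0."""
    scale = near / ez   # similar triangles: project screen edges onto near plane
    return ((screen_left - ex) * scale, (screen_right - ex) * scale,
            (screen_bottom - ey) * scale, (screen_top - ey) * scale)

# Eye centered: symmetric frustum. Eye shifted right: the frustum
# skews left, keeping the image glued to the fixed screen.
print(offaxis_frustum(0.0, 0.0, 2.0, -1.0, 1.0, -1.0, 1.0))
print(offaxis_frustum(0.5, 0.0, 2.0, -1.0, 1.0, -1.0, 1.0))
```

A camera model that only expresses head-attached, per-eye frusta simply has no slot for this eye-relative asymmetry, which is the portability problem in a nutshell.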
Aside from that, the SDK is decidedly low-level, and that’s not a surprise, either. The danger here is that each developer will have to roll their own interaction and navigation methods, and that can be a very bad thing. My canonical analogy is mouse look in desktop first-person games: imagine a world where half of all games move the view up when a player pushes the mouse forward, and the other half move the view down, with no way to customize that. Now imagine that a thousand times worse. I’m hoping that interaction standards will emerge very quickly, and the Unity3D binding might help with that.
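The mouse-look analogy, made concrete: the same physical input maps to opposite view changes depending on a convention, and sanity demands that the convention be a per-user setting rather than a per-application accident. A hypothetical sketch, not any real API:

```python
# Hypothetical per-user mouse-look setting: the same vertical mouse
# motion pitches the view up or down depending on a user preference.
# Illustrates the convention problem; not any real engine's API.
def apply_mouse_look(pitch_deg, mouse_dy, sensitivity=0.1, invert=False):
    """Update view pitch from vertical mouse movement."""
    direction = -1.0 if invert else 1.0
    pitch_deg += direction * mouse_dy * sensitivity
    return max(-90.0, min(90.0, pitch_deg))   # clamp to avoid flipping over

print(apply_mouse_look(0.0, 50.0))               # -> 5.0
print(apply_mouse_look(0.0, 50.0, invert=True))  # -> -5.0
```

In VR the equivalent choices multiply: navigation metaphors, grab gestures, menu invocation, comfort options. If every application hard-codes its own answers, users will have to relearn their reflexes per title.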
On to the big question: will the Oculus Rift make you sick? Not sure. I did not get any eye strain from the display; that’s due to the excellent optics, and the seemingly good calibration. I did, however, get a pronounced feeling of dizziness from walking (or rather gliding) through the Tuscany demo. Whether that’s due to the lack of positional head tracking, the increased field of view (not larger than the CAVE’s, though), subtle miscalibration, the WASD control scheme, or motion blur, I cannot tell. But I have never gotten dizzy from my other environments, so I need to look into that once the Rift works in Vrui.
The Oculus Rift dev kit is very good hardware. It has great handling, excellent optics, a great field of view, an OK screen, good orientational head tracking, and a solid, albeit low-level, SDK. Once I can integrate it into my VR infrastructure, it will be a major improvement over the HMDs I already have.
Final observation: the Tuscany demo has lens flare. Very pronounced lens flare. Riddle me this: if the Oculus Rift simulates a naked-eye person walking through an environment, where does the lens flare come from? Fortunately it was easy to turn off via the Unity3D development environment. I didn’t know J.J. Abrams worked for Oculus. Or did someone get a lens flare plug-in for Christmas?