First impressions from the Oculus Rift dev kit

My friend Serban got his Oculus Rift dev kit in the mail today, and he called me over to check it out. I will hold back a thorough evaluation until I get the Rift supported natively in my own VR software, so that I can run a direct head-to-head comparison with my other HMDs, and also with my screen-based holographic display systems (the head-tracked 3D TVs, and of course the CAVE), using the same applications. Specifically, I will use the Quake III Arena viewer to test the level of “presence” provided by the Rift; as I mentioned in my previous post, there are some very specific physiological effects brought out by that old chestnut, and my other HMDs are severely lacking in that department. I hope that the Rift will push it close to the level of the CAVE. But here are some early impressions.

Figure 1: What it would look like to unbox an Oculus VR dev kit, if one were to have such a thing.

Outward appearances

The Oculus team went all out on production values. The dev kit comes in a custom fitted black plastic carrying case (see Figure 1), and the unit itself looks much more refined than I would have expected of a dev kit. It’s black matte plastic all the way around, from the headset itself to the controller box.

The hardware

This is, of course, the most important bit. And I’m very relieved that the Oculus Rift is as good as I had hoped. It’s surprisingly light, and the “ski goggle” design, which had me slightly worried, actually works. One unexpected benefit of the design is that it’s possible to put on and take off the unit without having to deal with the head straps, just by holding it up to one’s face, and still get the optimal view. That’s very important while messing around or debugging software; the HMDs I have (“ruggedized” eMagin Z800 Visor and Sony HMZ-T1) are way too cumbersome to put on or take off. Yet another benefit of the design is that it really only fits in one particular spot, so it requires much less mucking around to position right and get a good view of the screens. When using the straps, the display sits tight. Even shaking my head didn’t dislodge it, or shift it out of the optimal viewing position. Given that these are soft elastic straps and not the head vises found in many other HMDs, that’s very good. I’m not saying the Rift is exactly pleasant to wear, but compared to it, my Z800 is a headcrab.

On to the optics. I am utterly impressed by the optical properties of the lenses, especially considering how strong they are. Once the display sits properly (and it’s easy to seat), the entire screen area is in focus and clear. This is very different from my Z800, where I’ve yet to find a position where the screens are entirely in focus, and even from the HMZ-T1 with its better optics. There is very little chromatic aberration; I only saw some color fringes when I started looking for them. Given that the Rift’s field of view is more than twice that of the Z800 and HMZ-T1, it’s an amazing feat.

Compared to the lenses’ total win, the screen itself is a bit more dodgy. And it’s not just the resolution. For comparison: the Z800 has 800×600 pixels per eye, the HMZ-T1 has 1280×720 pixels per eye. The Oculus Rift dev kit has 640×800 pixels per eye (the consumer version is supposed to get 960×1080 pixels per eye). Now, the Rift has significantly more solid angle real estate over which these pixels are spread, so it is comparatively low-res, but that didn’t really bother me. No, the Rift’s problem is that there are small black borders around each pixel, which feels like looking through a screen door attached to one’s face all the time. I found that quite distracting and annoying, and I hope it will get fixed.
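To put “comparatively low-res” in rough numbers, here is a back-of-the-envelope angular-resolution comparison. The per-eye horizontal field-of-view figures below are ballpark assumptions, not measured values (the Rift dev kit’s per-eye horizontal FOV is often quoted around 90°, the Z800’s and HMZ-T1’s in the 30–40° range); swap in real numbers for a fair comparison.

```python
# Rough pixels-per-degree comparison of the three HMDs mentioned above.
# FOV numbers are assumptions for illustration, not measured values.
hmds = {
    "eMagin Z800":  {"h_pixels": 800,  "h_fov_deg": 32.0},
    "Sony HMZ-T1":  {"h_pixels": 1280, "h_fov_deg": 39.0},
    "Rift dev kit": {"h_pixels": 640,  "h_fov_deg": 90.0},
}

for name, spec in hmds.items():
    ppd = spec["h_pixels"] / spec["h_fov_deg"]  # pixels per degree
    print(f"{name}: {ppd:.1f} pixels/degree")
```

Even with generous assumptions, the Rift spreads fewer pixels over a much larger solid angle, which is exactly why the per-pixel borders become visible.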

The built-in orientational head tracker is another solid showing. It feels very precise, and I did not notice any drift. There were reports of yaw drift, and that the included magnetic compass was not part of the tracker’s sensor fusion algorithm, but I didn’t notice anything bad. I did notice that, when I was turning my head quickly left and right, the horizon tilted quite a bit during those motions, but it was only slightly distracting. This might be caused by the sensor itself, or by the current iteration of the sensor driver.

The software

These early observations are based on the demo software that comes with the dev kit, specifically the Tuscany demo (both the Unity3D and native SDK version), and the very simple — yet ultimately most interesting — “tiny room” demo.

Before going on, a warning. When we first fired up the Tuscany demo from within Unity3D, I was ready to toss the Rift into the garbage and call it a day. For some reason, the scene player did not properly open its rendering window on the Rift’s screen (the fullscreen button only maximized the window, but didn’t remove the decoration), and as a result the field of view and aspect ratios didn’t line up properly. The entire scene was wobbling like so much Jell-O, and the architecture came straight from the nightmares of Dr. Caligari. It was an express train to vomittown. Fortunately, some googling and twiddling with settings got this sorted out, and then everything was fine. Still, not a pleasant out-of-box experience.

First stop, calibration. This is a big issue for any HMD, as miscalibration is (in my humble opinion) a bigger cause of motion sickness than lag. I cannot judge overall calibration yet; these demos are not “hands-on” enough to really evaluate that. I’ll have to wait until I can run the Nanotech Construction Kit, or the Virtual Clay Modeler. What I can judge is lens distortion correction, and on that the developers did a bang-up job. Straight lines in the virtual world look perfectly straight (this is where the “tiny room” demo with its high-frequency, linear artificial textures came in handy).

The Rift (or rather its SDK) does lens correction via post-processing. First, the virtual world is rendered into a “virtual” camera image, which is then resampled using a simple radial undistortion formula based on a quadratic polynomial. The fundamental problem with this approach is that it has to resample a 1280×800 pixel image into another 1280×800 pixel image, which requires very good reconstruction filters to pull off. The SDK’s fragment shader simply uses bilinear filtering, which leads to a distinct blurriness in the image, and doesn’t seem to play well with mipmapping either (evidenced by “sparkliness” and visible seams in oblique textures). The SDK code shows that there are plans to increase the virtual camera’s resolution for poor man’s full-scene antialiasing, but all related code is commented out at the moment. For comparison, we turned off lens distortion correction, and the resulting views seemed significantly crisper (albeit distorted, duh). Interestingly, lens distortion was a lot less pronounced than I had expected, given the Rift’s wide field of view and the big talk of higher foveal resolution; still, turning off lens correction is not something you’d ever want to do, it was just a test.
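To make the resampling step concrete, here is a minimal sketch of the quadratic radial model described above, in Python rather than the SDK’s actual fragment shader; the coefficient value is made up for illustration and is not the SDK’s constant.

```python
# Minimal sketch of radial lens-undistortion resampling. For each pixel of
# the lens-corrected output image, compute where to sample the rendered
# "virtual camera" image; the SDK does the same per-fragment on the GPU,
# with bilinear filtering (hence the blurriness discussed above).

def undistort_sample(xn, yn, k1=0.22):
    """Given output coordinates normalized to [-1, 1] around the lens
    center, return the normalized source coordinates to sample, using the
    simple quadratic radial model r' = r * (1 + k1 * r^2). The k1 value
    here is an arbitrary illustration, not the Rift SDK's coefficient."""
    r2 = xn * xn + yn * yn      # squared radius from the lens center
    scale = 1.0 + k1 * r2       # quadratic polynomial in r
    return xn * scale, yn * scale

# A point halfway to the edge samples from slightly further out, which
# pre-compensates the lens's barrel distortion.
sx, sy = undistort_sample(0.5, 0.0)
```

Because neighboring output pixels map to non-integer source positions, every output pixel is a blend of several source pixels, which is where the need for a good reconstruction filter comes from.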

Another nice calibration-related side effect of the great wide-angle lenses is that it’s easy to tell whether the display sits right when inside an application. Even small shifts from the ideal position lead to very noticeable radial distortion in the views; this should make it easier to train users to put on the display correctly. There is nothing worse than miscalibration that’s too subtle to detect consciously, but still strong enough to cause nausea. The final SDK could include a “splash screen” of sorts that displays a grid on both eyes, and asks the viewer to slightly shift the unit until all grid lines look straight. That should work just fine.

On to latency. The first thing I noticed in the Tuscany demo was too much motion blur. We dug through the code, but could not find out what exactly is responsible for it. It is possible that the display screen can’t switch pixels fast enough, meaning that it’s a hardware effect that can’t be addressed, but wildly moving windows around while the Rift was mirroring the desktop didn’t appear to blur as much. So we believe it’s an intentional effect, buried somewhere in the SDK code where we haven’t looked yet. It’s possible that there is a recursive low-pass filter “hidden” in the lens distortion correction shader, enabled by some other bit of code globally enabling alpha blending with constant opacity. Must investigate further. (By the way: major kudos on providing the SDK source.) Update: It appears I was wrong, and hunting for motion blur code in the SDK was a wild goose chase. I forgot, and that’s embarrassing, that the Z800’s display is based on OLED, with 100x-1000x faster response times than LCD. So the motion blur that surprised me might just be the nature of the LCD beast. But let me make this clear: while I noticed the motion blur, and, had it been intentional, would have dialed it back, it was in no way a showstopper.

But apart from that, the display felt “snappy.” I would say total latency is comparable to the HMD-based system I’m showing in my latest video, maybe a tad higher. But that could just be the motion blur. I won’t speculate more until I can run a fair test, but I will say that the motion prediction time delta in the tracking driver code appears to be set to 50ms.
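The prediction step mentioned above is conceptually simple: extrapolate the tracked orientation along the measured angular velocity by a fixed time delta. The sketch below is a hypothetical simplification (a single yaw angle instead of a proper quaternion orientation); only the 50 ms delta comes from the tracking driver code.

```python
# Hypothetical sketch of fixed-delta motion prediction: extrapolate the
# current orientation along the measured angular velocity. Real trackers
# do this with quaternions; a single yaw angle keeps the idea visible.
import math

def predict_yaw(yaw_rad, yaw_rate_rad_s, dt=0.050):
    """Linearly extrapolate a yaw angle dt seconds into the future."""
    return yaw_rad + yaw_rate_rad_s * dt

# Example: a head turning at 180 deg/s is predicted ~9 degrees ahead,
# which hides roughly 50 ms of end-to-end latency during smooth motion.
predicted = predict_yaw(0.0, math.radians(180.0))
```

The flip side of a fixed prediction delta is overshoot when the head motion reverses direction, which is one reason prediction alone can’t fully substitute for low latency.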

A word about the SDK in general. It contains everything needed to get software running on the Oculus Rift (that’s the plan, obviously), but I’m skeptical it will lead to software that’s portable to other display systems (obviously, that’s the plan, too). For example, the camera model embodied by the SDK does not generalize to projection-based environments. Doesn’t really surprise me, and doesn’t matter to me either — once the Rift works with Vrui I won’t look back — but it’s still noteworthy. Game developers who want to support the Rift and something else will have to work hard.

Aside from that, the SDK is decidedly low-level, and that’s not a surprise, either. The danger here is that each developer will have to roll their own interaction and navigation methods, and that can be a very bad thing. My canonical analogy is mouse look in desktop first-person games: imagine a world where half of all games move the view up when a player pushes the mouse forward, and the other half move the view down, with no way to customize that. Now imagine that a thousand times worse. I’m hoping that interaction standards will emerge very quickly, and the Unity3D binding might help with that.

Total system

On to the big question: will the Oculus Rift make you sick? Not sure. I did not get any eye strain from the display; that’s due to the excellent optics, and the seemingly good calibration. I did, however, get a pronounced feeling of dizziness from walking (or rather gliding) through the Tuscany demo. Whether that’s due to the lack of positional head tracking, the increased field of view (not larger than the CAVE’s, though), subtle miscalibration, the WASD control scheme, or motion blur, I cannot tell. But I have never gotten dizzy from my other environments, so I need to look into that once the Rift works in Vrui.

TL;DR

The Oculus Rift dev kit is very good hardware. It has great handling, excellent optics, a great field of view, an OK screen, good orientational head tracking, and a solid, albeit low-level, SDK. Once I can integrate it into my VR infrastructure, it will be a major improvement over the HMDs I already have.

Final observation: the Tuscany demo has lens flare. Very pronounced lens flare. Riddle me this: if the Oculus Rift simulates a naked-eye person walking through an environment, where does the lens flare come from? Fortunately it was easy to turn off via the Unity3D development environment. I didn’t know J.J. Abrams worked for Oculus. Or did someone get a lens flare plug-in for Christmas?

43 thoughts on “First impressions from the Oculus Rift dev kit”

  1. Superb review.

    Interesting take on the blur movement issue.

    If it is a shader effect, perhaps it is an unintentional one, and a different approach to the lens distortion may cure it.

    Have you tried using the Vireio driver or any of the standard 3D drivers with SBS output to compare motion blur? That should isolate whether it’s a post-processing effect or an inherent screen issue.

    Still waiting for mine to have a tinker.

    • There wasn’t enough time to look into it in detail. My hunch is that it’s intentional to lessen display judder, but that it’s a bit overdone (at least for my taste). Once I have the Rift working in my software (where I’ll do my own perspective projection and lens correction), I’ll know for sure.

      As an aside, there isn’t really an alternative to the lens correction method the SDK uses. The only “correct” approach is per-pixel ray tracing, but that’s still not fast enough today, and doesn’t mesh with the architectures of existing game engines.

  2. Very nice review! Your knowledge of the subject of VR in this context sets apart this review from almost all others. Useful. Informative. Clear.

  3. Except where he mentions an out-of-box experience with a dev kit: it has no out-of-box experience, as it is a dev kit. You shouldn’t even write about that; it’s like complaining that it doesn’t make juice…

    • I was afraid readers might misunderstand that paragraph as a criticism of the Rift. It’s not; it’s a cautionary tale for people who get the dev kit, plug it in, and put it on. Better make sure the software environment — particularly the Unity3D development environment — is working properly before you do that. Because I didn’t do that, literally the first impression I had was “this is the worst thing ever.” I immediately knew what was wrong, but others might not, or worse, assume that’s how it’s supposed to be. We didn’t see any warning anywhere that said “make sure the display is entirely full-screen, with no window decorations, before putting on the unit.”

      The jab about “out-of-box experience” is just snark, and I stand by it.

  4. Don’t get me started on lens flares in first-person games!

    Nice writeup of your first impressions though, I look forward to seeing what you do with it :)

    • My other HMDs run at 60 Hz too, and they don’t show the blur. They show a bit of judder, because my software doesn’t do any explicit temporal sampling, but I find (slight) judder less noticeable. I think it’s too early for me to really come to a conclusion. For now, the blur is just something I noticed, and found curious. It looked like intentional averaging over several past video frames.

  5. Good review!

    While I can’t say for sure, I have a hunch that much of the motion blur is caused by a lack of Response Time Compensation (RTC, or “overdrive”). PC monitors and TVs generally offer such a feature, but it seems to be far less common among mobile devices. Since LCDs without RTC can have grey-to-grey pixel switching times in the neighborhood of 25ms or more, you could imagine how big of an issue this could be when an entire frame lasts only 16ms.

    Eliminating erroneous horizontal tilt would probably require a more sophisticated motion tracking algorithm, as the accelerometer is used to determine orientation in the pitch and roll axes, and is therefore susceptible to errors due to acceleration forces not caused by gravity. Having a “tighter” fusion algorithm between the gyro and the accelerometer should be able to almost completely eliminate it (use of the magnetometer wouldn’t hurt either). I have little doubt this will get better with time. As for the LCD motion blur getting better with time, I’m not so sure. It could certainly get somewhat better, but after reading a transcript of Michael Abrash’s GDC talk, I can’t help but think LCD is a dead end barring some major breakthrough in the technology.

    edit: I just checked the datasheet for a Realtek display interface chip (the brand used in the Rift’s control box) and it appears to have a feature called “Dynamic Overshoot,” which I believe is their name for RTC. I wasn’t sure if RTC was performed by the interface chip, or by the driver chips on the panel itself. Even though the datasheet makes it sound like the former, I don’t see how that would be possible. Either way, I don’t believe that it’s being used.

    • Thank you, that’s excellent additional information! Serban just reported back after a much longer look at the SDK source, and says there’s no trace of temporal averaging. It’s starting to look like I was simply wrong about this.

      And I forgot a major fact: the Z800’s display engine is OLED, which has much faster (about 100x-1000x) response times than LCD.

      The relatively long switch times of LCDs are a major pain in the buns for my work. Initial 3D TVs had DLP projection engines, and there was almost no stereo crosstalk at all. Now that everyone is doing LCD, it’s much more noticeable — depending on the exact TV model, sometimes so much that approaches that work elsewhere stop working. Don’t get me wrong, for TVs LCD is better than DLP in every other respect, but I’m bummed that there is no “perfect” stereo TV system yet.

      I agree on the sensor driver. I believe it’s a software issue that will get better.

  6. I’m interested in whether the screen can be swapped for a 5″ screen, as it sounds like the big drawback is that the move up to the larger 7″ screen has increased the dreaded fly-screen effect. Does anyone know what sort of connection is used on the screen?

    • That would be weird, but who knows. Normally, the border around pixels is a manufacturing limit. If you make the pixels bigger, the border stays about the same, so relatively speaking, it gets smaller. In other words, I would expect a 7″ screen with 1280×800 pixels to have less screen door effect than a 5″ screen at 1280×800 pixels. But if the two screens have different manufacturers, or different processes, then all bets are off.
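      The argument above can be illustrated with a back-of-the-envelope fill-factor calculation. All the pitch and border values below are made-up illustration numbers, not actual panel specs; the point is only that a fixed border width hurts a small panel more than a large one at the same resolution.

```python
# Back-of-the-envelope screen-door illustration: if the black border
# around each pixel has a fixed manufacturing width, a physically larger
# pixel (same resolution, bigger panel) has a higher fill factor, i.e.
# less visible screen-door effect. All numbers here are invented.

def fill_factor(pixel_pitch_um, border_um):
    """Fraction of each pixel cell that actually emits light,
    assuming the border is lost along both axes."""
    lit = pixel_pitch_um - border_um
    return (lit / pixel_pitch_um) ** 2

# Same 1280x800 resolution on a ~5" vs a ~7" panel (pitch scales up,
# border width stays fixed): the larger panel wins.
small = fill_factor(pixel_pitch_um=85.0, border_um=10.0)
large = fill_factor(pixel_pitch_um=120.0, border_um=10.0)
```

      Of course, as noted above, different manufacturers or processes mean different border widths, and then all bets are off.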

  7. Excellent and helpful review – thank you. Hopefully you’ll be one of the first to get the release version too so we can see how that measures up and how many of the issues it addresses.
    For folk wanting to get a headset for first forays into VR, do you have a headset you’d recommend most out of the ones commercially available right now?

    • I’m not really a headset expert. The ones I have are not even in the same class as the Rift, so buying one of those now would be a waste. I think you can still get in on the Rift dev kits, and it’s probably the best option right now, as long as you’re aware of the issues it has.

  8. Pingback: Astounding Oculus Rift demo with Kinect, Razr Hydra, positional tracking

  9. Pingback: Early Oculus Rift Review from Experts - Road to VR

  10. When I looked at the sun in the Tuscany demo my first thought was that I can finally look at the sun without it burning my eyes!

  11. Great review. Appreciate the detail. I don’t remember seeing a “suggested improvements” section in the forum; it would be good to capture some of the ones you have suggested. I’ll post a link to this article on my blog later in the week – the other demo using the Kinect was awesome too. Completely separately, do you have any views on using the Hydra controller as a relatively cheap and easily accessible device for head position tracking? (like in the second video here: http://www.oculushut.com/blog/track-my-noggin-with-a-hydra-and-screw-the-gloves-real-men-use-bare-hands/)

    • The Hydra is a very good input device, if treated properly. As I mentioned in a previous post, the Hydra is locally, but not globally precise. There are significant non-linearities in its EM field that can be corrected in a laboratory setting, but not on a “normal” end user’s desk. As a result, it should be represented in applications as an indirect input device, meaning its virtual representation does not coincide with its physical position.

      Additionally, I noticed some orientation-dependent distortion that would cause problems for head tracking. When rotating a handle around a fixed point, the reported positions change significantly.

      The bottom line is that the Hydra is a good input device, but for head tracking, I believe an optical system would be superior.

  12. Pingback: vj news 06/04/2013 | vjunk

  13. Pingback: Oculus Rift first impressions (doc-ok.org) – bluntbody

  14. “The Oculus Rift dev kit has 640×800 pixels per eye (the consumer version is supposed to get 960×1080 pixels per eye)” >> This concerns me. 960×1080 for the consumer version? Isn’t that quite low-res still? Also, given that the width is smaller than the height… are we looking at a picture that’s tall and narrow?

    • It’s not exactly high-res, but I think it will be fine. The low resolution of the dev kit didn’t bother me very much while using it, and the consumer version will have more pixels per eye than my eMagin Z800 (800×600) and about the same number as my Sony HMZ-T1. Granted, both have much smaller fields of view, so their resolutions in terms of pixels per solid angle are higher.

      In terms of picture aspect ratio, the upper and lower borders of the screen are mapped somewhat outside the field of view. Overall, the Rift’s image appears circular, due to the strong lenses. At least for me, the display covered most of my field of view as long as I didn’t peek out of the corners of my eyes (which is discouraged by design, as the lenses will blur the image).

      I have tried a high-end HMD with a ridiculous number of pixels, and it turns out other factors are more important than raw pixel numbers. I think that if the Rift will end up with a problem at all, it will be on the software side (bad drivers or VR embeddings, low immersion) rather than the hardware side.

      • Great! That’s very encouraging. Thanks for explaining it. Many years ago I had the IO Systems PC3D. That was rated at 800×600. Old technology now. The LCD screens had such poor contrast that the image wasn’t good enough for most games. The field of view was also very small. Technology has moved on, so I look forward to the consumer version of the Oculus. I’d purchase the dev kit, but I don’t see much point if there’s going to be a long wait for delivery.

  15. Hi Dok,

    As this article is a bit old, I tried to find a way to PM you but didn’t see one, so I’ll add a comment here instead.

    I just got my Rift last week and have been struggling with vr/motion sickness. I have started reading medical articles on the web to see what could be done to minimize this problem. Just got some tablets from the Pharmacist, who didn’t blink when I said it was for a Virtual Reality headset.

    I would love to hear your thoughts and experience on the subject as you must have come across this in your work?

    Cheers
    Fred

    • I’m saying a bit more in the other article, but at the core there are three potential problems: latency, poor calibration, and lack of positional head tracking. Lack of head tracking means that the virtual world is affixed to your head. Even when using an environment where you’re sitting in a virtual cockpit and don’t really move around, like a driving or flight sim, you still end up moving your head to peek around, and not being able to do that is disturbing (I’ve recently tried). Latency is the bogeyman of VR, but I think bad calibration is at least as bad. In the Rift, the device is very well calibrated internally. But you still have to measure and input your precise interpupillary distance, or distances along your viewing direction will be distorted. When I started working seriously with the Rift, I ran with the default 2.5″ IPD, and quickly noticed that it was too low for me (I only noticed because I was looking for signs of a wrong IPD; otherwise I’d just have gotten sick), and after some experimentation I set it closer to 2.75″.

      Bottom line: if you work with the vanilla dev kit, and don’t happen to have a 2.5″ IPD, and tend to move your head around while looking, you’re probably going to get motion sickness.
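      A simplified geometric model shows why a wrong IPD distorts distances along the viewing direction: with an idealized parallel stereo rig, a point rendered at depth z with a configured IPD is perceived by a viewer with a different actual IPD at roughly z scaled by the ratio of the two. This is a rough illustration only, not the Rift SDK’s actual projection math.

```python
# Simplified stereo model of IPD miscalibration: screen disparity for a
# point at depth z is proportional to the configured (render) IPD, so a
# viewer with a different actual IPD reconstructs a scaled depth.

def perceived_depth(true_depth, render_ipd, viewer_ipd):
    """Approximate depth a viewer perceives when the content was
    rendered with the wrong interpupillary distance (same units for
    both IPDs; true_depth in any length unit)."""
    return true_depth * viewer_ipd / render_ipd

# Default 2.5" configured, actual ~2.75": depths read ~10% too far.
d = perceived_depth(true_depth=2.0, render_ipd=2.5, viewer_ipd=2.75)
print(d)  # -> 2.2, i.e. 2.2 m perceived instead of 2.0 m
```

      The ~10% scaling may not sound like much, but because it conflicts with all the other depth cues (head motion parallax, familiar object sizes), it is exactly the kind of subtle miscalibration that causes nausea without being consciously noticeable.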

  16. Pingback: Oculus Rift等まとめ | VRとかのニュース

  17. Pingback: Vrui on (in?) Oculus Rift | Doc-Ok.org

  18. Pingback: Happy birthday, doc-ok.org! | Doc-Ok.org

  19. Pingback: One of the finest #oculusrift  … | Bytewriter

  20. Pingback: A Closer Look at the Oculus Rift | Doc-Ok.org

  21. Pingback: A Positive Outcome of the Facebook Oculus Acquisition | Doc-Ok.org