Vrui on (in?) Oculus Rift

I wrote about my first impressions of the Oculus Rift developer kit back in April, and since then I’ve been working (on and off) on getting it fully and natively supported in Vrui (see Figure 1 for proof that it works). Given that Vrui’s somewhat insane flexibility is a major point of pride for me, what was it that I actually had to create to support the Rift? Turns out, not all that much: a driver for the Rift’s built-in inertial tracking unit and a post-processing filter to correct for the Rift’s lens distortion were all it took (more on that later). So why did it take me this long? For one, I was mostly working on other things and only spent a few hours here and there, but more importantly, the Rift is not just a new head-mounted display (HMD), but a major shift in how HMDs are (or will be) used.

Figure 1: The trademark “double-barrel” Oculus Rift screenshot, this time generated by a Vrui application.
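
As an aside, the lens correction itself is conceptually simple: a post-processing pass samples the rendered image through a radial “barrel” distortion function centered on each lens. Here is a minimal C++ sketch of that kind of mapping; the polynomial model and the coefficients k1 and k2 are illustrative, and not the exact formula or values Vrui uses:

    // Illustrative radial distortion model: maps an undistorted position in
    // normalized viewport coordinates to the position from which the
    // pre-distorted image should be sampled.
    struct LensCorrector {
        float centerX, centerY; // Lens center in normalized viewport coordinates
        float k1, k2;           // Made-up radial distortion coefficients

        void distort(float x, float y, float& outX, float& outY) const {
            float dx = x - centerX;
            float dy = y - centerY;
            float r2 = dx*dx + dy*dy;              // Squared distance from lens center
            float scale = 1.0f + k1*r2 + k2*r2*r2; // Polynomial radial scale factor
            outX = centerX + dx*scale;
            outY = centerY + dy*scale;
        }
    };

On the Rift, a correction of this kind has to be applied once per eye, since each half of the split screen sits behind its own lens with its own center.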

In ye olden days, HMDs were extremely expensive and finicky devices. As a result, they were almost always found as components of larger, fully integrated display environments. Take the environment shown in this recent video of mine: the HMD (an eMagin Z800 Visor) is just one part of the system; the other big component is a high-end and carefully calibrated external tracking system with 6-DOF input devices (an InterSense IS-900 SimTracker). This means that, from a user interface point of view, HMDs could be treated exactly the same as CAVEs or other large-scale projection-based holographic displays: a large workspace in which the user can stand or walk, and directly interact with virtual 3D objects by touching them with input devices. The fact that users are wearing the screens on their faces is a mere technicality.

But the Rift is different, by virtue of being cheap and aimed at a mainstream market. My guess is that 90% of Rifts sold will end up being completely stand-alone additions to desktop computers; only 9% will have some form of 6-DOF input device (such as a Razer Hydra), and the leftover 1% will have something that could be considered an integrated environment with calibrated head and input device tracking.

The bottom line is that a majority of Rifts will be used with only mice and keyboards as input devices, and that right there is a major challenge for a VR development toolkit. Aiming for the 1% is trivial, and aiming for the 10% is pretty straightforward (as evidenced in this series of videos; see below for the first one), but the other 90% are what’s kept me from pushing out a new version of Vrui with Rift support.

The main reason for Vrui’s portability between vastly different display environment types (from laptop to CAVE) is that its user interface layer is implemented as a loose collection of atomic and orthogonal components. There is no “CAVE mode” or “desktop mode” in Vrui; what there is, instead, is a collection of tiny components that, when connected in just the right way, create something that feels exactly as if Vrui had been written specifically for a desktop or a CAVE or whatever. The huge benefits of this architecture are that the overall number of components is minimized (there is no “combinatorial explosion”), and that, if a new type of environment comes along, there is a very good chance that a native-feeling “mode” for it can be assembled by creatively rearranging existing pieces. And in the past, that has worked out swimmingly.
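
To make that concrete, here is a schematic sketch of how the same components get rearranged per environment in a configuration file. The section and tag names are illustrative, not necessarily Vrui’s actual configuration vocabulary:

    section Desktop
        # One fixed viewer, one screen, one window, one mouse adapter:
        # the result feels like a native desktop application
        viewerNames (Viewer)
        screenNames (Screen)
        windowNames (Window)
        inputDeviceAdapterNames (MouseAdapter)
    endsection

    section CAVE
        # The same component types in a different arrangement: one
        # head-tracked viewer, four screens, and a 6-DOF device adapter
        viewerNames (HeadTrackedViewer)
        screenNames (FrontWall, LeftWall, RightWall, Floor)
        windowNames (FrontWindow, LeftWindow, RightWindow, FloorWindow)
        inputDeviceAdapterNames (DeviceDaemonAdapter)
    endsection

Neither “mode” exists as code; each is just a different wiring of viewers, screens, windows, and input device adapters.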

But what I did not expect was that a fully 3D display system (the final consumer version, if it does indeed have built-in positional head tracking, will be truly holographic) would be used by a majority of users with the most two-dimensional of input devices, keyboard and mouse. And because I did not think of that, certain assumptions snuck into the design of the mouse interaction layer in particular. Deep down in the software, the mouse input device adapter assumes that a mouse is tracked inside a (2D) display window, and that it interacts with objects and GUI widgets in the 3D plane of a screen that is associated with that display window. Fortunately, this assumption still held for stereoscopic screens, such as 3D TVs without tracking systems. But it broke down immediately when it encountered an HMD, where two screens are mapped to a single window, like the split screen in the Rift, and where the screens are very close to the viewer’s eyes. As a result, Vrui’s existing mouse interface layer, applied to a Rift, projects its mouse cursor right into the viewer’s eye, and menus and other GUI components show up so close to the viewer that it’s impossible to focus on them.
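
In code, the baked-in assumption looks roughly like the following sketch; the types and names here are hypothetical stand-ins, not Vrui’s actual API:

    // Hypothetical sketch of the mouse adapter's core assumption: a 2D
    // window position is mapped onto the 3D rectangle of the screen
    // associated with that window, and all interaction happens in that
    // screen's plane.
    struct Point { float x, y, z; };

    struct Screen {           // A screen as a 3D rectangle in physical space
        Point origin;         // Lower-left corner
        Point right, up;      // Unit vectors spanning the screen plane
        float width, height;  // Physical extents
    };

    Point windowToScreen(const Screen& s, float winX, float winY, int winW, int winH) {
        // Normalize window coordinates to [0, 1] (window y grows downward)
        float sx = winX / float(winW);
        float sy = 1.0f - winY / float(winH);

        // The resulting point lies in the screen's 3D plane. On a desktop
        // that plane is a monitor at arm's length; on an HMD it sits right
        // in front of the viewer's eye, which is where this scheme breaks.
        Point p;
        p.x = s.origin.x + s.right.x*sx*s.width + s.up.x*sy*s.height;
        p.y = s.origin.y + s.right.y*sx*s.width + s.up.y*sy*s.height;
        p.z = s.origin.z + s.right.z*sx*s.width + s.up.z*sy*s.height;
        return p;
    }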

Now, this would be fairly easy to work around with a Rift-specific mouse input device adapter, but then next year there might be another HMD that’s slightly different, and another single-purpose hack, and that way insanity lies. So what I’ve been trying to do is break apart the peculiarities of the Rift, or rather the types of display environments that it engenders, into a new set of atomic and orthogonal components, which will then support many types of similar environments in the future. But the truth is, it’s tricky. So while I have the Rift working from a technical perspective, I don’t have a working mouse and keyboard interface for it yet. And I don’t want to advertise “Full Oculus Rift Support!” until I do. Vrui is not just a low-level SDK that exposes hardware and graphics contexts and leaves the rest to developers; it is a unified development toolkit that supports creating highly interactive applications without having to worry about specific target environments. If it won’t work 100% transparently with Rift+Mouse, it won’t work with Rift at all.

So then… what’s the current state?

Like I said, Vrui supports the Rift for those 10% of users who have some form of 3D tracking system. It looks like the emerging consumer VR tracking standard is the Razer Hydra, and fortunately, Vrui has had very good drivers for the Hydra since 2011. Orientational head tracking of the Rift is handled by the new native Rift tracking driver module for Vrui’s device driver, VRDeviceDaemon. There is some configuration involved in getting the Hydra and the Rift into the same coordinate system, but it’s quite straightforward, and the majority of the setup for “Rift + Hydra mode” is handled by pre-fab configuration files. Meaning, end-user setup is quite trivial.
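
Schematically, the shared coordinate system amounts to giving each device a fixed post-transformation into the same physical frame in the device daemon’s configuration. The tags and values below are illustrative, not necessarily VRDeviceDaemon’s actual vocabulary:

    section Hydra
        deviceType RazerHydra
        # Illustrative: fixed transformation from the Hydra base station's
        # native frame into the shared physical coordinate system
        trackerPostTransformation translate (0.0, -12.0, 36.0)
    endsection

    section Rift
        deviceType OculusRift
        # The Rift's IMU only reports orientation; align its axes with the
        # same physical frame so head and hands agree
        trackerPostTransformation rotate (0.0, 0.0, 1.0), 0.0
    endsection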

For those 1% of users who want to use a Rift as a drop-in HMD replacement for an existing fully integrated display environment, it’s trivial. Just stick a tracker to the headset, measure the offset, and you’re good to go.
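
For illustration, “measure the offset” boils down to composing the tracker’s reported pose with a fixed, once-measured tracker-to-headset offset. A minimal C++ sketch with hypothetical types, not Vrui’s actual tracking classes:

    struct Vec { float x, y, z; };
    struct Quat { float w, x, y, z; }; // Unit quaternion

    // Rotate vector v by unit quaternion q (computes q * v * q^-1)
    Vec rotate(const Quat& q, const Vec& v) {
        float tx = 2.0f*(q.y*v.z - q.z*v.y); // t = 2 * cross(q.xyz, v)
        float ty = 2.0f*(q.z*v.x - q.x*v.z);
        float tz = 2.0f*(q.x*v.y - q.y*v.x);
        return Vec{v.x + q.w*tx + (q.y*tz - q.z*ty), // v + w*t + cross(q.xyz, t)
                   v.y + q.w*ty + (q.z*tx - q.x*tz),
                   v.z + q.w*tz + (q.x*ty - q.y*tx)};
    }

    // Headset position = tracker position + measured offset, rotated into
    // the tracker's current frame; the offset is measured once, e.g. with a ruler
    Vec headsetPosition(const Vec& trackerPos, const Quat& trackerOrient, const Vec& offset) {
        Vec r = rotate(trackerOrient, offset);
        return Vec{trackerPos.x + r.x, trackerPos.y + r.y, trackerPos.z + r.z};
    }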

Proper mouse support and GUI layout management is still some ways off. It will be a relatively small matter of programming, but unless I have at least some idea of what the final architecture will be like, I won’t start hacking things up left and right.

12 thoughts on “Vrui on (in?) Oculus Rift”

  1. Somewhat off topic: Perhaps it is worth considering PlayStation Move controllers, which use a high-speed camera and lights for absolute positioning. They are cheap and abundant, and should remedy many of the shortcomings of the Hydra, and perhaps function as a cheap way to implement accurate positional head tracking.

    • I give you one guess what I’m building right now!

      (Hint: it’s an LED tracking rig to clip to the Rift’s front panel, to be tracked by a PlayStation Eye camera.)

        • Might it be possible to use the LED tracking system to correct for the Hydra calibration as well? There’s a point on the side of each controller between thumb and forefinger where a marker could be placed without creating too much of an issue for most gestures. I wonder if you could calibrate for the local magnetic field and then remove the optical trackers and retain relative accuracy. Looking forward to the next release!

          • Yes, that can be done in principle. You either need two (calibrated) cameras to track a single LED in 3D, or one camera and two LEDs plus an orientation tracker, or four LEDs by themselves.
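
            For the two-camera case the math is plain triangulation: the LED sits where the two viewing rays cross, or, since measured rays never cross exactly, at the midpoint of their closest approach. A minimal C++ sketch of that step, not the actual tracking code:

                struct Vec { float x, y, z; };

                static float dot(const Vec& a, const Vec& b) {
                    return a.x*b.x + a.y*b.y + a.z*b.z;
                }

                // Midpoint of closest approach between rays p1+s*d1 and p2+t*d2;
                // each ray runs from a camera's optical center through the LED's image
                Vec triangulate(const Vec& p1, const Vec& d1, const Vec& p2, const Vec& d2) {
                    Vec w = {p1.x - p2.x, p1.y - p2.y, p1.z - p2.z};
                    float a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
                    float d = dot(d1, w), e = dot(d2, w);
                    float den = a*c - b*b; // Near zero if the rays are (almost) parallel
                    float s = (b*e - c*d) / den;
                    float t = (a*e - b*d) / den;
                    Vec q1 = {p1.x + s*d1.x, p1.y + s*d1.y, p1.z + s*d1.z};
                    Vec q2 = {p2.x + t*d2.x, p2.y + t*d2.y, p2.z + t*d2.z};
                    return Vec{(q1.x + q2.x)*0.5f, (q1.y + q2.y)*0.5f, (q1.z + q2.z)*0.5f};
                }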

  2. Pingback: Gemischte Links 6.8.2013 | 3D/VR/AR

  3. PLEASE PLEASE PLEASE give us some instructions and/or sample code!

    I’m guessing most people, at this point, just want to load a few cool models and walk around in them, like in your Doom 3 video.

    Forget about fixing the input device problem for now. Devs will find a way.
    (I think you mentioned somewhere that the input libraries were really modular, right? In that case I’m sure desperate devs will find a way to hook up cheap wiimotes/ps3moves/smart phone imu sensor apps/web cams as ghetto input devices.)

    I’ve downloaded and compiled the Vrui and LiDAR tools. I’m looking at the VRDeviceDaemon code. How did you get it working with the Oculus sensor drivers?

    Where in the code should I look for the rendering settings (for the two viewpoints and the barrel distortion)?

    • Vrui’s current public release (2.7-001) doesn’t have support for the Rift’s IMU, or for lens distortion correction. I have a packaged 2.8-001 that has both, but I won’t release it — I completely changed the lens correction code again for 2.9-001 (which is currently cooking) so that 2.8’s configuration options are incompatible, and also improved the Rift’s IMU driver.

      The driver, by the way, talks to the IMU directly via USB, so the Oculus SDK is not involved at all.

      Vrui-2.7-001 is easy to set up for split-screen stereo rendering, but without the lens correction, it’s not useful for the Rift.

      Let’s make a deal. I have to make some fixes to get mouse support for the Rift into a state where 3rd party devs could reasonably have a whack at it, and I just found some regressions in existing apps caused by the lens correction shader. Let me fix those things, call it beta, and release it. I promise it won’t be long.

  4. Pingback: Here’s what the immersive, 3D computer interface of the future will feel like – Quartz
