I wrote about my first impressions of the Oculus Rift developer kit back in April, and since then I’ve been working (on and off) on getting it fully and natively supported in Vrui (see Figure 1 for proof that it works). Given that Vrui’s somewhat insane flexibility is a major point of pride for me, what was it that I actually had to create to support the Rift? Turns out, not all that much: a driver for the Rift’s built-in inertial tracking unit and a post-processing filter to correct for the Rift’s lens distortion were all it took (more on that later). So why did it take me this long? For one, I was mostly working on other things and only spent a few hours here and there, but more importantly, the Rift is not just a new head-mounted display (HMD), but a major shift in how HMDs are (or will be) used.
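To give a flavor of what the lens-distortion post-processing filter does, here is a minimal sketch of a radial polynomial barrel-distortion warp of the kind the Rift requires: each output pixel is pushed away from the lens center by a polynomial in the squared radius before the rendered image is sampled. This is an illustration only; the coefficient values are made up, and the real filter runs per-fragment in a GPU shader with per-eye calibrated parameters.

```python
def distort(x, y, cx, cy, k):
    """Map an on-screen point back into the pre-distortion render texture.

    (cx, cy) is the lens center; k holds four radial polynomial
    coefficients. All values here are illustrative, not the Rift's
    calibrated numbers.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy                      # squared distance from lens center
    scale = k[0] + k[1] * r2 + k[2] * r2 * r2 + k[3] * r2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

# With identity coefficients (1, 0, 0, 0) the warp is a no-op; a positive
# k[1] pushes points outward proportionally to their distance from center.
```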
In ye olden days, HMDs were extremely expensive and finicky devices. As a result, they were almost always found as components of larger, fully integrated display environments. Take the environment shown in this recent video of mine: the HMD (an eMagin Z800 Visor) is just a part of the system; the other big component is a high-end and carefully calibrated external tracking system with 6-DOF input devices (an InterSense IS-900 SimTracker). This means that, from a user interface point of view, HMDs could be treated exactly the same as CAVEs or other large-scale projection-based holographic displays: a large workspace in which the user can stand or walk, and directly interact with virtual 3D objects by touching them with input devices. The fact that users are wearing the screens on their faces is a mere technicality.
But the Rift is different, by virtue of being cheap and aimed at a mainstream market. My guess is that 90% of Rifts sold will end up being completely stand-alone additions to desktop computers; only 9% will have some form of 6-DOF input device (such as a Razer Hydra), and the leftover 1% will have something that could be considered an integrated environment with calibrated head and input device tracking.
The bottom line is that a majority of Rifts will be used with only mice and keyboards as input devices, and that right there is a major challenge for a VR development toolkit. Aiming for the 1% is trivial, aiming for the 10% is pretty straightforward (as evidenced by this series of videos; see below for the first one), but the other 90% are what’s kept me from pushing out a new version of Vrui with Rift support.
The main reason for Vrui’s portability between vastly different display environment types (from laptop to CAVE) is that its user interface layer is implemented as a loose collection of atomic and orthogonal components. There is no “CAVE mode” or “desktop mode” in Vrui; what there is is a collection of tiny components that, when connected in just the right way, create something that feels exactly as if Vrui were written specifically for a desktop or a CAVE or whatever. The huge benefits of this architecture are that the overall number of components is minimized (there is no “combinatorial explosion”), and if a new type of environment comes along, there is a very good chance that a native-feeling “mode” for this environment can be assembled by creatively rearranging existing pieces. And in the past, that has worked out swimmingly.
But what I did not expect was that a fully 3D display system (the final consumer version, if it does indeed have built-in positional head tracking, will be truly holographic) would be used by a majority of users with the most two-dimensional of input devices: keyboard and mouse. And because I did not think of that, certain assumptions snuck into the design of, primarily, the mouse interaction layer. Deep down in the software, the mouse input device adapter assumes that a mouse is tracked inside a (2D) display window, and that it interacts with objects and GUI widgets in the 3D plane of a screen associated with that display window. Fortunately, this assumption still held for stereoscopic screens, such as 3D TVs without tracking systems. But it broke down immediately on HMDs, where two screens are mapped to a single window (like the Rift’s split screen), and where those screens are very close to the viewer’s eyes. As a result, Vrui’s existing mouse interface layer, applied to a Rift, projects the mouse cursor right into the viewer’s eye, and menus and other GUI components show up so close to the viewer that it’s impossible to focus on them.
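To make the broken assumption concrete, here is a simplified sketch (not Vrui’s actual code; all names are illustrative) of what a screen-based mouse adapter has to do: normalize the mouse’s pixel position inside the window, map it onto a screen rectangle that lives somewhere in 3D space, and shoot a picking ray from the eye through the resulting point. When the “screen” is a lens image an inch from the eye, that same construction places the cursor essentially inside the eye.

```python
import math

def mouse_ray(mx, my, win_w, win_h, screen_origin, screen_x, screen_y, eye):
    """Project a 2D mouse position onto a 3D screen rectangle and return
    the point on the screen plus a unit ray from the eye through it.

    screen_origin: lower-left corner of the screen in 3D space.
    screen_x, screen_y: edge vectors spanning the screen rectangle.
    This is a toy model of a mouse input adapter, not Vrui's code.
    """
    u = mx / win_w                 # normalize to [0, 1]
    v = 1.0 - my / win_h           # flip y: pixel origin is top-left
    point = tuple(o + u * sx + v * sy
                  for o, sx, sy in zip(screen_origin, screen_x, screen_y))
    d = tuple(p - e for p, e in zip(point, eye))
    n = math.sqrt(sum(c * c for c in d))
    return point, tuple(c / n for c in d)
```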
Now, this would be fairly easy to work around with a Rift-specific mouse input device adapter, but then next year there might be another HMD that’s slightly different, and another single-purpose hack, and that way insanity lies. So what I’ve been trying to do is break apart the peculiarities of the Rift, or rather the types of display environments that it engenders, into a new set of atomic and orthogonal components, which will then support many types of similar environments in the future. But the truth is, it’s tricky. So while I have the Rift working from a technical perspective, I don’t have a working mouse and keyboard interface for it yet. And I don’t want to advertise “Full Oculus Rift Support!” until I do. Vrui is not just a low-level SDK that exposes hardware and graphics contexts and leaves the rest to developers; it is a unified development toolkit that supports creating highly interactive applications without having to worry about specific target environments. If it won’t work 100% transparently with Rift+Mouse, it won’t work with Rift at all.
So then… what’s the current state?
Like I said, Vrui supports the Rift for those 10% of users who have some form of 3D tracking system. It looks like the emerging consumer VR tracking standard is the Razer Hydra, and fortunately, Vrui has had very good drivers for the Hydra since 2011. Orientational head tracking of the Rift is handled by the new native Rift tracking driver module for Vrui’s device driver, VRDeviceDaemon. There is some configuration involved in getting the Hydra and the Rift into the same coordinate system, but it’s quite straightforward, and most of the configuration for “Rift + Hydra mode” comes from pre-fab configuration files. Meaning, end-user setup is quite trivial.
For those 1% of users who want to use a Rift as a drop-in HMD replacement for an existing fully integrated display environment, it’s trivial. Just stick a tracker to the headset, measure the offset, and you’re good to go.
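The “measure the offset” step boils down to a rigid-body transformation: the measured tracker-to-head offset, expressed in the tracker’s local frame, gets rotated by the tracker’s current orientation and added to its position on every frame. A rough sketch, with hypothetical names (Vrui’s actual transformation classes work differently):

```python
def apply_offset(tracker_pos, tracker_rot, offset):
    """Compute the head position from a tracker rigidly attached to an HMD.

    tracker_pos: tracker position in world space.
    tracker_rot: tracker orientation as a 3x3 rotation matrix (row tuples).
    offset: measured vector from tracker to head point, in the tracker's
    local frame. Names and conventions are illustrative only.
    """
    return tuple(
        p + sum(r * o for r, o in zip(row, offset))   # p + R * offset
        for p, row in zip(tracker_pos, tracker_rot)
    )
```

With an identity orientation this just adds the measured offset to the tracker position; once the head turns, the same local offset sweeps around with it.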
Proper mouse support and GUI layout management is still some ways off. It will be a relatively small matter of programming, but unless I have at least some idea of what the final architecture will be like, I won’t start hacking things up left and right.