I’ve been getting a lot of questions about using the Rift DK2 under Linux with Vrui recently, so I figured I’d post a little progress report here instead of answering them individually.
The good news is that I have the DK2 working to the level of the DK1, i.e., I have orientational tracking, lens distortion correction, and chromatic aberration correction. I also have low persistence, but that came for free.
What I don’t have, and most probably won’t have until an official Linux SDK drops, is positional tracking. In order to replicate the work a team of computer vision experts at Oculus have been doing for the last year or so, I’d need a few clones and a time machine. That said, I am working on combining the DK1/DK2’s built-in IMU with other external tracking systems, such as Intersense IS-900 or NaturalPoint OptiTrack. That’s a much easier (but still tricky) problem, and would allow using the Rift as a headset for large-area VR. Probably not interesting for home users, but being able to walk around freely in an 18′×10′×7′ volume opens up entirely different VR applications.
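To give a rough idea of what that combination might look like, here is a minimal sketch (not actual Vrui code; all names and the blend factor are made up for illustration): the external tracker supplies position, the HMD's IMU supplies low-latency orientation, and the IMU's slow drift is nudged toward the external estimate a little bit every frame.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

// Normalized linear interpolation between two unit quaternions; good enough
// for the small per-frame correction steps used here.
Quat nlerp(Quat a, const Quat& b, float t)
{
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    float s = dot < 0.0f ? -1.0f : 1.0f;              // take the short way around
    a.w += t*(s*b.w - a.w); a.x += t*(s*b.x - a.x);
    a.y += t*(s*b.y - a.y); a.z += t*(s*b.z - a.z);
    float len = std::sqrt(a.w*a.w + a.x*a.x + a.y*a.y + a.z*a.z);
    return Quat{ a.w/len, a.x/len, a.y/len, a.z/len };
}

struct Pose { Vec3 position; Quat orientation; };

// Position comes from the external tracker; orientation comes from the IMU,
// nudged toward the external estimate each frame to remove slow drift.
Pose fuse(const Vec3& externalPos, const Quat& externalOrient,
          const Quat& imuOrient, float driftCorrection = 0.01f)
{
    return Pose{ externalPos, nlerp(imuOrient, externalOrient, driftCorrection) };
}
```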
I’m currently working hard on the next release of the Vrui toolkit (version 3.2-001), which will have at least the level of DK2 support that I have internally now (combined tracking might or might not make it, but that can already be faked, see 3D Video Capture With Three Kinects).
The reason why I’m not releasing right now is that I’m still trying to optimize the “user experience” by integrating the ideas I described in A Trip Down the Graphics Pipeline. The idea is that plugging in a Rift and starting a Vrui application should just work. I have most of that going; the only issue is telling OpenGL to sync to the vertical retrace on the Rift’s display, no matter what. Right now that can only be done via an environment variable, and I’m looking for the right place in Vrui to set that variable from inside a program. It’s a work-around until Nvidia expose that functionality via their NV-CONTROL X extension, or, even better, via a GLX extension (are you listening, Nvidia?). Or, why not change the implementation of GLX_SGI_video_sync, which is already bound to a display and drawable, such that it always syncs to the first video controller servicing that drawable? Wouldn’t even require a specification change. Just an idea.
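For the curious, the work-around boils down to something like the following sketch: set the Nvidia driver's sync-to-vblank environment variables from inside the application, before the OpenGL context is created. The display device name ("DFP-1") is only an example; the real name depends on how the Rift's display shows up in the X configuration.

```cpp
#include <cstdlib>

// Force the Nvidia driver to sync buffer swaps to the given display device's
// vertical retrace. Must run before the GLX context is created, because the
// driver reads these variables when it initializes.
void forceVsyncToHmd(const char* hmdDisplayDevice)
{
    setenv("__GL_SYNC_TO_VBLANK", "1", 1);                    // always sync to retrace
    setenv("__GL_SYNC_DISPLAY_DEVICE", hmdDisplayDevice, 1);  // ...of this video controller
}

// Example use, assuming the Rift shows up as "DFP-1":
//   forceVsyncToHmd("DFP-1");
```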
And last but not least, once I got the DK2 and its low-persistence screen working, I realized how cavalier I’ve been about low-level timing issues in Vrui. With screen-based VR and LCD-based HMDs it has simply never been an issue before, but now it’s pretty obvious. Good thing is, I think I have a handle on it.
In summary: it’ll be a little bit longer, but I’m on it. Will I be able to release before Oculus does their Linux SDK? Sure hope so! And just in case you think I’ve been sitting on my hands for the last six months: there are already about 300 large and small changes between 3.1-002 and 3.2-001.
And here is today’s unrelated picture:
Do you have access to the video feed from the DK2?
Are the LEDs always on, or do they have to be turned on by software? And are they multiplexed like the old P5 glove’s LEDs were, or is the distinction between which hotspot is which made geometrically?
How hard would it be to just use another camera and have your software look for the pattern of dots with any IR camera, using the orientation data to help narrow down the possibilities for which way it’s oriented? Or even just take the center of the blob/bounding box of all LEDs for 2D position, and use its size, taking into consideration the expected proportions calculated from the orientation, for depth? If the issue is latency due to the overhead of the pattern recognition algorithm, you could still use it to remove the drift produced by the IMU-based dead reckoning, no?
Supposedly the tracking cam has a standard webcam interface, but so far I’ve only received bogus data.
They need to be turned on, and I don’t know how to do that yet. They can be set up for arbitrary patterns, and they are strobed in sync with the camera (via the extra sync cable) to reduce motion blur. This is all done via the DK2’s HID protocol, but I’ve only made a little progress reverse-engineering that so far.
Hard. The very basics are straightforward, but getting it to run with the required latency and robustness, and to be smooth in all cases, is a major undertaking. Look at how long it’s taken Oculus, and who they have working on it. 🙂 The problem with HMD head tracking is that if you get it slightly wrong, users will get sick.
You are right, though, in that having orientational tracking from the HMD makes camera tracking much easier, because you have a very good starting point for estimating orientation and for matching observed LED blobs to 3D LED positions. It’s a trick I used in my old Wiimote tracking driver, and the same thing is making it easier to combine external tracking data from OptiTrack, which isn’t so great with orientation, with the Rift’s built-in IMU.
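To make that trick a bit more concrete, here is a bare-bones sketch of what the orientation prior buys you: rotate the known 3D LED model by the IMU’s orientation estimate, project the LEDs through a pinhole camera model, and assign each one to its nearest observed blob. The names, camera model, and nearest-neighbor matching are illustrative assumptions, not the actual tracking code.

```cpp
#include <vector>
#include <cstddef>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Apply a 3x3 rotation matrix (row-major, from the IMU orientation estimate).
Vec3 rotate(const float R[9], const Vec3& p)
{
    return Vec3{ R[0]*p.x + R[1]*p.y + R[2]*p.z,
                 R[3]*p.x + R[4]*p.y + R[5]*p.z,
                 R[6]*p.x + R[7]*p.y + R[8]*p.z };
}

// Simple pinhole projection with focal length f (camera looks down +z).
Vec2 project(const Vec3& p, float f)
{
    return Vec2{ f*p.x/p.z, f*p.y/p.z };
}

// For each model LED, find the index of the closest observed blob; with a good
// orientation prior these correspondences are usually unambiguous.
std::vector<std::size_t> matchLeds(const std::vector<Vec3>& ledModel,
                                   const float imuRotation[9],
                                   const Vec3& positionGuess,
                                   const std::vector<Vec2>& blobs,
                                   float focalLength)
{
    std::vector<std::size_t> matches(ledModel.size());
    for(std::size_t i = 0; i < ledModel.size(); ++i)
    {
        Vec3 p = rotate(imuRotation, ledModel[i]);
        p.x += positionGuess.x; p.y += positionGuess.y; p.z += positionGuess.z;
        Vec2 proj = project(p, focalLength);
        float best = 1e30f;
        for(std::size_t j = 0; j < blobs.size(); ++j)
        {
            float dx = blobs[j].x - proj.x, dy = blobs[j].y - proj.y;
            float d2 = dx*dx + dy*dy;
            if(d2 < best) { best = d2; matches[i] = j; }
        }
    }
    return matches;
}
```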
Hooray!
It shouldn’t actually be entirely impossible to raise some funds for the cloning operation and a tie masheen.
Amazing!!!