This is a post about Vrui

I just released version 3.0 of the Vrui VR toolkit. One of the major new features is native support for the Oculus Rift head-mounted display, including its low-latency inertial 3-DOF (orientation-only) tracker and post-rendering lens distortion correction. So I thought it’s time for the first (really?) Vrui post in this venue.

What is Vrui, and why should I care?

Glad you’re asking. In a nutshell, Vrui (pronounced to start with vroom, and rhyme with gooey) is a high-level toolkit to develop highly interactive applications aimed at holographic (or fully-immersive, or VR, or whatever you want to call them) display environments. A large selection of videos showing many Vrui applications running in a wide variety of environments can be found on my YouTube channel. To you as a developer, this means you write your application once, and users can run it in any kind of environment without you having to worry about it. If new input or output hardware comes along, it’s Vrui’s responsibility to support it, not yours.
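For a concrete taste of what “write once, run anywhere” means here, below is a minimal sketch of a Vrui application, modeled loosely on the VruiDemo example program that ships with the toolkit. The class name is mine, and constructor and method signatures have shifted between Vrui versions, so treat this as illustrative rather than copy-paste-ready:

    #include <GL/gl.h>
    #include <Vrui/Application.h>

    // Minimal Vrui application: derive from Vrui::Application and override
    // display(). Vrui decides how and where the scene is rendered: a mono
    // desktop window, a stereo HMD with distortion correction, or a CAVE.
    class HelloVrui : public Vrui::Application
    {
    public:
        HelloVrui(int& argc, char**& argv)
            : Vrui::Application(argc, argv)
        {
        }

        virtual void display(GLContextData& contextData) const
        {
            // Draw a wireframe triangle in navigational space; this exact
            // code runs unchanged in any display environment.
            glBegin(GL_LINE_LOOP);
            glVertex3f(0.0f, 0.0f, 0.0f);
            glVertex3f(1.0f, 0.0f, 0.0f);
            glVertex3f(0.0f, 1.0f, 0.0f);
            glEnd();
        }
    };

    int main(int argc, char* argv[])
    {
        HelloVrui app(argc, argv);
        app.run();
        return 0;
    }

Note what is absent: no window creation, no stereo setup, no tracker polling. All of that comes from Vrui’s per-site configuration, which is exactly the point.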


Vrui on (in?) Oculus Rift

I wrote about my first impressions of the Oculus Rift developer kit back in April, and since then I’ve been working (on and off) on getting it fully and natively supported in Vrui (see Figure 1 for proof that it works). Given that Vrui’s somewhat insane flexibility is a major point of pride for me, what was it that I actually had to create to support the Rift? Turns out, not all that much: a driver for the Rift’s built-in inertial tracking unit and a post-processing filter to correct for the Rift’s lens distortion were all it took (more on that later). So why did it take me this long? For one, I was mostly working on other things and only spent a few hours here and there, but more importantly, the Rift is not just a new head-mounted display (HMD), but a major shift in how HMDs are (or will be) used.

Figure 1: The trademark “double-barrel” Oculus Rift screenshot, this time generated by a Vrui application.
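For the technically inclined, a quick note on what that post-rendering correction does: the Rift’s lenses introduce pincushion distortion, so each eye’s rendered image is warped with an inverse barrel distortion, scaling every pixel’s offset from the lens center by a radial polynomial. Here is a minimal sketch of the idea; the coefficient values are illustrative placeholders, not Vrui’s or Oculus’ actual calibration data:

    // Radially warp a normalized image coordinate around the lens center
    // to counteract the pincushion distortion of the Rift's lenses.
    // k0..k3 are per-device calibration coefficients (placeholders here).
    struct Vec2 { float x, y; };

    Vec2 barrelDistort(Vec2 t, Vec2 lensCenter)
    {
        const float k0 = 1.0f, k1 = 0.22f, k2 = 0.24f, k3 = 0.0f;
        float dx = t.x - lensCenter.x;
        float dy = t.y - lensCenter.y;
        float r2 = dx*dx + dy*dy;                   // squared radius
        float s = k0 + r2*(k1 + r2*(k2 + r2*k3));   // radial scale factor
        return Vec2{lensCenter.x + dx*s, lensCenter.y + dy*s};
    }

In practice a warp like this runs as a fragment shader over the rendered frame, but the math is the same.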


The Kinect 2.0

Details about the next version of Microsoft’s Kinect, to be bundled with the upcoming Xbox One, are slowly emerging. After an initial leak of preliminary specifications on February 20th, 2013, some official data are finally to be had. This article about the upcoming next Kinect-for-Windows mentions “Microsoft’s proprietary Time-of-Flight technology,” which is an entirely different method of sensing depth than the current Kinect’s structured-light approach. That’s kind of a big deal.

Figure 1: The Xbox One. The box on top, the one with the lens, is probably the Kinect2. They should have gone with a red, glowing lens.
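For context on why that is a big deal: the current Kinect projects a fixed infrared dot pattern and triangulates depth from how that pattern shifts between projector and camera, while a continuous-wave time-of-flight camera modulates its infrared light source and measures, per pixel, the phase delay of the returning signal. A back-of-the-envelope sketch of the time-of-flight principle (my notation, not anything from Microsoft’s spec sheet):

    // Continuous-wave time-of-flight: light modulated at frequency f
    // returns with phase shift phi; depth is half the round-trip distance.
    double tofDepth(double phi /* radians */, double f /* Hz */)
    {
        const double c = 299792458.0;                // speed of light, m/s
        const double PI = 3.14159265358979323846;
        return (c * phi) / (4.0 * PI * f);           // d = c*phi / (4*pi*f)
    }

For example, a phase shift of pi/2 at 30 MHz modulation corresponds to a depth of about 1.25 m, and depth resolution hinges on how precisely that phase can be measured.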


On the road for VR: zSpace developers conference

I went to zCon 2013, the zSpace developers conference, held at the Computer History Museum in Mountain View yesterday and today. As I mentioned in my previous post about the zSpace holographic display, my interest in it is as an alternative to our current line of low-cost holographic displays, which require assembly and careful calibration by the end user before they can be used. The zSpace, on the other hand, is completely plug&play: its optical trackers (more on them below) are integrated into the display screen itself, so they can be calibrated at the factory and work out of the box.

Figure 1: The zSpace holographic display, and what it would really look like when seen from this point of view.

So I drove around the bay to get a close look at the zSpace, to determine its viability for my purpose. Bottom line, it will work (with some issues, more on that below). My primary concerns were threefold: head tracking precision and latency, stylus tracking precision and latency, and stereo quality (i.e., amount of crosstalk between the eyes).
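(For the record, the usual way to quantify that last concern: crosstalk is measured per eye as the fraction of the unintended image that leaks through, crosstalk = (L_leakage − L_black) / (L_signal − L_black), with L denoting measured luminances. That is the common display-metrology convention, not a zSpace-specific metric.)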


The reality of head-mounted displays

So it appears the Oculus Rift is really happening. A buddy of mine went in early on the Kickstarter, and his will supposedly be in the mail some time this week. In a way, the Oculus Rift, or, more precisely, the most recent foray of VR into the mainstream that it embodies, was the reason I started this blog in the first place. I’m very much looking forward to it (more on that below), but I’m also somewhat worried that the huge level of pre-release excitement in the gaming world might turn into a backlash against VR in general. So I made a video laying out my opinions (see Figure 1, or the embedded video below).

Figure 1: Still from a video describing how head-mounted displays should be used to create convincing virtual worlds.


When the novelty is gone…

I just found this old photo on one of my cameras, and it’s too good not to share. It shows former master’s student Peter Gold (now in the PhD program at UT Austin) working with a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture after the April 2010 earthquake (here is the full-resolution picture, for the curious).

Figure 1: Former master’s student Peter Gold in the CAVE, analyzing a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture.


Intel’s “perceptual computing” initiative

I went to the Sacramento Hacker Lab last night, to see a presentation by Intel about their soon-to-be-released “perceptual computing” hardware and software. Basically, this is Intel’s answer to the Kinect: a combined color and depth camera with noise- and echo-cancelling microphones, and an integrated SDK giving access to derived head tracking, finger tracking, and voice recording data.

Figure 1: What perceptual computing might look like at some point in the future, according to the overactive imaginations of Intel marketing people. Original image name: “Security Force Field.jpg”. Oh, sure.


Virtual clay modeling with 3D input devices

It’s funny how suddenly the idea of virtual sculpting or virtual clay modeling using 3D input devices is popping up everywhere. The developers behind the Leap Motion cited it as their inspiration for developing the device in the first place, and I recently saw a demo video; Sony has been showing it off as a demo for the upcoming Playstation 4; and I’ve just returned from an event at the Sacramento Hacker Lab, where Intel was trying to get developers excited about their version of the Kinect, or what they call “perceptual computing.” One of the demos they showed was, you guessed it, virtual sculpting (another demo was 3D video of a person embedded into a virtual office; now where have I seen that before?).

So I decided a few days ago to dust off an old toy application (I last showed it in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, which supports bi-manual 6-DOF interaction, a pretty ideal setup for this sort of thing.
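For anyone wondering what is under the hood of such a modeler, the basic recipe: the “clay” is a 3D grid of density values; each 6-DOF controller drives a brush that deposits or carves density around its position; and the isosurface at some threshold is re-extracted, e.g. via marching cubes, after every change for display. A bare-bones sketch of the brush step, with grid layout and names of my own invention (the actual application’s internals may well differ):

    #include <vector>
    #include <cstddef>

    // Density grid for the virtual "clay": N x N x N voxels.
    const int N = 64;
    std::vector<float> density(static_cast<std::size_t>(N) * N * N, 0.0f);

    // Deposit (or, with negative strength, carve) a smooth blob of density
    // around the 6-DOF brush position (bx, by, bz); r is in grid units.
    // After each stroke, the isosurface at e.g. 0.5 would be re-extracted
    // via marching cubes for rendering.
    void applyBrush(float bx, float by, float bz, float r, float strength)
    {
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x)
                {
                    float dx = x - bx, dy = y - by, dz = z - bz;
                    float d2 = dx*dx + dy*dy + dz*dz;
                    if (d2 < r*r)
                    {
                        float w = 1.0f - d2/(r*r);   // falloff toward brush edge
                        float& v = density[(static_cast<std::size_t>(z)*N + y)*N + x];
                        v += strength*w;
                        if (v < 0.0f) v = 0.0f;      // clamp to valid density range
                        if (v > 1.0f) v = 1.0f;
                    }
                }
    }

With two Hydra handles, a brush like this can run from both hands at once, which is where the bi-manual 6-DOF interaction pays off.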


Low-cost VR for materials science

In my ongoing series on VR’s stubborn refusal to just get on with it and croak already, here’s an update from the materials science front. Lilian Dávila, former UC Davis grad student and now a professor at UC Merced, was recently featured in a three-part series about cutting-edge digital research at UC Merced, produced by the PR arm of the University of California. Here’s the 10-minute short focusing on her use of low-cost holographic displays for interactive design and analysis of nanostructures.


Of CAVEs and Curiosity: Imaging and Imagination in Collaborative Research

On Monday, 03/04/2013, Dawn Sumner, one of KeckCAVES’ core members, gave a talk in UC Berkeley’s Art, Technology, and Culture lecture series, together with Meredith Tromble of the San Francisco Art Institute. The talk’s title was “Of CAVEs and Curiosity: Imaging and Imagination in Collaborative Research,” and it can be viewed online (1:12:55 total length: a 50-minute talk followed by 25 minutes of lively discussion).

While the talk is primarily about the “Dream Vortex,” an evolving virtual reality art project led by Dawn and Meredith involving KeckCAVES hardware (CAVE and low-cost VR systems) and software, Dawn also touches on several of her past and present scientific (and art!) projects with KeckCAVES, including her work on ancient microbialites, exploration of live stromatolites in ice-covered lakes in Antarctica, our previous collaboration with performing artists, and, most recently, her leadership role with NASA’s Curiosity Mars rover mission.

The most interesting aspect of this talk, for me, was that the art project, and all the software development for it, are done by the “other” part of the KeckCAVES project: the more mathematically oriented, complex-systems cluster around Jim Crutchfield of UC Davis’ Complexity Sciences Center and his post-docs and graduate students. In practice, this means that I saw some of the software for the first time, and also heard about some problems the developers ran into that I was completely unaware of. This is interesting because it means that the Vrui VR toolkit, on which all this software is based, is maturing from a private pet project into something that’s actually being used by parties who are not directly collaborating with me.