I just released version 3.0 of the Vrui VR toolkit. One of the major new features is native support for the Oculus Rift head-mounted display, including its low-latency inertial 3-DOF (orientation-only) tracker and post-rendering lens distortion correction. So I thought it’s time for the first (really?) Vrui post in this venue.
What is Vrui, and why should I care?
Glad you’re asking. In a nutshell, Vrui (pronounced to start with vroom, and rhyme with gooey) is a high-level toolkit to develop highly interactive applications aimed at holographic (or fully-immersive, or VR, or whatever you want to call them) display environments. A large selection of videos showing many Vrui applications running in a wide variety of environments can be found on my YouTube channel. To you as a developer, this means you write your application once, and users can run it in any kind of environment without you having to worry about it. If new input or output hardware comes along, it’s Vrui’s responsibility to support it, not yours.
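To give a concrete flavor of what that looks like from the developer’s side, here is a rough sketch of a minimal Vrui application. It follows the Vrui::Application pattern; take the exact headers, constructor signature, and macro name as approximations from memory rather than authoritative API documentation:

```cpp
#include <GL/gl.h>
#include <Vrui/Application.h>

// A minimal Vrui application: derive from Vrui::Application and
// override display(). Vrui itself handles windows, stereo rendering,
// head tracking, and input devices for whatever environment the
// application happens to run in.
class HelloVrui : public Vrui::Application
{
public:
    HelloVrui(int& argc, char**& argv)
        : Vrui::Application(argc, argv)
    {
    }

    // Called once per viewport per frame; the application just draws
    // its scene, and Vrui applies the environment-specific transforms.
    virtual void display(GLContextData& contextData) const
    {
        glBegin(GL_LINES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glEnd();
    }
};

// Macro expanding to a main() that runs the application's event loop.
VRUI_APPLICATION_RUN(HelloVrui)
```

The same source, unchanged, runs on a desktop with mouse and keyboard, on a 3D TV with a Razer Hydra, or in a CAVE; that is the "write once" claim in practice.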
So I decided a few days ago to dust off an old toy application (I showed it last in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, which supports bi-manual 6-DOF interaction, a pretty ideal setup for this sort of thing:
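For the curious, the basic mechanism behind such a modeler is straightforward: the “clay” is a regular 3D density grid that the tracked input device edits directly, and the isosurface at some density threshold is re-extracted over the touched region after every edit. Here is a minimal sketch of the grid-editing step (all names and parameters are mine, not the actual application’s):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Regular 3D scalar grid serving as the virtual "clay"; an isosurface
// at some threshold is re-extracted (e.g., via marching cubes) over
// the edited region after each change.
struct DensityGrid
{
    int nx, ny, nz;            // Grid dimensions
    float cellSize;            // Grid spacing in model units
    std::vector<float> values; // nx*ny*nz scalar values

    float& at(int x, int y, int z)
    {
        return values[(z * ny + y) * nx + x];
    }

    // Deposit (or carve, with negative amount) a Gaussian blob of
    // "clay" centered at the 6-DOF device position p[3].
    void addBlob(const float p[3], float radius, float amount)
    {
        int r = int(std::ceil(radius / cellSize));
        int cx = int(p[0] / cellSize);
        int cy = int(p[1] / cellSize);
        int cz = int(p[2] / cellSize);
        for (int z = std::max(cz - r, 0); z <= std::min(cz + r, nz - 1); ++z)
            for (int y = std::max(cy - r, 0); y <= std::min(cy + r, ny - 1); ++y)
                for (int x = std::max(cx - r, 0); x <= std::min(cx + r, nx - 1); ++x)
                {
                    float dx = x * cellSize - p[0];
                    float dy = y * cellSize - p[1];
                    float dz = z * cellSize - p[2];
                    float d2 = (dx * dx + dy * dy + dz * dz) / (radius * radius);
                    at(x, y, z) += amount * std::exp(-4.0f * d2);
                }
    }
};
```

With two tracked controllers, one hand can deposit while the other carves, which is exactly the kind of bi-manual interaction the Hydra makes cheap.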
The most interesting aspect of this talk, for me, was that the art project, and all the software development for it, were done by the “other” part of the KeckCAVES project: the more mathematics- and complex systems-oriented cluster around Jim Crutchfield of UC Davis’ Complexity Sciences Center and his post-docs and graduate students. In practice, this means that I saw some of the software for the first time, and also heard about some problems the developers ran into that I had been completely unaware of. This is interesting because it means that the Vrui VR toolkit, on which all this software is based, is maturing from a private pet project into something that is actually being used by parties who are not directly collaborating with me.
But let’s back up a bit. When it comes to VR, there are three prevalent opinions:
It’s a dead technology. It had its day in the early nineties, and there hasn’t been anything new since. After all, the CAVE was invented in ’91 and is basically still the same, and head-mounted displays have been around even longer.
It hasn’t been born yet. But maybe if we wait 10 more years, and there are some significant breakthroughs in display and computer technology, it might become interesting or feasible.
It’s fringe technology. Some weirdos keep picking at it, but it hasn’t ever led to anything interesting or useful, and never will.
One of the mysteries of the modern age is the existence of two distinct lines of graphics cards by the two big manufacturers, Nvidia and ATI/AMD. There are gamer-level cards, and professional-level cards. What are their differences? Obviously, gamer-level cards are cheap, because the companies face stiff competition from each other, and want to sell as many of them as possible to make a profit. So, why are professional-level cards so much more expensive? For comparison, an “entry-level” $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance-equivalent to a GeForce GTX 680 I could find was a Quadro 6000 for a whopping $3660. Granted, the Quadro 6000 has 6GB of video RAM to the GeForce’s 2GB, but that doesn’t explain the difference.
What I haven’t mentioned before is that we have an even lower-cost, and, more importantly, easier to install, alternative using just a 3D TV and a Razer Hydra gaming input device. These environments are not holographic because they don’t have head tracking, but they are still very usable for a large variety of 3D applications. We have several of these systems in production use, and demonstrated them to the public twice, in our booth at the 2011 and 2012 AGU fall meetings. What we found there is that the environments are very easy to use; random visitors walking into our booth and picking up the controllers were able to control fairly complex software in a matter of minutes.
A user controlling a low-cost 3D display (running the Nanotech Construction Kit) with a Razer Hydra 6-DOF tracked input device.
We are currently involved in an NSF-funded project to study the changes in global ocean flow patterns in response to past climate change, specifically the difference in flow patterns between the last glacial maximum (otherwise known as the “Ice Age”, ~25000 years ago) and the Holocene (otherwise known as “today”).
In layman’s terms, the basic idea is to use differences in the chemical composition, particularly the abundance of isotopes of carbon (13C) and oxygen (18O), of benthic core samples collected from the ocean floor all around the world to establish correlations between sampling sites, and from that derive a global flow model that best explains these correlations. (By the way, 13C is not the carbon isotope used in radiocarbon dating; that honor goes to 14C.)
This is a multi-institution collaborative project. The core sample isotope ratios are collected and collated by Lorraine Lisiecki and her graduate students at UC Santa Barbara, and the mathematical method to reconstruct flow patterns based on those samples is developed by Jake Gebbie at Woods Hole Oceanographic Institution. Howard Spero at UC Davis is the overall principal investigator of the project, and UC Davis’ contribution is visualization and analysis software, building on the strengths of the KeckCAVES project. I’ve posted previously about our efforts to construct low-cost immersive display systems at our collaborators’ sites so that they can use the visualization software developed by us in its native habitat, and also collaborate with us and each other remotely in real-time using Vrui’s collaboration infrastructure.
So here is the first major piece of visualization software developed specifically for this project. It was developed by Rolf Westerteiger, a visiting PhD student from Germany, based on the Vrui VR toolkit. Here is Rolf himself, using his application in the CAVE:
PhD student Rolf Westerteiger using his immersive visualization application in the KeckCAVES CAVE.
This application reads a database of core sample compositions created by Lorraine Lisiecki, and a reconstructed 3D flow field created by Jake Gebbie, and puts both into a global three-dimensional context. The software shows a block model of the Earth’s global ocean floor (at the same resolution as the 3D flow field, and vertically exaggerated by a significant factor), and allows a user to interactively query and explore the 3D flow.
The primary flow visualization method is line integral convolution (LIC), which creates dense and intuitive visualizations of complex flows. As LIC works best when applied to 2D surfaces instead of 3D volumes, Rolf’s application is based on a set of interactively controllable surfaces (one sphere of constant depth, two cones of constant latitude, two semicircles of constant longitude) which slice through the implicitly-defined 3D LIC volume. To indicate flow direction, the LIC texture is animated by cycling through a phase offset, and color-coded by either flow velocity or water temperature.
The special thing about this LIC visualization is that the LIC textures are not pre-computed, but generated in real time using the GPU and a set of GLSL shaders. This allows for even more interactive exploration than shown in this first result; a user could specify arbitrary slicing surfaces using tracked 3D input devices, and see the LIC pattern displayed on those surfaces immediately. From our experience with the 3D Visualizer software, which is based on very similar principles, we believe that this will lead to a very powerful exploratory tool.
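For readers who haven’t encountered LIC before, the core of the algorithm is compact enough to sketch here: each output pixel integrates a noise texture forwards and backwards along the streamline passing through it, which smears the noise out along the flow direction. The following is a plain CPU version for a single pixel of a 2D slice, not the application’s actual GLSL code; the animation described above corresponds to advancing the phase parameter every frame:

```cpp
#include <cmath>

// One LIC pixel: integrate a white-noise field along the streamline
// through (x,y) of a 2D vector field, backward and forward.
// sampleNoise and sampleFlow are assumed to be bilinear-interpolating
// accessors into the slice's noise and flow textures.
float licPixel(float x, float y,
               float (*sampleNoise)(float, float),
               void (*sampleFlow)(float, float, float& vx, float& vy),
               int steps, float h, float phase)
{
    float sum = 0.0f, weightSum = 0.0f;
    for (int dir = -1; dir <= 1; dir += 2) // Backward, then forward
    {
        float px = x, py = y;
        for (int i = 0; i < steps; ++i)
        {
            float vx, vy;
            sampleFlow(px, py, vx, vy);
            float len = std::sqrt(vx * vx + vy * vy);
            if (len < 1e-6f)
                break; // Stagnation point; stop integrating

            // Euler step along the normalized flow direction
            px += dir * h * vx / len;
            py += dir * h * vy / len;

            // Periodic filter kernel over arc length; advancing
            // 'phase' per frame makes the pattern crawl downstream
            float w = 0.5f + 0.5f * std::cos(0.4f * dir * i - phase);
            sum += w * sampleNoise(px, py);
            weightSum += w;
        }
    }
    return weightSum > 0.0f ? sum / weightSum : sampleNoise(x, y);
}
```

Running this per fragment on the GPU, as the application does, is what makes re-slicing the volume interactive: moving a slicing surface just means evaluating the same kernel over a new set of pixels.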
A secondary flow visualization method is tracer particles, which can be injected into the global ocean at arbitrary positions using a tracked 3D input device, and which leave behind a trail of their past positions. Together, these two methods provide rich insight into the structure of these reconstructed flows, and especially their evolution over geologic time.
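Tracer particles are conceptually even simpler: each frame, every particle takes one integration step through the flow field, and a ring buffer of past positions provides its trail. A minimal sketch, with the flow-field accessor left as an assumption:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// A tracer particle with a fixed-length trail of past positions,
// stored in a ring buffer so appending a position is O(1).
struct Tracer
{
    Vec3 pos;
    std::vector<Vec3> trail; // Ring buffer of previous positions
    std::size_t head = 0;

    Tracer(const Vec3& start, std::size_t trailLen)
        : pos(start), trail(trailLen, start) {}

    // Advance the particle by one time step dt through the flow field;
    // sampleFlow is assumed to interpolate the gridded velocity data.
    void advect(Vec3 (*sampleFlow)(const Vec3&), float dt)
    {
        trail[head] = pos; // Record current position in the trail
        head = (head + 1) % trail.size();

        // Midpoint (RK2) integration for better accuracy than Euler
        Vec3 v1 = sampleFlow(pos);
        Vec3 mid{pos.x + 0.5f * dt * v1.x,
                 pos.y + 0.5f * dt * v1.y,
                 pos.z + 0.5f * dt * v1.z};
        Vec3 v2 = sampleFlow(mid);
        pos = {pos.x + dt * v2.x, pos.y + dt * v2.y, pos.z + dt * v2.z};
    }
};
```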
A third visualization method is used to put the raw data that were used to create the flow models into context. A set of labels, one for each core sample in the database and each showing the sample’s important isotope ratios, is mapped onto the virtual globe at the samples’ proper positions to enable visual inspection of the flow reconstruction method.
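Placing those labels (and, for that matter, the ocean floor model) on the virtual globe comes down to converting each sample’s latitude, longitude, and depth into Cartesian coordinates, with the depth term scaled by the vertical exaggeration factor. A sketch of that mapping, assuming a spherical Earth:

```cpp
#include <cmath>

// Convert geographic coordinates to Cartesian model coordinates on a
// spherical Earth. depth is in meters below sea level; exaggeration
// scales the radial offset so that sea-floor relief stays visible at
// global scale.
void geoToCartesian(double latDeg, double lonDeg, double depth,
                    double exaggeration, double out[3])
{
    const double earthRadius = 6371000.0; // Mean Earth radius in meters
    double lat = latDeg * M_PI / 180.0;
    double lon = lonDeg * M_PI / 180.0;
    double r = earthRadius - depth * exaggeration;
    out[0] = r * std::cos(lat) * std::cos(lon);
    out[1] = r * std::cos(lat) * std::sin(lon);
    out[2] = r * std::sin(lat);
}
```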
Unfortunately, Rolf had to return to Germany before we were able to film a video showing off all features of his visualization application, so I had to make a video with myself standing in for him:
The next development steps are to replace the ocean floor block model read from the flow file with a high-resolution bathymetry model (see below), and to integrate the visualization application with Vrui’s remote collaboration infrastructure such that it can be used by all collaborators for virtual joint data exploration sessions.
Global high-resolution bathymetry model at 75x vertical exaggeration. The view is centered on the North Atlantic.
I started working on low-cost VR, that is, cheap (at least compared to a CAVE or other high-end system) yet professional-grade holographic display systems, about 4 1/2 years ago, after seeing one at the 2008 IEEE VR conference. It consisted of a first-generation DLP-based projection 3D TV and a NaturalPoint OptiTrack optical tracking system. I put together my own in Summer 2008, and have been building more, or helping others build them, at a steadily increasing rate: one in my lab, one in our med school, one at UC Berkeley, one at UC Merced, one at UC Santa Barbara, a handful more at NASA labs all over the country, and probably some I don’t even know about. Here’s a video showing me using one to explore a CAT scan of a patient with a nasty head fracture:
Back then, I created a new subsite of my web site dedicated to low-cost VR, with a detailed shopping list and detailed installation and configuration instructions. However, I did not update either for a long time afterwards, leaving a badly outdated shopping list and installation instructions that increasingly diverged from the state of the art.

But that has changed recently. As part of an NSF-funded project on paleoceanography, we promised to install two such systems at our partner institutions, University of California, Santa Barbara, and Woods Hole Oceanographic Institution. I installed the first one a couple of months ago. On top of that, I currently have two exchange students from the University of Georgia (this Georgia, not that Georgia) who came here to learn how to build these systems so that they can set one up for their department back home. To train them, I rebuilt my own system from scratch, let them take the lead on rebuilding the one at our medical school, and right now they’re on the east coast to install the new system at WHOI.

Watching “newbies” follow my guide while building a system from scratch allowed me to significantly improve the instructions, to the point that I believe they are now comprehensive and can be followed by first-time builders with some computing knowledge. I also updated the shopping list so that it once again describes a currently available system, at current prices.
So the bottom line is that I now feel comfortable letting people go wild with the low-cost VR subsite and build their own display systems. If no existing equipment (computers, 3D TVs, …) can be used, a very nice, large (65″ TV), and powerful system can be built for around $7000, depending on daily deals. While not exactly cheap-cheap, one has to keep in mind that this is a professional-grade system, fit for scientific and other serious uses.
I should mention that we have an even lower-cost design, replacing the $3500 optical tracking system with a $150 Razer Hydra controller, but there’s a noticeable difference in functionality between the two. I should also mention that there’s a competing design, the IQ Station, but I believe that ours is better (and I’m not biased at all!).