Elon Musk discovers AR/VR

Serial entrepreneur Elon Musk posted this double whammy of cryptic messages to his Twitter account on August 23rd:

@elonmusk: We figured out how to design rocket parts just w hand movements through the air (seriously). Now need a high frame rate holograph generator.

@elonmusk: Will post video next week of designing a rocket part with hand gestures & then immediately printing it in titanium

As there are no further details, and the video is now slightly delayed (per Twitter as of September 2nd: “@elonmusk: Video was done last week, but needs more work. Aiming to publish link in 3 to 4 days.”), it’s time to speculate! I was hoping to have seen the video by now, but oh well; a deadline is a deadline.

First of all: what’s he talking about? My best guess is a free-hand, direct-manipulation, 6-DOF user interface for a 3D computer-aided design (CAD) program. In other words, something roughly like this (just take away the hand-held devices and substitute NURBS surfaces and rocket parts for atoms and molecules, but leave the interaction method and everything else the same):
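However the final video ends up looking, the core of a grab-and-drag 6-DOF interface is tiny: while a part is grabbed, you apply the tracked device’s incremental motion to it every frame, so the part behaves as if rigidly glued to the hand. A minimal sketch, with hypothetical types rather than any particular toolkit’s API:

```cpp
#include <array>

// Minimal rigid-body pose: 3x3 rotation matrix plus translation.
struct Pose {
    std::array<std::array<double,3>,3> R; // rotation
    std::array<double,3> t;               // translation
};

// result = a * b (apply b first, then a)
Pose compose(const Pose& a, const Pose& b) {
    Pose c{};
    for (int i=0;i<3;++i)
        for (int j=0;j<3;++j)
            for (int k=0;k<3;++k) c.R[i][j]+=a.R[i][k]*b.R[k][j];
    for (int i=0;i<3;++i) {
        c.t[i]=a.t[i];
        for (int k=0;k<3;++k) c.t[i]+=a.R[i][k]*b.t[k];
    }
    return c;
}

// Rigid inverse: R^T, -R^T t.
Pose inverse(const Pose& p) {
    Pose inv{};
    for (int i=0;i<3;++i)
        for (int j=0;j<3;++j) inv.R[i][j]=p.R[j][i];
    for (int i=0;i<3;++i)
        for (int k=0;k<3;++k) inv.t[i]-=inv.R[i][k]*p.t[k];
    return inv;
}

// On grab, remember both poses; while dragging, keep the object
// rigidly attached to the hand by applying the device's motion
// since the moment of grabbing.
struct Grab {
    Pose devAtGrab, objAtGrab;
    Pose dragged(const Pose& devNow) const {
        return compose(compose(devNow, inverse(devAtGrab)), objAtGrab);
    }
};
```

The same few lines work for any tracked 6-DOF device, which is why the interaction method transfers so directly from molecules to rocket parts.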

Continue reading

When the novelty is gone…

I just found this old photo on one of my cameras, and it’s too good not to share. It shows former master’s student Peter Gold (now in the PhD program at UT Austin) working with a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture after the April 2010 earthquake (here is the full-resolution picture, for the curious).

Figure 1: Former master’s student Peter Gold in the CAVE, analyzing a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture.

Continue reading

How Milo met CAVE

I just read an interesting article, a behind-the-scenes look at the infamous “Milo” demo that Peter Molyneux gave at E3 2009 to introduce Project Natal, i.e., Kinect.

This article relates to VR in two ways. First, the usual progression of overhyping the capabilities of some new technology and then falling flat on one’s face, because not even one’s own developers know what those capabilities actually are, should be very familiar to anyone working in the VR field.

But here’s the quote that really got my interest, and the second connection to VR (emphasis is mine):

Others recall worrying about the presentation not being live, and thinking people might assume it was fake. Milo worked well, they say, but filming someone playing produced an optical illusion where it looked like Milo was staring at the audience rather than the player. So for the presentation, the team hired an actress to record a version of the sequence that would look normal on camera, then had her pretend to play along with the recording. … “We brought [Claire] in fairly late, probably in the last two or three weeks before E3, because we couldn’t get it to [look right]” says a Milo team member. “And we said, ‘We can’t do this. We’re gonna have to make a video.’ So she acted to a video. “Was that obvious to you?” Following Molyneux’s presentation, fans picked apart the video, noting that it looked fake in certain places.

Gee, does that sound familiar? This is, of course, exactly the problem posed by filming a holographic display and the person inside interacting with it. In a holographic display, the images on the screens are generated for the precise point of view of the person using it, not for the camera, which means the footage looks wrong when filmed as-is. If, on the other hand, it’s filmed so that it looks right on camera, then the person inside will have a very hard time using the display properly. Catch-22.
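To make the geometry of the problem concrete: in a head-tracked display, each frame’s view is an off-axis frustum computed from the tracked eye position relative to the fixed screen, so the image is only undistorted from that single spot. Here is a minimal sketch of the standard construction (following Kooima’s well-known generalized perspective projection; the vector helpers are my own, and this is not Vrui’s actual API):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
    double l = std::sqrt(dot(v,v));
    return {v.x/l, v.y/l, v.z/l};
}

// pa, pb, pc: screen's lower-left, lower-right, upper-left corners;
// pe: tracked eye position; n: near-plane distance.
// Computes the glFrustum-style extents l, r, b, t.
void offAxisFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 pe, double n,
                    double& l, double& r, double& b, double& t)
{
    Vec3 vr = normalize(sub(pb, pa));   // screen right axis
    Vec3 vu = normalize(sub(pc, pa));   // screen up axis
    Vec3 vn = normalize(cross(vr, vu)); // screen normal, toward viewer

    Vec3 va = sub(pa, pe), vb = sub(pb, pe), vc = sub(pc, pe);
    double d = -dot(va, vn);            // eye-to-screen distance

    // The extents depend on the eye position, so a camera standing
    // anywhere else sees a distorted image: that is the whole
    // filming problem in four lines.
    l = dot(vr, va) * n / d;
    r = dot(vr, vb) * n / d;
    b = dot(vu, va) * n / d;
    t = dot(vu, vc) * n / d;
}
```

Feed the camera’s position in as pe and the footage looks right, at the cost of making the display unusable for the person inside; that is the Catch-22 in code form.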

With the “Milo” demo, the problem was similar. Because the game was set up to interact with whoever was watching it, it ended up interacting with the camera, so to speak, instead of with the player. Now, if the Milo software had been built with the flexibility of proper VR software, adapting the character’s gaze direction and the like to a filming setting would have been an easy fix. But since game software never had to deal with this kind of non-rigid environment in the past, it typically ends up fully vertically integrated, and making this tiny change would probably have taken months of work (which is kind of what I meant when I said above that “not even one’s own developers know what the new technology’s capabilities actually are”). Am I saying that Milo failed because of the demo video? No. But I don’t think it helped, either.
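To illustrate the kind of flexibility I mean: in properly layered VR software, “the viewer” is an abstraction, so retargeting a character’s gaze from the tracked player to a fixed film camera is a one-line change at setup time. A sketch with entirely hypothetical names (obviously not the actual Milo code):

```cpp
struct Vec3 { double x, y, z; };

// The character's gaze logic only ever sees this interface.
struct ViewerSource {
    virtual Vec3 position() const = 0;
    virtual ~ViewerSource() {}
};

// Normal operation: follow the player's tracked head.
struct TrackedHead : ViewerSource {
    Vec3 position() const override { return readHeadTracker(); }
    static Vec3 readHeadTracker() { return {0.0, 1.7, 2.0}; } // placeholder
};

// Film shoot: stare at the fixed camera instead.
struct FixedCamera : ViewerSource {
    Vec3 pos;
    explicit FixedCamera(Vec3 p) : pos(p) {}
    Vec3 position() const override { return pos; }
};

// Gaze code aims the character's eyes here, never caring which
// concrete viewer it was handed.
Vec3 gazeTarget(const ViewerSource& viewer) { return viewer.position(); }
```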

The take-home message here is that mainstream games are slowly converging towards approaches that have been embodied in proper VR software for a long time now, without really noticing it, and are repeating old mistakes along the way. The Oculus Rift will bring that front and center. And I am really hoping it won’t fall flat on its face simply because software developers didn’t do their homework.

Virtual clay modeling with 3D input devices

It’s funny: suddenly the idea of virtual sculpting or virtual clay modeling using 3D input devices is popping up everywhere. The developers behind the Leap Motion stated it as their inspiration for developing the device in the first place, and I recently saw a demo video; Sony has recently been showing it off as a demo for the upcoming Playstation 4; and I’ve just returned from an event at the Sacramento Hacker Lab, where Intel was trying to get developers excited about their version of the Kinect, or what they call “perceptual computing.” One of the demos they showed was, you guessed it, virtual sculpting (another demo was 3D video of a person embedded into a virtual office; now where have I seen that before?).

So a few days ago I decided to dust off an old toy application (I last showed it in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, which supports bi-manual 6-DOF interaction, a pretty ideal setup for this sort of thing.
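The machinery behind such a modeler is pleasantly simple. As a toy sketch (my reconstruction for illustration, not the application’s actual code): “clay” is a scalar density field on a 3D grid, each 6-DOF controller drives a brush that deposits or carves density, and the visible surface is an isosurface of that field:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Scalar density grid holding the virtual clay.
struct Grid {
    int n;                 // grid resolution per axis
    double cell;           // cell size in world units
    std::vector<float> d;  // density values, n*n*n
    Grid(int n_, double cell_)
        : n(n_), cell(cell_), d(std::size_t(n_)*n_*n_, 0.0f) {}
    float& at(int x, int y, int z) { return d[(std::size_t(z)*n + y)*n + x]; }
};

// Deposit (sign > 0) or carve (sign < 0) clay around the brush tip,
// which tracks one of the 6-DOF controllers.
void applyBrush(Grid& g, double bx, double by, double bz,
                double radius, float sign)
{
    int r = int(radius / g.cell) + 1;
    int cx = int(bx / g.cell), cy = int(by / g.cell), cz = int(bz / g.cell);
    for (int z = cz - r; z <= cz + r; ++z)
        for (int y = cy - r; y <= cy + r; ++y)
            for (int x = cx - r; x <= cx + r; ++x) {
                if (x < 0 || y < 0 || z < 0 || x >= g.n || y >= g.n || z >= g.n)
                    continue;
                double dx = x*g.cell - bx, dy = y*g.cell - by, dz = z*g.cell - bz;
                double dist = std::sqrt(dx*dx + dy*dy + dz*dz);
                if (dist < radius)
                    g.at(x, y, z) += sign * float(1.0 - dist/radius); // smooth falloff
            }
    // After brushing, re-extract the isosurface mesh over the touched
    // region (marching cubes; extraction itself omitted for brevity).
}
```

One common way to keep the isosurface update real-time is to re-run marching cubes only over the cells the brush actually touched, rather than over the whole grid every frame.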

Continue reading

Low-cost VR for materials science

In my ongoing series on VR’s stubborn refusal to just get on with it and croak already, here’s an update from the materials science front. Lilian Dávila, former UC Davis grad student and now professor at UC Merced, was recently featured in a three-part series about cutting-edge digital research at UC Merced, produced by the PR arm of the University of California. Here’s the 10-minute short focusing on her use of low-cost holographic displays for interactive design and analysis of nanostructures:

Continue reading

KeckCAVES on Mars, pt. 4

I mentioned before that we had a professional film crew in the CAVE a while back, to produce promotional videos for the University of California’s “Onward California” PR program. Finally, the finished videos have been posted on the Office of the President’s official YouTube channel. Unlike my own recent CAVE videos, these have excellent audio.

Figure 1: Dawn Sumner, member of the NASA Curiosity Mars rover mission’s science team, interacting with a life-size 3D model of the rover in the UC Davis KeckCAVES holographic display environment. Still image taken from “The surface of Mars.”

These short videos focus on Dawn Sumner, a professor in the UC Davis Department of Geology and a KeckCAVES core member. This time, Dawn is wearing her hat as a planetary explorer and talking about NASA’s Curiosity Mars rover mission, and her role in it.

Continue reading

Of CAVEs and Curiosity: Imaging and Imagination in Collaborative Research

On Monday, 03/04/2013, Dawn Sumner, one of KeckCAVES’ core members, gave a talk in UC Berkeley’s Art, Technology, and Culture lecture series, together with Meredith Tromble of the San Francisco Art Institute. The talk’s title was “Of CAVEs and Curiosity: Imaging and Imagination in Collaborative Research,” and it can be viewed online (1:12:55 total length: about 50 minutes of talk and 25 minutes of lively discussion).

While the talk is primarily about the “Dream Vortex,” an evolving virtual reality art project led by Dawn and Meredith involving KeckCAVES hardware (CAVE and low-cost VR systems) and software, Dawn also touches on several of her past and present scientific (and art!) projects with KeckCAVES, including her work on ancient microbialites, exploration of live stromatolites in ice-covered lakes in Antarctica, our previous collaboration with performing artists, and — most recently — her leadership role with NASA’s Curiosity Mars rover mission.

The most interesting aspect of this talk, for me, was that the art project, and all the software development for it, are done by the “other” part of the KeckCAVES project: the more mathematically and complex-systems-aligned cluster around Jim Crutchfield of UC Davis’ Complexity Sciences Center and his post-docs and graduate students. In practice, this means that I saw some of the software for the first time, and also heard about some problems the developers ran into that I was completely unaware of. This is interesting because it means that the Vrui VR toolkit, on which all this software is based, is maturing from a private pet project into something that’s actually being used by parties who are not directly collaborating with me.

On the road for VR part II: Tahoe Environmental Research Center, Incline Village, Lake Tahoe

We have been collaborating with the UC Davis Tahoe Environmental Research Center (TERC) for a long time. Back in — I think — 2006, we helped them purchase a large-screen stereoscopic projection system for the Otellini 3-D Visualization Theater and installed a set of Vrui-based KeckCAVES visualization applications for guided virtual tours of Lake Tahoe and the entire Earth. We have since worked on joint projects, primarily related to informal science education. Currently, TERC is one of the collaborators in the 3D lake science informal science education grant that spawned the Augmented Reality Sandbox.

The original stereo projection system, driven by a 2006 Mac Pro, was getting long in the tooth, and in the process of upgrading to higher-resolution and brighter projectors, we finally convinced the powers-that-be to get a top-of-the-line Linux PC instead of yet another Mac (for significant savings, one might add). While the Ubuntu OS and the Vrui application set had already been pre-installed by KeckCAVES staff in the home office, I still had to go up to the lake to configure the operating system and Vrui to render to the new projectors, update all Vrui software, align the projectors, and train the local docents in using Linux and the new Vrui application versions.

Continue reading

Meet the LiDAR Viewer

I’ve recently realized that I should urgently write about LiDAR Viewer, a Vrui-based interactive visualization application for massive-scale LiDAR (Light Detection and Ranging, essentially 3D laser scanning writ large) data.

Figure 1: Photo of a user viewing, and extracting features from, an aerial LiDAR scan of the Cosumnes River area in central California in a CAVE.

I’ve also realized, after going to the ILMF ’13 meeting, that I need to make a new video about LiDAR Viewer, demonstrating the rendering capabilities of the current and upcoming versions. This occurred to me when the movie I showed during my talk had a copyright notice from 2006(!) on it.
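For those wondering how any viewer stays interactive on LiDAR scans with billions of points: the usual trick, and my assumption about the general approach rather than LiDAR Viewer’s actual code, is an out-of-core octree in which every node stores a coarse subsample of its subtree’s points, so the renderer only descends as far as the current view actually demands. A sketch of that traversal, with hypothetical structures and stubbed-out rendering:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Current camera position (placeholder for the real view state).
static float g_eye[3] = {0.0f, 0.0f, 0.0f};

struct Node {
    float center[3];        // node cube's center in world space
    float size;             // node cube's edge length
    std::size_t numPoints;  // size of this node's point subsample
    Node* children[8];      // null where the subtree is absent or unloaded
};

static float distanceToEye(const Node& n)
{
    float dx = n.center[0]-g_eye[0], dy = n.center[1]-g_eye[1],
          dz = n.center[2]-g_eye[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Stand-in for actual point rendering (and, in a real system,
// frustum culling and asynchronous loading from disk).
static void drawPoints(const Node& n)
{
    std::printf("draw %zu points\n", n.numPoints);
}

// Recurse only while a node's projected size exceeds the threshold;
// distant geometry is covered by its ancestors' coarser subsamples.
void render(const Node& node, float lodThreshold)
{
    float detail = node.size / (distanceToEye(node) + 1e-6f);
    drawPoints(node); // each level refines its ancestors' subsample
    if (detail > lodThreshold)
        for (const Node* c : node.children)
            if (c) render(*c, lodThreshold);
}
```

A threshold like this directly trades visual fidelity for frame rate, which is what lets the same data set run on anything from a laptop to a CAVE.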

Continue reading