Serial entrepreneur Elon Musk posted this double whammy of cryptic messages to his Twitter account on August 23rd:
@elonmusk: We figured out how to design rocket parts just w hand movements through the air (seriously). Now need a high frame rate holograph generator.
@elonmusk: Will post video next week of designing a rocket part with hand gestures & then immediately printing it in titanium
As there are no further details, and the video is now slightly delayed (per Twitter as of September 2nd: @elonmusk: Video was done last week, but needs more work. Aiming to publish link in 3 to 4 days.), it’s time to speculate! I was hoping to have seen the video by now, but oh well. A deadline is a deadline.
First of all: what’s he talking about? My best guess is a free-hand, direct-manipulation, 6-DOF user interface for a 3D computer-aided design (CAD) program. In other words, something roughly like this (just take away the hand-held devices and substitute NURBS surfaces and rocket parts for atoms and molecules, but leave the interaction method and everything else the same):
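Whatever the video ends up showing, the core of such a direct-manipulation interface is small enough to sketch. Here is a minimal version, assuming hand poses arrive as 4×4 rigid-body matrices; the function names are mine, not from any actual CAD package: while a tracked hand “grabs” an object, the object rigidly follows the hand.

```python
# A minimal sketch of 6-DOF grab-and-drag. All poses are 4x4
# homogeneous rigid-body transforms in world space; the tracker
# and object APIs are hypothetical, only the math is the point.
import numpy as np

def start_grab(hand_pose, object_pose):
    """On grab, store the object's pose in the hand's local frame."""
    return np.linalg.inv(hand_pose) @ object_pose

def drag(hand_pose, grab_offset):
    """Each frame, re-attach the stored offset to the current hand
    pose, so the object follows every translation and rotation."""
    return hand_pose @ grab_offset

# Usage per frame while the grab gesture is held:
#   offset = start_grab(hand_pose_at_grab, object_pose_at_grab)
#   object_pose = drag(current_hand_pose, offset)
```

Bi-manual rotation/scaling and the actual surface editing would be layered on top of this one grab transform.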
I mentioned in a previous post that we took our AR Sandbox to the Augmented World Expo a while back. While there, I was asked to monologue about the sandbox for a video segment to be uploaded to the expo web site. Well, I just found the video on YouTube:
I just found this old photo on one of my cameras, and it’s too good not to share. It shows former master’s student Peter Gold (now in the PhD program at UT Austin) working with a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture after the April 2010 earthquake (here is the full-resolution picture, for the curious).
Figure 1: Former master’s student Peter Gold in the CAVE, analyzing a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture.
This article is related to VR in two ways. First, the usual progression of overhyping the capabilities of some new technology and then falling flat on one’s face because not even one’s own developers know what the new technology’s capabilities actually are is something that should be very familiar to anyone working in the VR field.
But here’s the quote that really got my interest (emphasis is mine):
Others recall worrying about the presentation not being live, and thinking people might assume it was fake. Milo worked well, they say, but filming someone playing produced an optical illusion where it looked like Milo was staring at the audience rather than the player. So for the presentation, the team hired an actress to record a version of the sequence that would look normal on camera, then had her pretend to play along with the recording. … “We brought [Claire] in fairly late, probably in the last two or three weeks before E3, because we couldn’t get it to [look right],” says a Milo team member. “And we said, ‘We can’t do this. We’re gonna have to make a video.’ So she acted to a video. Was that obvious to you?” Following Molyneux’s presentation, fans picked apart the video, noting that it looked fake in certain places.
Gee, sounds familiar? This is, of course, the exact problem posed by filming a holographic display, and a person inside interacting with it. In a holographic display, the images on the screens are generated for the precise point of view of the person using it, not for the camera. This means the footage looks wrong when filmed straight up. If, on the other hand, it’s filmed so that it looks right on camera, then the person inside will have a very hard time using the display properly. Catch-22.
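To see why the two viewpoints are mutually exclusive, consider how a head-tracked display computes its images. The following is the standard generalized off-axis projection (in the spirit of Robert Kooima’s well-known formulation), not any particular system’s actual code: the view frustum is derived from the fixed screen rectangle and a single eye position, and only from that exact position does the image look correct.

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near, far):
    """Projection for a fixed screen rectangle as seen from 'eye'.
    pa, pb, pc: the screen's lower-left, lower-right, upper-left corners."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
    vn = np.cross(vr, vu)                      # screen normal, towards eye
    d = -np.dot(pa - eye, vn)                  # eye-to-screen distance
    # Frustum extents on the near plane, scaled down from the screen plane:
    l = np.dot(vr, pa - eye) * near / d
    r = np.dot(vr, pb - eye) * near / d
    b = np.dot(vu, pa - eye) * near / d
    t = np.dot(vu, pc - eye) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    # Rotate the world into screen-aligned axes, then move the eye to the origin:
    R = np.array([[*vr, 0], [*vu, 0], [*vn, 0], [0, 0, 0, 1]])
    T = np.eye(4); T[:3, 3] = -eye
    return P @ R @ T
```

Pass in the tracked viewer’s eye and the display works for the viewer; pass in the camera’s position and the footage looks right while the person inside sees a skewed world. No single choice of that one argument satisfies both.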
With the “Milo” demo, the problem was similar. Because the game was set up to interact with whoever was watching it, it ended up interacting with the camera, so to speak, instead of with the player. Now, if the Milo software had been built with the level of flexibility of proper VR software, adapting the character’s gaze direction etc. to a filming setting would have been an easy fix (sketched below). But since game software never had to deal with this kind of non-rigid environment in the past, it typically ends up fully vertically integrated, and making this tiny change would probably have taken months of work (that’s kind of what I meant when I said “not even one’s own developers know what the new technology’s capabilities actually are” above). Am I saying that Milo failed because of the demo video? No. But I don’t think it helped, either.
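Here is that sketch: the gaze fix really is tiny once “the viewer” is a parameter instead of a baked-in assumption. Positions and names are made up for illustration; this is emphatically not Lionhead’s code.

```python
import numpy as np

def gaze_rotation(head_pos, target_pos, up=np.array([0.0, 1.0, 0.0])):
    """Rotation matrix turning a character's head towards target_pos.
    (Degenerates if the gaze direction is parallel to 'up'.)"""
    fwd = target_pos - head_pos
    fwd /= np.linalg.norm(fwd)
    right = np.cross(up, fwd)
    right /= np.linalg.norm(right)
    true_up = np.cross(fwd, right)
    return np.column_stack((right, true_up, fwd))  # local x, y, z axes

# Hypothetical positions; the whole "fix" is choosing which point
# counts as the viewer:
character_head = np.array([0.0, 1.6, 0.0])
player_head    = np.array([0.5, 1.7, 2.0])   # from head tracking
camera         = np.array([3.0, 1.5, 4.0])   # from the filming setup

filming = True
viewer = camera if filming else player_head
head_rotation = gaze_rotation(character_head, viewer)
```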
The take-home message here is that mainstream games are slowly converging towards approaches that have been embodied in proper VR software for a long time now, without really noticing it, and are repeating old mistakes. The Oculus Rift will really bring that out front and center. And I am really hoping it won’t fall flat on its face simply because software developers didn’t do their homework.
So I decided a few days ago to dust off an old toy application (I showed it last in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, which supports bi-manual 6-DOF interaction, a pretty ideal setup for this sort of thing:
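The guts of such a modeler are small enough to sketch. In this toy version, the “clay” is a 3D density grid, a brush stamps soft blobs along the hand’s path, and the surface is re-extracted as an isosurface. The blob-shaped brush and scikit-image’s marching cubes are stand-ins, not what the application actually uses, and a real-time version would only re-extract around the edited region.

```python
# Toy volumetric "clay": a density grid edited by a soft spherical
# brush, with the surface pulled out via marching cubes.
import numpy as np
from skimage.measure import marching_cubes  # pip install scikit-image

GRID = 64
clay = np.zeros((GRID, GRID, GRID), dtype=np.float32)

def apply_brush(center, radius, strength):
    """Deposit (strength > 0) or carve (strength < 0) a soft blob."""
    idx = np.indices(clay.shape).transpose(1, 2, 3, 0)  # voxel coordinates
    dist = np.linalg.norm(idx - center, axis=-1)
    clay[...] += strength * np.clip(1.0 - dist / radius, 0.0, None)

# One "hand movement": a stroke of overlapping blobs along a line.
for t in np.linspace(0.0, 1.0, 20):
    pos = np.array([20, 32, 32]) * (1 - t) + np.array([44, 32, 32]) * t
    apply_brush(pos, radius=8, strength=0.2)

# Extract the isosurface at density 0.5 for rendering.
verts, faces, normals, _ = marching_cubes(clay, level=0.5)
```

Hook the brush position and radius up to one 6-DOF controller per hand, and you have the bi-manual setup shown in the video.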
Figure 1: Dawn Sumner, member of the NASA Curiosity Mars rover mission’s science team, interacting with a life-size 3D model of the rover in the UC Davis KeckCAVES holographic display environment. Still image taken from “The surface of Mars.”
The most interesting aspect of this talk, for me, was that the art project, and all the software development for it, was done by the “other” part of the KeckCAVES project, the more mathematically inclined, complex-systems-aligned cluster around Jim Crutchfield of UC Davis’ Complexity Sciences Center and his post-docs and graduate students. In practice, this means that I saw some of the software for the first time, and also heard about some problems the developers ran into that I was completely unaware of. This is interesting because it means that the Vrui VR toolkit, on which all this software is based, is maturing from a private pet project into something that’s actually being used by parties who are not directly collaborating with me.
The original stereo projection system, driven by a 2006 Mac Pro, was getting long in the tooth, and in the process of upgrading to higher-resolution and brighter projectors, we finally convinced the powers-that-be to get a top-of-the-line Linux PC instead of yet another Mac (for significant savings, one might add). While the Ubuntu OS and the Vrui application set had already been pre-installed by KeckCAVES staff in the home office, I still had to go up to the lake to configure the operating system and Vrui to render to the new projectors, update all Vrui software, align the projectors, and train the local docents in using Linux and the new Vrui application versions.
I’ve recently realized that I should urgently write about LiDAR Viewer, a Vrui-based interactive visualization application for massive-scale LiDAR (Light Detection and Ranging, essentially 3D laser scanning writ large) data.
Figure 1: Photo of a user viewing, and extracting features from, an aerial LiDAR scan of the Cosumnes River area in central California in a CAVE.
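What makes “massive-scale” workable at interactive rates is not drawing every point. As far as I can summarize it here, LiDAR Viewer organizes the points into an out-of-core octree whose nodes store representative subsamples, so each frame can spend a fixed point budget at whatever resolution the view requires. Below is a tiny in-memory caricature of that idea; the node layout and parameters are mine, not LiDAR Viewer’s actual data format.

```python
# Caricature of multiresolution point rendering: each octree node
# keeps a random subsample of its subtree, so a fixed point budget
# can be spent from coarse to fine. The real viewer streams such
# nodes from disk instead of holding everything in memory.
import numpy as np

class OctreeNode:
    def __init__(self, points, max_points=1024):
        pick = np.random.choice(len(points),
                                min(max_points, len(points)), replace=False)
        self.sample = points[pick]      # this node's representative points
        self.children = []
        if len(points) > max_points:
            center = points.mean(axis=0)
            for octant in range(8):
                mask = np.ones(len(points), dtype=bool)
                for axis in range(3):
                    side = bool((octant >> axis) & 1)
                    mask &= (points[:, axis] >= center[axis]) == side
                if mask.any() and not mask.all():  # guard against degeneracy
                    self.children.append(OctreeNode(points[mask], max_points))

def collect(node, budget):
    """Breadth-first refinement until the point budget is spent;
    a real viewer would refine view-dependently, front to back."""
    out, queue, used = [], [node], 0
    while queue and used < budget:
        n = queue.pop(0)
        out.append(n.sample)
        used += len(n.sample)
        queue.extend(n.children)
    return np.vstack(out)

# Example: a 200k-point cloud drawn with a 50k-point budget.
cloud = np.random.rand(200_000, 3).astype(np.float32)
lod = collect(OctreeNode(cloud), budget=50_000)
```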
I’ve also realized, after going to the ILMF ’13 meeting, that I need to make a new video about LiDAR Viewer, demonstrating the rendering capabilities of the current and upcoming versions. This occurred to me when the movie I showed during my talk had a copyright notice from 2006(!) on it.