Figure 1: The zSpace holographic display, and what it would really look like when seen from this point of view.
So I drove around the bay to get a close look at the zSpace, to determine its viability for my purpose. Bottom line, it will work (with some issues, more on that below). My primary concerns were threefold: head tracking precision and latency, stylus tracking precision and latency, and stereo quality (i.e., amount of crosstalk between the eyes).
My friend Serban got his Oculus Rift dev kit in the mail today, and he called me over to check it out. I will hold back a thorough evaluation until I get the Rift supported natively in my own VR software, so that I can run a direct head-to-head comparison with my other HMDs, and also my screen-based holographic display systems (the head-tracked 3D TVs, and of course the CAVE), using the same applications. Specifically, I will use the Quake III Arena viewer to test the level of “presence” provided by the Rift; as I mentioned in my previous post, there are some very specific physiological effects brought out by that old chestnut, and my other HMDs are severely lacking in that department, and I hope that the Rift will push it close to the level of the CAVE. But here are some early impressions.
Figure 1: What it would look like to unbox an Oculus VR dev kit, if one were to have such a thing.
I promised I would keep off-topic posts to a minimum, but I have to make an exception for this. I just found out that Roger Ebert died today, at age 70, after a long battle with cancer. This is very sad, and a great loss. There are three primary reasons why I have always stayed aware of Mr. Ebert’s output: I love movies, video games, and 3D, and he had strong opinions on all three of those areas.
When hearing about a movie, my first step is always the Internet Movie Database, and the second step is a click-through to Mr. Ebert’s review. While I didn’t always agree with his opinions, his reviews were always very useful in forming an opinion; and anyway, after having listened to his full-length commentary track on Dark City — something that everybody with even a remote interest in movies or science fiction should check out — he could do no wrong in my book.
I do not want to weigh in on the “video games as art” discussion, because that’s neither here nor there.
However, I do want to address Mr. Ebert’s opinions on stereoscopic movies (I’m not going to say 3D movies!), because that’s close to my heart (and this blog… hey, we’re on topic again!). In a nutshell, he did not like them. At all. And the thing is, I don’t really think they work either. Where I strongly disagreed with him is the reason why they don’t work. For Mr. Ebert, 3D itself was a fundamentally flawed idea in principle. For me, the current implementation of stereoscopy as seen in most movies is deeply flawed (am I going to see “Jurassic Park 3D?” Hell no!). What I’m saying is, 3D can be great; it’s just not done right in most stereoscopic movies, and maybe properly applying it will require a change in the entire idea of what a movie is. I always felt that the end goal of 3D movies should not be to watch the proceedings on a stereoscopic screen from far away, but to be in the middle of the action, as in viewing a theater performance by being on stage amidst the actors.
I had always hoped that Mr. Ebert would at some point see how 3D is supposed to be, and then nudge movie makers towards that ideal. Alas, it was not to be.
So it appears the Oculus Rift is really happening. A buddy of mine went in early on the Kickstarter, and his will supposedly be in the mail some time this week. In a way the Oculus Rift, or, more precisely, the most recent foray of VR into the mainstream that it embodies, was the reason why I started this blog in the first place. I’m very much looking forward to it (more on that below), but I’m also somewhat worried that the huge level of pre-release excitement in the gaming world might turn into a backlash against VR in general. So I made a video laying out my opinions (see Figure 1, or the embedded video below).
Figure 1: Still from a video describing how head-mounted displays should be used to create convincing virtual worlds.
This article is related to VR in two ways. First, the usual progression of overhyping the capabilities of some new technology and then falling flat on one’s face because not even one’s own developers know what the new technology’s capabilities actually are is something that should be very familiar to anyone working in the VR field.
But here’s the quote that really got my interest (emphasis is mine):
Others recall worrying about the presentation not being live, and thinking people might assume it was fake. Milo worked well, they say, but filming someone playing produced an optical illusion where it looked like Milo was staring at the audience rather than the player. So for the presentation, the team hired an actress to record a version of the sequence that would look normal on camera, then had her pretend to play along with the recording. … “We brought [Claire] in fairly late, probably in the last two or three weeks before E3, because we couldn’t get it to [look right]” says a Milo team member. “And we said, ‘We can’t do this. We’re gonna have to make a video.’ So she acted to a video. “Was that obvious to you?” Following Molyneux’s presentation, fans picked apart the video, noting that it looked fake in certain places.
Gee, sounds familiar? This is, of course, the exact problem posed by filming a holographic display, and a person inside interacting with it. In a holographic display, the images on the screens are generated for the precise point of view of the person using it, not the camera. This means it looks wrong when filmed straight up. If, on the other hand, it’s filmed so it looks right on camera, then the person inside will have a very hard time using it properly. Catch 22.
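To make the viewer/camera mismatch concrete, here is a minimal sketch of the geometry involved. A head-tracked display computes an off-axis (asymmetric) view frustum from the tracked eye position relative to the fixed physical screen; the function and the simple axis-aligned screen below are illustrative assumptions, not any particular system’s API.

```python
# Why a head-tracked ("holographic") display only looks correct from one
# point of view: the view frustum is computed from the actual eye position
# relative to the physical screen, so any other viewpoint (e.g., a camera)
# sees a distorted image.

def off_axis_frustum(eye, screen_lower_left, screen_upper_right, near):
    """Near-plane frustum extents (left, right, bottom, top) for a screen
    lying in the z=0 plane, viewed from eye = (x, y, z) with z > 0."""
    ex, ey, ez = eye
    llx, lly = screen_lower_left
    urx, ury = screen_upper_right
    scale = near / ez  # project the screen edges onto the near plane
    left = (llx - ex) * scale
    right = (urx - ex) * scale
    bottom = (lly - ey) * scale
    top = (ury - ey) * scale
    return (left, right, bottom, top)

# The person in front of the screen and the film camera off to the side get
# different frusta, so footage shot from the camera's position looks wrong
# unless the display is re-targeted to the camera (and then it looks wrong
# to the person using it).
viewer = off_axis_frustum((0.0, 0.0, 0.5), (-0.5, -0.3), (0.5, 0.3), 0.1)
camera = off_axis_frustum((0.3, 0.1, 1.0), (-0.5, -0.3), (0.5, 0.3), 0.1)
```

These four extents are exactly what an asymmetric projection call (e.g., OpenGL’s glFrustum) takes; the point is simply that they depend on the eye position, so there is no single rendering that is correct for both the user and the camera.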
With the “Milo” demo, the problem was similar. Because the game was set up to interact with whoever was watching it, it ended up interacting with the camera, so to speak, instead of with the player. Now, if the Milo software had been set up with the level of flexibility of proper VR software, it would have been an easy fix to adapt the character’s gaze direction etc. to a filming setting, but since game software in the past never had to deal with this kind of non-rigid environment, it typically ends up fully vertically integrated, and making this tiny change would probably have taken months of work (that’s kind of what I meant when I said “not even one’s own developers know what the new technology’s capabilities actually are” above). Am I saying that Milo failed because of the demo video? No. But I don’t think it helped, either.
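The “easy fix” described above amounts to making the gaze target a parameter instead of hard-wiring it to the sensed viewer. A hedged illustration, with entirely hypothetical names (this is emphatically not how Milo was actually implemented):

```python
# If the character's gaze is computed from a configurable target, switching
# between "look at the player" and "look at the film camera" for a staged
# demo is a one-line change. All positions and names here are made up.

import math

def gaze_direction(head_pos, target_pos):
    """Unit vector from the character's head toward the chosen gaze target."""
    d = [t - h for h, t in zip(head_pos, target_pos)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

player_head = (0.0, 1.7, 2.0)   # where the sensor sees the player
film_camera = (1.5, 1.5, 3.0)   # where the TV crew put their camera
char_head = (0.0, 1.2, 0.0)     # the virtual character's head

look_at_player = gaze_direction(char_head, player_head)
look_at_camera = gaze_direction(char_head, film_camera)  # "demo mode"
```

In a vertically integrated game engine, the equivalent of `player_head` is typically baked in at many layers, which is why a change this conceptually trivial can take months.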
The take-home message here is that mainstream games are slowly converging towards approaches that have been embodied in proper VR software for a long time now, without really noticing it, and are repeating old mistakes. The Oculus Rift will really bring that out front and center. And I am really hoping it won’t fall flat on its face simply because software developers didn’t do their homework.
I went to the Sacramento Hacker Lab last night, to see a presentation by Intel about their soon-to-be-released “perceptual computing” hardware and software. Basically, this is Intel’s answer to the Kinect: a combined color and depth camera with noise- and echo-cancelling microphones, and an integrated SDK giving access to derived head tracking, finger tracking, and voice recording data.
Figure 1: What perceptual computing might look like at some point in the future, according to the overactive imaginations of Intel marketing people. Original image name: “Security Force Field.jpg” Oh, sure.
So I decided a few days ago to dust off an old toy application (I showed it last in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, which supports bi-manual 6-DOF interaction, a pretty ideal setup for this sort of thing:
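The core of that kind of virtual clay modeler can be sketched in a few lines: the model is a scalar density grid, each 6-DOF brush adds or removes density with a radial falloff, and the surface shown to the user is the isosurface at some threshold (extracted, e.g., by marching cubes, omitted here). Grid size, falloff, and threshold below are illustrative choices, not the actual application’s parameters.

```python
# Minimal "virtual clay" sketch: a density grid plus a spherical brush.
# One Hydra handle could drive an additive brush, the other a carving brush.

import math

N = 32
grid = [[[0.0] * N for _ in range(N)] for _ in range(N)]

def apply_brush(grid, center, radius, strength):
    """Add (strength > 0) or carve (strength < 0) density inside a sphere."""
    cx, cy, cz = center
    r2 = radius * radius
    for x in range(max(0, int(cx - radius)), min(N, int(cx + radius) + 1)):
        for y in range(max(0, int(cy - radius)), min(N, int(cy + radius) + 1)):
            for z in range(max(0, int(cz - radius)), min(N, int(cz + radius) + 1)):
                d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                if d2 < r2:
                    w = 1.0 - math.sqrt(d2) / radius  # falloff toward brush edge
                    v = grid[x][y][z] + strength * w
                    grid[x][y][z] = min(1.0, max(0.0, v))  # clamp density

apply_brush(grid, (16, 16, 16), 6, 1.0)   # deposit a blob of clay
apply_brush(grid, (16, 16, 13), 4, -1.0)  # carve a dent with the other hand

# Voxels above the isovalue lie inside the surface the modeler would extract.
ISO = 0.5
inside = sum(grid[x][y][z] > ISO
             for x in range(N) for y in range(N) for z in range(N))
```

Real-time behavior comes from only re-extracting the isosurface in the grid cells a brush stroke actually touched, rather than over the whole volume.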