It’s a pretty big deal. Virtual Reality, especially its head-mounted implementation, is quite good at overriding its users’ sense of place and space. “Presence,” or the feeling of bodily being in a place where one knows one is not, is a powerful and compelling experience, but it has a downside: users experiencing it lose touch with their real physical environments. Exhibit A: Figure 1 (granted, there are some concerns that the following video clip was staged, but let’s pretend it’s for reals).
To prevent this kind of thing from happening — at least in most cases — Valve implemented a system called “Chaperone” into the SteamVR run-time framework that drives their and HTC’s Vive VR headset (and potentially other headsets, through Valve’s OpenVR layer).
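For the curious, here is a rough idea of what Chaperone looks like from an application’s point of view. This is only a minimal sketch using OpenVR’s C++ API as I understand it (the IVRChaperone interface and its GetPlayAreaSize call), not anything from SteamVR’s internals, and interface details may differ between OpenVR versions:

```cpp
// Minimal sketch: query the Chaperone play area through OpenVR.
// Assumes the openvr.h header from Valve's OpenVR SDK; not part of Vrui.
#include <cstdio>
#include <openvr.h>

int main(void)
{
    vr::EVRInitError error = vr::VRInitError_None;
    vr::VR_Init(&error, vr::VRApplication_Background);
    if(error != vr::VRInitError_None)
    {
        std::printf("Unable to initialize OpenVR\n");
        return 1;
    }

    // The Chaperone interface exposes the user-calibrated play area:
    vr::IVRChaperone* chaperone = vr::VRChaperone();
    float sizeX = 0.0f, sizeZ = 0.0f;
    if(chaperone != nullptr && chaperone->GetPlayAreaSize(&sizeX, &sizeZ))
        std::printf("Play area: %.2fm x %.2fm\n", sizeX, sizeZ);
    else
        std::printf("Chaperone bounds not (yet) calibrated\n");

    vr::VR_Shutdown();
    return 0;
}
```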
I wasn’t able to talk about this before, but now I guess the cat’s out of the bag. About two years ago, we helped a team of archaeologists and filmmakers to visualize a very large high-resolution aerial LiDAR scan of a chunk of dense Honduran rain forest in the CAVE. Early analyses of the scan had found evidence of ruins hidden under the foliage, and using LiDAR Viewer in the CAVE, we were able to get a closer look. The team recently mounted an expedition, and found untouched remains of not one, but two lost cities in the jungle. Read more about it at National Geographic and The Guardian. I want to say something cool and Indiana Jones-like right now, but I won’t.
Figure 1: A “were-jaguar” effigy, likely representing a combination of a human and spirit animal, is part of a still-buried ceremonial seat, or metate, one of many artifacts discovered in a cache in ruins deep in the Honduran jungle. Photograph by Dave Yoder, National Geographic. Full-resolution image at National Geographic.
We had a couple of visitors from Intel this morning who wanted to see how we use the CAVE to visualize and analyze Big Data™. But I also wanted to show them some aspects of our 3D video / remote collaboration / tele-presence work. Since I had just recently implemented a new multi-camera calibration procedure for depth cameras (more on that in a future post), and the alignment between the three Kinects in the IDAV VR lab’s capture space is now better than it has ever been (including my previous 3D Video Capture With Three Kinects video), I figured I’d try something I hadn’t done before, namely remotely interacting with myself (see Figure 1).
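As a teaser for that future calibration post: each depth camera delivers 3D points in its own local frame, and the calibration yields one rigid transformation per camera that maps those points into a shared capture frame, so the three point clouds line up. Here is a toy sketch of that merging step (hypothetical structures and names, not the actual Kinect package code):

```cpp
// Rough sketch of merging point clouds from several depth cameras into one
// shared frame, given each camera's calibrated rigid pose (rotation + translation).
// Illustrative only; the structures and names are made up for this example.
#include <vector>
#include <cstddef>

struct Point3 { float x, y, z; };

struct RigidPose
{
    float r[3][3]; // rotation matrix, camera frame -> capture frame
    float t[3];    // translation, camera frame -> capture frame

    Point3 apply(const Point3& p) const
    {
        return Point3{
            r[0][0]*p.x + r[0][1]*p.y + r[0][2]*p.z + t[0],
            r[1][0]*p.x + r[1][1]*p.y + r[1][2]*p.z + t[1],
            r[2][0]*p.x + r[2][1]*p.y + r[2][2]*p.z + t[2]};
    }
};

// Merge per-camera point clouds into a single cloud in the shared capture frame:
std::vector<Point3> mergeClouds(const std::vector<std::vector<Point3> >& clouds,
                                const std::vector<RigidPose>& poses)
{
    std::vector<Point3> merged;
    for(std::size_t cam = 0; cam < clouds.size(); ++cam)
        for(std::size_t i = 0; i < clouds[cam].size(); ++i)
            merged.push_back(poses[cam].apply(clouds[cam][i]));
    return merged;
}
```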
Figure 1: How to properly pat yourself on the back using time-delayed 3D video.
There have been several discussions on the Oculus subreddit recently about how to integrate 2D desktops or 2D applications with 3D VR environments; for example, how to check your Facebook status while playing a game in the Oculus Rift without having to take off the headset.
This is just one aspect of the larger issue of integrating 2D and 3D applications, and it reminded me that it was about time to revive the old VR VNC client that Ed Puckett, an external contractor, had developed for the CAVE a long time ago. There have been several important changes in Vrui since the VNC client was written, especially in how Vrui handles text input, which means that a completely rewritten client could use the new Vrui APIs instead of having to implement everything ad-hoc.
Here is a video showing the new VNC client in action, embedded into LiDAR Viewer and displayed in a desktop VR environment using an Oculus Rift HMD, mouse and keyboard, and a Razer Hydra 6-DOF input device:
Update: Please tear your eyes away from the blue lady and also read this follow-up post. It turns out things are worse than I thought. Now back to your regularly scheduled entertainment.
I somehow missed this when it was hot a few weeks or so ago, but I just found out about an interesting Kickstarter project: HOLOVISION — A Life Size Hologram. Don’t bother clicking the link, the project page has been taken down following a DMCA complaint and might not ever be up again.
Why do I think it’s worth talking about? Because, while there is an actual design for something called Holovision, and that design is theoretically feasible and possibly even practical, the product the public imagines based on the Kickstarter advertising is decidedly not. The concept imagery associated with the Kickstarter project presents this feasible technology in a way that (intentionally?) taps into people’s misconceptions about holograms (and I’m talking about the “real” kind of holograms, the ones involving lasers and mirrors and beam splitters). In other words, it might not be a scam per se, and it might even be unintentional, but it is definitely creating a false impression that might lead to very disappointed backers.
I just released version 3.0 of the Vrui VR toolkit. One of the major new features is native support for the Oculus Rift head-mounted display, including its low-latency inertial 3-DOF (orientation-only) tracker and post-rendering lens distortion correction. So I thought it was time for the first (really?) Vrui post in this venue.
What is Vrui, and why should I care?
Glad you’re asking. In a nutshell, Vrui (pronounced to start with vroom, and rhyme with gooey) is a high-level toolkit to develop highly interactive applications aimed at holographic (or fully-immersive, or VR, or whatever you want to call them) display environments. A large selection of videos showing many Vrui applications running in a wide variety of environments can be found on my YouTube channel. Vrui abstracts away the concrete display and input hardware of whatever environment an application runs in, from a desktop with mouse and keyboard to an HMD or a CAVE with 6-DOF input devices. To you as a developer, this means you write your application once, and users can run it in any kind of environment without you having to worry about it. If new input or output hardware comes along, it’s Vrui’s responsibility to support it, not yours.
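To give a flavor of what that looks like in practice, here is roughly what a minimal Vrui application skeleton looks like: a class derived from Vrui::Application with a per-frame callback and a display callback, while the toolkit decides where and how to render. Exact signatures may differ between Vrui versions, so treat this as a sketch from memory and refer to the example programs shipped with the toolkit:

```cpp
// Bare-bones Vrui application sketch (details approximate; see the Vrui
// example programs for the authoritative version).
#include <GL/gl.h>
#include <Vrui/Application.h>

class HelloVrui : public Vrui::Application
{
public:
    HelloVrui(int& argc, char**& argv)
        : Vrui::Application(argc, argv)
    {
    }

    virtual void frame(void)
    {
        // Per-frame application logic (animation, input processing) goes here.
    }

    virtual void display(GLContextData& contextData) const
    {
        // Draw the scene using plain OpenGL; Vrui has already set up the
        // projection and modelview matrices for the current display, be it
        // a desktop window, an HMD, or a CAVE screen.
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, 0.0f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, 0.0f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f, 1.0f, 0.0f);
        glEnd();
    }
};

int main(int argc, char* argv[])
{
    HelloVrui app(argc, argv);
    app.run();
    return 0;
}
```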
I just found this old photo on one of my cameras, and it’s too good not to share. It shows former master’s student Peter Gold (now in the PhD program at UT Austin) working with a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture after the April 2010 earthquake (here is the full-resolution picture, for the curious).
Figure 1: Former master’s student Peter Gold in the CAVE, analyzing a high-resolution aerial LiDAR scan of the El Mayor-Cucapah fault rupture.
This article is related to VR in two ways. First, the usual progression of overhyping the capabilities of some new technology and then falling flat on one’s face, because not even one’s own developers know what the new technology’s capabilities actually are, should be very familiar to anyone working in the VR field.
But here’s the quote that really got my interest (emphasis is mine):
Others recall worrying about the presentation not being live, and thinking people might assume it was fake. Milo worked well, they say, but filming someone playing produced an optical illusion where it looked like Milo was staring at the audience rather than the player. So for the presentation, the team hired an actress to record a version of the sequence that would look normal on camera, then had her pretend to play along with the recording. … “We brought [Claire] in fairly late, probably in the last two or three weeks before E3, because we couldn’t get it to [look right],” says a Milo team member. “And we said, ‘We can’t do this. We’re gonna have to make a video.’ So she acted to a video.” Was that obvious to you? Following Molyneux’s presentation, fans picked apart the video, noting that it looked fake in certain places.
Gee, sounds familiar? This is, of course, the exact problem posed by filming a holographic display, and a person inside interacting with it. In a holographic display, the images on the screens are generated for the precise point of view of the person using it, not the camera. This means it looks wrong when filmed straight up. If, on the other hand, it’s filmed so it looks right on camera, then the person inside will have a very hard time using it properly. Catch-22.
With the “Milo” demo, the problem was similar. Because the game was set up to interact with whoever was watching it, it ended up interacting with the camera, so to speak, instead of with the player. Now, if the Milo software had been built with the level of flexibility of proper VR software, adapting the character’s gaze direction etc. to a filming setting would have been an easy fix. But since game software in the past never had to deal with this kind of non-rigid environment, it typically ends up fully vertically integrated, and making this tiny change would probably have taken months of work (that’s kind of what I meant when I said “not even one’s own developers know what the new technology’s capabilities actually are” above). Am I saying that Milo failed because of the demo video? No. But I don’t think it helped, either.
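For the record, the reason this is an “easy fix” in properly structured VR software is that head-tracked rendering is parameterized by an arbitrary eye position anyway; rendering for the film camera instead of the tracked viewer just means feeding in a different point. Here is a sketch of the standard generalized off-axis projection for a single flat screen (textbook math in the style of Kooima’s formulation, not Milo’s or Vrui’s actual code); the complete view transform would additionally rotate into the screen’s basis and translate the eye to the origin, which is omitted here:

```cpp
// Sketch: off-axis (generalized) perspective projection for one flat screen,
// driven by an arbitrary eye position. Whether that eye position comes from a
// head tracker or from a fixed film camera is the only difference between
// "looks right to the user" and "looks right on camera".
#include <cmath>

struct Vec3
{
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return Vec3{x-o.x, y-o.y, z-o.z}; }
};

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return Vec3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(dot(v, v));
    return Vec3{v.x/len, v.y/len, v.z/len};
}

// Screen corners in world space: pa = lower-left, pb = lower-right, pc = upper-left.
// pe is the eye position (tracked head, or the film camera's nodal point).
// Fills a column-major 4x4 projection matrix suitable for OpenGL.
void offAxisProjection(const Vec3& pa, const Vec3& pb, const Vec3& pc,
                       const Vec3& pe, float nearZ, float farZ, float m[16])
{
    Vec3 vr = normalize(pb - pa);        // screen right axis
    Vec3 vu = normalize(pc - pa);        // screen up axis
    Vec3 vn = normalize(cross(vr, vu));  // screen normal, pointing toward the eye

    Vec3 va = pa - pe, vb = pb - pe, vc = pc - pe; // eye to screen corners
    float d = -dot(va, vn);                        // eye-to-screen-plane distance

    // Frustum extents projected onto the near plane:
    float l = dot(vr, va) * nearZ / d;
    float r = dot(vr, vb) * nearZ / d;
    float b = dot(vu, va) * nearZ / d;
    float t = dot(vu, vc) * nearZ / d;

    // Standard glFrustum-style matrix, column-major:
    m[0] = 2.0f*nearZ/(r-l); m[4] = 0.0f;             m[8]  = (r+l)/(r-l);                m[12] = 0.0f;
    m[1] = 0.0f;             m[5] = 2.0f*nearZ/(t-b); m[9]  = (t+b)/(t-b);                m[13] = 0.0f;
    m[2] = 0.0f;             m[6] = 0.0f;             m[10] = -(farZ+nearZ)/(farZ-nearZ); m[14] = -2.0f*farZ*nearZ/(farZ-nearZ);
    m[3] = 0.0f;             m[7] = 0.0f;             m[11] = -1.0f;                      m[15] = 0.0f;
}
```

The point is that the eye position pe is just another input; swap the tracker-reported head position for the camera’s position and the same code produces footage that looks correct on film, at the cost of looking wrong to the person inside.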
The take-home message here is that mainstream games are slowly converging towards approaches that have been embodied in proper VR software for a long time now, without really noticing it, and are repeating old mistakes. The Oculus Rift will really bring that out front and center. And I am really hoping it won’t fall flat on its face simply because software developers didn’t do their homework.
Figure 1: Dawn Sumner, member of the NASA Curiosity Mars rover mission’s science team, interacting with a life-size 3D model of the rover in the UC Davis KeckCAVES holographic display environment. Still image taken from “The surface of Mars.”