Remote Collaborative Immersive Visualization

I spent the last couple of days at the first annual meeting of “The Higher Education Campus Alliance for Advanced Visualization” (THE CAAV), where folks managing or affiliated with advanced visualization centers such as KeckCAVES came together to share their experiences. During the talks, I saw slides showing Vrui’s Collaboration Infrastructure pop up here and there, and generally remote collaboration was a big topic of discussion. During breaks, I showed several people the following video on my smartphone (yes, I finally joined the 21st century), and afterwards realized that I had never written a post about this work, as most of it predates this blog. So here we go.

Continue reading

On the road for VR: Silicon Valley Virtual Reality Conference & Expo

Yesterday, I attended the second annual Silicon Valley Virtual Reality Conference & Expo in San Jose’s convention center. This year’s event was more than three times bigger than last year’s, with around 1,400 attendees and a large number of exhibitors.

Unfortunately, I did not have as much time as I would have liked to visit and try all the exhibits. There was a printing problem at the registration desk in the morning, and as a result the keynote and first panel were pushed back by 45 minutes, overlapping the expo time; additionally, I had to spend some time preparing for and participating in my own panel on “VR Input” from 3pm-4pm.

The panel was great: we had Richard Marks from Sony (Playstation Move, Project Morpheus), Danny Woodall from Sixense (STEM), Yasser Malaika from Valve (HTC Vive, Lighthouse), Tristan Dai from Noitom (Perception Neuron), and Jason Jerald as moderator. There was lively discussion of questions posed by Jason and the audience. Here’s a recording of the entire panel:

One correction: when I said I had been following Tactical Haptics’ progress for 2.5 years, I meant to say 1.5 years, since the first SVVR meet-up I attended. Brainfart. Continue reading

The effectiveness of minimalist avatars

I was reminded today of a recent thread on the Oculus subreddit, where a redditor relayed his odd experience remotely viewing his father driving a simulated racecar:

“I decided to spectate a race he was in. I then discovered I could watch him race from his passenger seat. in VR. in real time. I can’t even begin to explain the emotions i was feeling sitting in his car, in game, watching him race. I was in the car with him. … I looked over to ‘him’ and could see all his steering movements, exactly what he was doing. I pictured his intense face as he was pushing for 1st.”

I don’t know if this effect has a name, or even needs one, but it parallels something we’ve observed through our work with Immersive 3D Telepresence:
Continue reading

Messing around with 3D video

We had a couple of visitors from Intel this morning, who wanted to see how we use the CAVE to visualize and analyze Big Data™. But I also wanted to show them some aspects of our 3D video / remote collaboration / tele-presence work, and since I had just recently implemented a new multi-camera calibration procedure for depth cameras (more on that in a future post), and the alignment between the three Kinects in the IDAV VR lab’s capture space is now better than it has ever been (better even than in my previous 3D Video Capture With Three Kinects video), I figured I’d try something I hadn’t done before, namely remotely interacting with myself (see Figure 1).

Figure 1: How to properly pat yourself on the back using time-delayed 3D video.

Continue reading

On the road for VR: Silicon Valley Virtual Reality Conference & Expo

I just got back from the Silicon Valley Virtual Reality Conference & Expo in the awesome Computer History Museum in Mountain View, just across the street from Google HQ. There were talks, there were round tables, there were panels (I was on a panel on non-game applications enabled by consumer VR, livestream archive here), but most importantly, there was an expo for consumer VR hardware and software. Without further ado, here are my early reports on what I saw and/or tried.

Figure 1: Main auditorium during the “60 second” lightning pitches.

Continue reading

3D Video Capture with Three Kinects

I just moved all my Kinects back to my lab after my foray into experimental mixed-reality theater a week ago, and rebuilt my 3D video capture space / tele-presence site consisting of an Oculus Rift head-mounted display and three Kinects. Now that I have a new extrinsic calibration procedure to align multiple Kinects to each other (more on that soon), and managed to finally get a really nice alignment, I figured it was time to record a short video showing what multi-camera 3D video looks like using current-generation technology (no, I don’t have any Kinects Mark II yet). See Figure 1 for a still from the video, and the whole thing after the jump.
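
For readers wondering what “aligning multiple Kinects to each other” amounts to in practice, here is a minimal sketch, entirely my own illustration rather than the actual Vrui/Kinect code: once extrinsic calibration has produced one rigid transformation per camera, merging the live streams is just a matter of transforming every camera’s 3D points into one shared frame.

```cpp
// Illustrative only: fuse per-camera 3D points into a common frame using the
// rigid (rotation + translation) transform that extrinsic calibration yields
// for each camera. Names and types are assumptions, not the Kinect package API.
#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

struct RigidTransform  // result of extrinsic calibration for one camera
{
    std::array<std::array<double, 3>, 3> R;  // rotation, row-major
    Vec3 t;                                  // translation

    Vec3 apply(const Vec3& p) const
    {
        Vec3 q;
        for (int i = 0; i < 3; ++i)
            q[i] = R[i][0]*p[0] + R[i][1]*p[1] + R[i][2]*p[2] + t[i];
        return q;
    }
};

// Merge the point clouds of all cameras into one cloud in the shared world frame.
std::vector<Vec3> mergeClouds(const std::vector<RigidTransform>& extrinsics,
                              const std::vector<std::vector<Vec3>>& perCameraPoints)
{
    std::vector<Vec3> world;
    for (std::size_t cam = 0; cam < perCameraPoints.size(); ++cam)
        for (const Vec3& p : perCameraPoints[cam])
            world.push_back(extrinsics[cam].apply(p));
    return world;
}
```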

Figure 1: A still frame from the video, showing the user’s real-time “holographic” avatar from the outside, providing a literal kind of out-of-body experience to the user.

Continue reading

VR Movies

There has been a lot of discussion about VR movies in the blogosphere and forosphere (just to pick two random examples), and even on Wired, recently, with the tenor being that VR movies will be the killer application for VR. There are even downloadable prototypes and start-up companies.

But will VR movies actually ever work?

This is a tricky question, and we have to be precise. So let’s first define some terms.

When talking about “VR movies,” people are generally referring to live-action movies, i.e., the kind that is captured with physical cameras and shows real people (well, actors, anyway) and environments. But for the sake of this discussion, live-action and pre-rendered computer-generated movies are identical.

We’ll also have to define what we mean by “work.” There are several things that people might expect from “VR movies,” but not everybody might expect the same things. The first big component, probably expected by all, is panoramic view, meaning that a VR movie does not only show a small section of the viewer’s field of view, but the entire sphere surrounding the viewer — primarily so that viewers wearing a head-mounted display can freely look around. Most people refer to this as “360° movies,” but since we’re all thinking 3D now instead of 2D, let’s use the proper 3D term and call them “4π sr movies” (sr: steradian), or “full solid angle movies” if that’s easier.
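
For anyone wondering where the 4π comes from (a small aside of mine, not from the original post): adding up solid angle over all viewing directions, i.e., over the whole sphere, gives

\[ \Omega = \int_{0}^{2\pi}\!\!\int_{0}^{\pi} \sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi = 4\pi \approx 12.57\ \mathrm{sr}, \]

of which a regular movie screen, seen from a typical seat, covers only a small fraction.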

The second component, at least as important, is “3D,” which is of course a very fuzzy term itself. What “normal” people mean by 3D is that there is some depth to the movie, in other words, that different objects in the movie appear at different distances from the viewer, just like in reality. And here is where expectations will vary widely. Today’s “3D” movies (let’s call them “stereo movies” to be precise) treat depth as an independent dimension from width and height, due to the realities of stereo filming and projection. To present filmed objects at true depth and with undistorted proportions, every single viewer would have to have the same interpupillary distance, all movie screens would have to be the exact same size, and all viewers would have to sit in the same position relative to the screen. This previous post and video talk in great detail about what happens when that’s not the case (they are about head-mounted displays, but the principle and effects are the same). As a result, most viewers today would probably not complain about the depth in a VR movie being off and objects being distorted, but — and it’s a big but — as VR becomes mainstream, and more people experience proper VR, where objects are at 1:1 scale and undistorted, expectations will rise. Let me posit that in the long term, audiences will not accept VR movies with distorted depth.
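
To put a number on that claim, here is a quick back-of-the-envelope model (my own simplification, not taken from the post linked above): place the viewer’s eyes at separation e at a distance D from the screen, and show a point whose left and right images lie a horizontal distance d apart on the screen. Intersecting the two viewing rays puts the perceived point at a distance

\[ z = \frac{e\,D}{e - d} \]

from the viewer: d = 0 lands the point on the screen, d approaching e pushes it to infinity, and negative (crossed) d pulls it in front of the screen. The filmmaker bakes d into the movie once, but e and D differ from viewer to viewer and from theater to theater, so the depth, and with it the proportions, that each viewer reconstructs differ too.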

Continue reading

Installing and running first Vrui applications

In my detailed how-to guide on installing and configuring Vrui for Oculus Rift and Razer Hydra, I did not talk about installing any actual applications (because I hadn’t released Vrui-3.0-compatible packages yet). Those are out now, so here we go.

Kinect

If you happen to own a Kinect for Xbox (Kinect for Windows won’t work), you might want to install the Kinect 3D Video package early on. It can capture 3D (holographic, not stereoscopic) video from one or more Kinects, and either play it back as freely-manipulable virtual holograms or, after calibration, produce in-system overlays of the real world (or both). If you already have Vrui up and running, installation is trivial.

Continue reading

Kinect factory calibration

Boy, is my face red. I just uploaded two videos about intrinsic Kinect calibration to YouTube, and wrote two blog posts about intrinsic and extrinsic calibration, respectively, and now I find out that the factory calibration data I’ve always suspected was stored in the Kinect’s non-volatile RAM has actually been reverse-engineered. With the official Microsoft SDK out, that should definitely not have been a surprise. Oh, well, my excuse is that I’ve been focusing on other things lately.

So, how good is it? A bit too early to tell, because some bits and pieces are still not understood, but here’s what I know already. As I mentioned in the post on intrinsic calibration, there are several required pieces of calibration data:

  1. 2D lens distortion correction for the color camera.
  2. 2D lens distortion correction for the virtual depth camera.
  3. Non-linear depth correction (caused by IR camera lens distortion) for the virtual depth camera.
  4. Conversion formula from (depth-corrected) raw disparity values (what’s in the Kinect’s depth frames) to camera-space Z values.
  5. Unprojection matrix for the virtual depth camera, to map depth pixels out into camera-aligned 3D space.
  6. Projection matrix to map lens-corrected color pixels onto the unprojected depth image.

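To make the roles of these pieces a bit more concrete, here is a minimal sketch of how a single raw depth sample might travel through the pipeline: disparity correction, conversion to metric depth, unprojection into camera space, and projection into the color image. The structure names, member names, and the exact formulas are my own illustrative assumptions, not the actual Kinect package code.

```cpp
// Hypothetical sketch only: names and formulas are assumptions for illustration,
// not the actual Kinect package API.
#include <array>

using Vec3 = std::array<double, 3>;
using Mat4 = std::array<std::array<double, 4>, 4>;  // row-major homogeneous matrix

// Apply a 4x4 homogeneous matrix to the point (x, y, z, 1) and dehomogenize.
static Vec3 transform(const Mat4& m, double x, double y, double z)
{
    double h[4];
    for (int i = 0; i < 4; ++i)
        h[i] = m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3];
    return {h[0]/h[3], h[1]/h[3], h[2]/h[3]};
}

struct KinectCalibration
{
    double dispOffset;   // (3) per-pixel depth correction, reduced to a constant here
    double dispA, dispB; // (4) raw disparity d -> camera-space depth, assumed z = 1/(a*d + b)
    Mat4 depthUnproject; // (5) unprojection: depth pixel (u, v, z) -> camera space
    Mat4 colorProject;   // (6) projection: camera space -> color image pixel
};                       // (1) and (2), the 2D lens corrections, are omitted for brevity

// Map one raw depth sample to a 3D camera-space point and its color-image position.
void mapDepthPixel(const KinectCalibration& c, int u, int v, double rawDisparity,
                   Vec3& cameraPoint, double& colorU, double& colorV)
{
    double d = rawDisparity + c.dispOffset;                               // step 3
    double z = 1.0/(c.dispA*d + c.dispB);                                 // step 4
    cameraPoint = transform(c.depthUnproject, double(u), double(v), z);   // step 5
    Vec3 colorPixel = transform(c.colorProject, cameraPoint[0],
                                cameraPoint[1], cameraPoint[2]);          // step 6
    colorU = colorPixel[0];
    colorV = colorPixel[1];
}
```
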
Continue reading

VR in the movies

I’m mad at the Onion A.V. Club right now (no, not really, I love those guys). In my post about the Leap Motion Leap I briefly mentioned my one gripe with the way VR is presented in Minority Report, and that I should write a post about it. That evolved into making a post on the larger topic of evaluating how realistic / crazy out there VR depictions in movies are in general, and when I opened the A.V. Club this morning to read my weekly dose of Babylon 5 reviews (oh yes, I am an unapologetic fan), I saw this: The future won’t look like this: 11 unintentionally ridiculous depictions of virtual reality. Curse you, A.V. Club!

At the risk of looking like a lame copycat, I’ll still do it, because the technical angle I had in mind is different from the A.V. Club’s approach, but if you disagree, tell me off in the comments.

Let’s get going, with a completely subjective selection, and in no particular order.

Star Wars, 1977

What? There’s no VR in it! True, but there are “holograms” in it. And because it’s an extremely common misconception, and I get it thrown at me all the time, I need to say it: real holograms don’t work that way! You know the scene I’m referring to:

The thing is that real holograms need to be “supported” by a piece of holographic screen behind them — you can only see the part of the hologram that’s between your eyes and the screen. Holograms are free-standing — just not as free-standing as most people subconsciously assume; holographic projectors such as R2-D2’s here are fiction. It’s important because the argument goes: once we get real-time holograms, we won’t need to build CAVEs anymore. Technically true, yes, but you’d need to build a space enclosed by holographic screens to get the same effect as a CAVE, so basically the same thing. Sorry.
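
To make the “supported by a screen” constraint concrete, here is a small geometric sketch, entirely my own illustration (not from the movie, and not Vrui code): a point of a hologram is only visible if the sight line from your eye through that point passes through the holographic screen.

```cpp
// Illustrative geometry only: checks whether the line of sight from an eye
// to a hologram point passes through the holographic screen rectangle.
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}

struct Screen  // rectangular holographic screen
{
    Vec3 origin;  // one corner
    Vec3 edgeU;   // vector along one edge
    Vec3 edgeV;   // vector along the other (perpendicular) edge
};

// A hologram point can only be seen if the sight line from the eye through the
// point crosses the screen rectangle in front of the viewer; otherwise there is
// no screen "supporting" it and the point simply is not there.
bool hologramPointVisible(const Screen& s, const Vec3& eye, const Vec3& point)
{
    Vec3 dir = sub(point, eye);
    Vec3 normal = cross(s.edgeU, s.edgeV);
    double denom = dot(normal, dir);
    if (std::fabs(denom) < 1e-12)
        return false;                       // sight line parallel to screen plane
    double t = dot(normal, sub(s.origin, eye)) / denom;
    if (t <= 0.0)
        return false;                       // screen is behind the viewer
    Vec3 hit = {eye[0]+t*dir[0], eye[1]+t*dir[1], eye[2]+t*dir[2]};
    Vec3 local = sub(hit, s.origin);
    double u = dot(local, s.edgeU) / dot(s.edgeU, s.edgeU);
    double v = dot(local, s.edgeV) / dot(s.edgeV, s.edgeV);
    return u >= 0.0 && u <= 1.0 && v >= 0.0 && v <= 1.0;
}
```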

Verdict: Fiction!

Disclosure, 1994

But this one’s in the A.V. Club article! True, and I feel bad about copying them even more blatantly. But I have to amend what they’re saying. I have no beef with their evaluation on the ridiculousness scale, but from a technical point of view, VR as depicted in Disclosure, at least in the following scene, exists and is used today:

Let’s see: tracked head-mounted display, tracked data glove, omni-directional treadmill, 3D scanner that captures a real-time 3D image of the user and projects it into the virtual space — I have all that in my lab, minus the treadmill (unfortunately). I’m even working with architecture firms. Walking across a virtual cathedral to access files, and a bottomless chasm in the middle of your database server for no reason? Yeah, that’s silly.

Verdict: Nailed it!

Minority Report, 2002

This one’s interesting. There are two VR bits in it: the famous “maestro-style” free-hand GUI, and the 3D home movie. Let’s tackle the simple one first, the 3D home movie:

The 3D video itself looks exactly like the kind of video you can capture with a 3D camera like the Kinect, down to the fringe triangle artifacts (someone on YouTube even made a mash-up between this and my first Kinect video; it’s uncanny). The projection system is another story: at first glance, it’s another completely free-standing hologram (fiction!), but a bit of fanwank can explain that it was actually a projection onto a 3D multi-viewpoint fog projection display (exists! just not quite as good yet).

Partial verdict: Nailed it!

The part with which I have a gripe is the 3D GUI:

From a technical point of view, we could have built that in 2002: tracked data gloves (had them in my lab in 1998, albeit with wires), projection onto a translucent screen (nothing to it), gesture interface, we could have rigged up a physical data transfer module (it’s basically a transparent USB stick, right?), etc.

So here’s my gripe: the whole thing makes no sense. Some people take issue with the manual data transfer — why not send the data over the network? — but you could fanwank that as a security issue. No, the problem is: why use a 3D user interface in the first place? Look exactly at what he’s doing. All the data with which he’s interacting are 2D — text, images, movies. All the interactions are 2D: he moves and pinch-zooms, he rotates in the screen plane. Oh, and kind folks who did the annotation? It’s not a “holoscreen” — it only shows 2D images, so it’s simply a “screen.”

There is no free 3D manipulation, so why is he using a free 3D user interface? It’s bad, ergonomically. Holding your hands out like that for precise work over an extended time (more than a few minutes) is painful. The syndrome is called “Gorilla Arm.” The ideal hardware and UI for this type of work is a multi-touch surface device, probably set not vertically, but at an angle like a drafting table. Then your hands and fingers have something to rest on and push against for the interactions, which makes them much easier and less painful.

Why am I harping on this? People are rushing to recreate this interface, now that the hardware is cheaply available, because it looks extremely cool in the movie. It fooled me the first two times I watched it. So people are working hard to build an interface that’s literally painful to use, and people actually trying it will hate it, and the backlash will hurt us all. Please, don’t do it.

Partial verdict: Nailed it technically, but failed ergonomics

Iron Man, 2008

This one I love:

It starts out like the Minority Report GUI, but then it gets good the moment the suit’s 3D model appears over the virtual workbench. I’m wondering if that’s intentional one-upmanship: start out just like the other, and then blow it away.

Anyway, let’s look at the technology: free-standing 3D display above a virtual workbench, hand tracking and gesture interpretation without data gloves. Pushing it, but we have the Kinect, we may soon have the Leap, and we can always imagine that he could be wearing stylish VR goggles in Tony Stark’s inimitable style. Or, alternatively: if what we’re seeing in the movie is a representation of what Tony sees, and not what another person standing in the camera’s place would see, then he might only be seeing the part of the 3D model that’s between him and the workbench screen, which could be auto-stereoscopic; in that case it’s entirely today’s technology.

So with a bit of squinting and allowing for the Hollywood glitz filter, yes, we can build that. As for the interaction: tell me it doesn’t look exactly like this, again accounting for the glitz filter, and me using only one hand (we have a second input device now):

Now you might ask: why am I lauding free-space 3D interactions here, when I decried them in Minority Report? Simple, because here they are used for actual 3D manipulation, where you accept a bit of discomfort because there’s no better alternative. And you’ll also notice that he’s holding his arms in a more comfortable position, not at shoulder height (or only for as long as required to grab an object). That makes a huge difference, and it’s what our users do when they spend long hours in the CAVE.

Verdict: a bit shinier than what we can do today, but overall Nailed it!

Iron Man 2, 2010

Several scenes in this one. The first is the coffee table scene:

Pretty standard multi-touch surface display and interactions. Not really VR, as it’s all 2D, but worth a mention anyway. Verdict: Nailed it!

The workshop walk-through scene:

Similar to the scene from the first Iron Man, this one features completely free-standing 3D imagery, implied to be free-standing holograms, and therefore fiction. But in the context of the movie, it’s entirely possible that his entire workshop is panelled in auto-stereoscopic displays, and that the movie is only showing us what Tony sees. That could be done today, but it’s not close to practical, a huge stretch, and because of the common misconception about holograms, I’ll have to give it a demerit. Add to that the fact that the user interface here is a lot more “do what I mean” than in the first movie’s scene. There, the gestures he performs correspond directly enough to actions on the 3D model that a good 3D UI can explain it, but here it’s over the line. This UI, as depicted, can only work if a strong AI is running it. Since we already know that Tony employs a strong AI as an assistant, that makes sense in the context of the movie, but sadly it’s fiction.

Overall verdict: Fiction!

All right, that’s my list for now. I’m not going to touch The Matrix, The Thirteenth Floor, eXistenZ, et al., because those are obviously pure fiction. But if I forgot anything that deserves mention, because it depicts an internally consistent combination of display hardware and user interface that may or may not exist or be theoretically feasible, please let me know below. I have Netflix.

D’oh, I forgot one, especially embarrassing because I mentioned Babylon 5. How could I!

Babylon 5, And The Sky Full Of Stars, 1994

Can’t find a clip, but here’s the episode recap on the Lurker’s Guide. Synopsis: the station’s commander gets kidnapped and interrogated by being strapped into a virtual reality system, so that the interrogators can mindscrew him and break him more easily. The VR system itself is not thought through enough to be analyzed, except the display bit itself: it’s a retinal projector, shining the image of the virtual 3D world directly into the user’s eyes (only into one eye in the episode, sad oversight). Exists!

The input part of the system, on the other hand, must use some kind of neural interface, because the user (or captive in this case) can move inside the virtual world normally while being strapped into a chair in the real world, so Fiction!

How the interrogator, or the commander’s virtual body, gets mapped into the virtual world is not even addressed, so Didn’t think about it!

Let’s just say I like this episode in spite of the VR stuff, not because of it. It’s just a TV show, after all.