A Follow-up on Eye Tracking

Now this is why I run a blog. In my video and post on the Oculus Rift’s internals, I talked about distortions in 3D perception when the programmed-in camera positions for the left and right stereo views don’t match the current left and right pupil positions, and how a “perfect” HMD would therefore need a built-in eye tracker. That’s still correct, but it turns out that I could have done a much better job approximating proper 3D rendering when there is no eye tracking.

This improvement was pointed out by a commenter on the previous post. TiagoTiago asked if it wouldn’t be better to place the virtual cameras at the centers of the viewer’s eyeballs instead of at the pupils, because then light rays entering the eye straight on would be represented correctly, independently of eye vergence angle. Spoiler alert: he was right. But I was skeptical at first because, after all, that’s just plain wrong. All light rays entering the eye converge at the pupil, and therefore that’s the only correct position for the virtual camera.

Well, that’s true, but if the current pupil position is unknown due to lack of eye tracking, then the only correct thing turns into just another approximation, and who’s to say which approximation is better? My hunch was that the distortion effects from having the cameras in the centers of the eyeballs would be worse, but given how counter-intuitive projection through an HMD’s lens is, I still had to test it. Fortunately, adding an interactive foveating mechanism to my lens simulation application was simple.

Turns out that I was wrong, and that in the presence of a collimating lens, i.e., a lens that is positioned such that the HMD display screen is in the lens’ focal plane, distortion from placing the camera in the center of the eyeball is significantly less pronounced than in my approach. Just don’t ask me to explain it for now — it’s due to the “special properties of the collimated light.” 🙂
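To make the difference concrete, here is a minimal sketch of the two camera placements. This is not the code from my lens simulator; the coordinate conventions, the names, and the 12.5 mm eyeball radius are just illustrative assumptions:

```cpp
#include <iostream>

// Assumed head frame: origin midway between the eyes, +x to the right,
// +y up, -z along the resting gaze direction.
struct Vec3 { double x, y, z; };

enum class EyeModel { Pupil, EyeballCenter };

// Virtual camera position for one eye, in head coordinates.
// eyeSign: -1 for the left eye, +1 for the right; ipd in meters.
Vec3 eyeCameraPos(double ipd, int eyeSign, EyeModel model)
	{
	// The resting pupil sits half an IPD to either side of the origin:
	Vec3 pupil{eyeSign*0.5*ipd, 0.0, 0.0};
	if(model == EyeModel::Pupil)
		return pupil;
	
	// Eyeball-center model: back up from the resting pupil along the
	// gaze direction by roughly one eyeball radius. Unlike the pupil,
	// this point does not move when the eye rotates to verge:
	const double eyeballRadius = 0.0125; // ~12.5 mm, illustrative value
	return Vec3{pupil.x, pupil.y, pupil.z + eyeballRadius};
	}

int main(void)
	{
	Vec3 cam = eyeCameraPos(0.064, -1, EyeModel::EyeballCenter);
	std::cout << "left camera: (" << cam.x << ", " << cam.y << ", " << cam.z << ")\n";
	return 0;
	}
```

The appeal of the second placement is exactly what TiagoTiago pointed out: the eyeball center is the only eye-fixed point that stays put as the eye rotates, and therefore the only point we actually know without an eye tracker.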

A Closer Look at the Oculus Rift

I have to make a confession: I’ve been playing with the Oculus Rift HMD for almost a year now, and have been supporting it in Vrui for a long time as well, but I haven’t really spent much time using it in earnest. I’m keenly aware of the importance of calibrating head-mounted displays, of course, and noticed right away that the scale of virtual objects seen through the Rift was way off for me, but I never got around to doing anything about it. Until now, that is.


The Holovision Kickstarter “scam”

Update: Please tear your eyes away from the blue lady and also read this follow-up post. It turns out things are worse than I thought. Now back to your regularly scheduled entertainment.

I somehow missed this when it was hot a few weeks ago, but I just found out about an interesting Kickstarter project: HOLOVISION — A Life Size Hologram. Don’t bother clicking the link: the project page has been taken down following a DMCA complaint and might not ever be up again.

Why do I think it’s worth talking about? Because, while there is an actual design for something called Holovision, and that design is theoretically feasible, and possibly even practical, the product as the public imagines it from the Kickstarter advertising is decidedly not. The concept imagery associated with the Kickstarter project presents this feasible technology in a way that (intentionally?) taps into people’s misconceptions about holograms (and I’m talking about the “real” kind of holograms, those involving lasers and mirrors and beam splitters). In other words, it might not be a scam per se, and it might even be unintentional, but it is definitely creating a false impression that might lead to very disappointed backers.

Figure 1: This image is a blatant lie.


This is a post about Vrui

I just released version 3.0 of the Vrui VR toolkit. One of the major new features is native support for the Oculus Rift head-mounted display, including its low-latency inertial 3-DOF (orientation-only) tracker and post-rendering lens distortion correction. So I thought it was time for the first (really?) Vrui post in this venue.

What is Vrui, and why should I care?

Glad you’re asking. In a nutshell, Vrui (pronounced to start with vroom and rhyme with gooey) is a high-level toolkit for developing highly interactive applications aimed at holographic (or fully immersive, or VR, or whatever you want to call them) display environments. A large selection of videos showing many Vrui applications running in a wide variety of environments can be found on my YouTube channel. To you as a developer, this means you write your application once, and users can run it in any kind of environment without you having to worry about it. If new input or output hardware comes along, it’s Vrui’s responsibility to support it, not yours.
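To give a flavor, here is roughly what a minimal Vrui application looks like, paraphrased from the example programs that ship with Vrui; the exact headers and the Application constructor signature have changed between Vrui versions, so treat this as a sketch rather than gospel:

```cpp
#include <GL/gl.h>
#include <Vrui/Application.h>

class HelloVrui:public Vrui::Application
	{
	public:
	HelloVrui(int& argc,char**& argv)
		:Vrui::Application(argc,argv)
		{
		}
	virtual void display(GLContextData& contextData) const
		{
		/* Draw a single triangle; Vrui takes care of stereo, head
		   tracking, and per-window projection for whatever display
		   environment (desktop, HMD, CAVE, ...) the program runs in: */
		glBegin(GL_TRIANGLES);
		glVertex3f(-1.0f,0.0f,0.0f);
		glVertex3f(1.0f,0.0f,0.0f);
		glVertex3f(0.0f,1.5f,0.0f);
		glEnd();
		}
	};

int main(int argc,char* argv[])
	{
	HelloVrui app(argc,argv);
	app.run();
	return 0;
	}
```

The point is what’s absent: no projection matrices, no stereo setup, no device polling. All of that is Vrui’s job, not the application’s.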


Vrui on (in?) Oculus Rift

I wrote about my first impressions of the Oculus Rift developer kit back in April, and since then I’ve been working (on and off) on getting it fully and natively supported in Vrui (see Figure 1 for proof that it works). Given that Vrui’s somewhat insane flexibility is a major point of pride for me, what was it that I actually had to create to support the Rift? Turns out, not all that much: a driver for the Rift’s built-in inertial tracking unit and a post-processing filter to correct for the Rift’s lens distortion were all it took (more on that later). So why did it take me this long? For one, I was mostly working on other things and only spent a few hours here and there, but more importantly, the Rift is not just a new head-mounted display (HMD), but a major shift in how HMDs are (or will be) used.

Figure 1: The trademark “double-barrel” Oculus Rift screenshot, this time generated by a Vrui application.
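For the curious: the lens distortion filter boils down to resampling the rendered frame with a radial “barrel” warp that pre-compensates the pincushion distortion of the Rift’s lenses. In real life this runs as a fragment shader, and the coefficients come from the headset’s configuration data; the sketch below uses placeholder names and the standard polynomial-in-r² model:

```cpp
struct Vec2 { float x,y; };

/* Radial barrel pre-distortion: scale a point's offset from the lens
   axis by a polynomial in the squared radius. Applied to texture lookup
   coordinates when resampling the rendered frame, this cancels the
   pincushion distortion introduced by the lens: */
Vec2 barrelDistort(const Vec2& p,const float k[4],const Vec2& lensCenter)
	{
	float dx=p.x-lensCenter.x;
	float dy=p.y-lensCenter.y;
	float r2=dx*dx+dy*dy;
	float scale=k[0]+r2*(k[1]+r2*(k[2]+r2*k[3])); // k0+k1*r^2+k2*r^4+k3*r^6
	return Vec2{lensCenter.x+dx*scale,lensCenter.y+dy*scale};
	}
```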


First impressions from the Oculus Rift dev kit

My friend Serban got his Oculus Rift dev kit in the mail today, and he called me over to check it out. I will hold back a thorough evaluation until I get the Rift supported natively in my own VR software, so that I can run a direct head-to-head comparison with my other HMDs, and also with my screen-based holographic display systems (the head-tracked 3D TVs, and of course the CAVE), using the same applications. Specifically, I will use the Quake III Arena viewer to test the level of “presence” provided by the Rift; as I mentioned in my previous post, there are some very specific physiological effects brought out by that old chestnut, and my other HMDs are severely lacking in that department. I hope the Rift will push it close to the level of the CAVE. But here are some early impressions.

Figure 1: What it would look like to unbox an Oculus VR dev kit, if one were to have such a thing.


Behind the scenes: “Virtual Worlds Using Head-mounted Displays”

“Virtual Worlds Using Head-mounted Displays” is the most complex video I’ve made so far, and I figured I should explain how it was done (maybe as a response to people who might say I “cheated”).


The reality of head-mounted displays

So it appears the Oculus Rift is really happening. A buddy of mine went in early on the Kickstarter, and his will supposedly be in the mail sometime this week. In a way the Oculus Rift, or, more precisely, the most recent foray of VR into the mainstream that it embodies, was the reason I started this blog in the first place. I’m very much looking forward to it (more on that below), but I’m also somewhat worried that the huge level of pre-release excitement in the gaming world might turn into a backlash against VR in general. So I made a video laying out my opinions (see Figure 1, or the embedded video below).

Figure 1: Still from a video describing how head-mounted displays should be used to create convincing virtual worlds.


Is VR dead?

No, and it doesn’t even smell funny.

But let’s back up a bit. When it comes to VR, there are three prevalent opinions:

  1. It’s a dead technology. It had its day in the early nineties, and there hasn’t been anything new since. After all, the CAVE was invented in ’91 and is basically still the same, and head-mounted displays have been around even longer.
  2. It hasn’t been born yet. But maybe if we wait 10 more years, and there are some significant breakthroughs in display and computer technology, it might become interesting or feasible.
  3. It’s fringe technology. Some weirdos keep picking at it, but it hasn’t ever led to anything interesting or useful, and never will.


VR’s effects on game design

I’ve written at length (here, here, here, and here) about the challenges of properly supporting immersive displays, be they CAVEs or HMDs such as the upcoming Oculus Rift, and about the additional degrees of freedom introduced by 3D tracking.

I just found this interesting post by James Iliff, which talks about the same general issue, more from a game design than a game implementation point of view.

Of his three points, motion tracking, and the challenges posed by it, is the one most closely related to my own interests. The separation of viewing direction, aiming direction (as related to shooting games), and movement direction is something that falls naturally out of 3D tracking, and that needs to be implemented in VR applications or games at a fundamental level, as the sketch below illustrates. Specifically, aiming using a tracked input device does not, in my opinion, work within the canonical control scheme established by existing desktop or console shooter games (see video below for an example).
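In code terms, the decoupling amounts to refusing to merge three inputs that desktop shooters traditionally collapse into one. A sketch, with names of my own invention rather than from any particular engine:

```cpp
struct Vec3 { float x,y,z; };

/* A desktop shooter derives all three of these from the mouse. With 3D
   tracking they come from three independent sources, and they must stay
   independent all the way through the game logic: */
struct AvatarState
	{
	Vec3 viewDir; // where the player looks: head tracker
	Vec3 aimDir; // where the gun points: tracked hand controller
	Vec3 moveDir; // where the player moves: joystick axes
	};
```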

My main concern with James’ post is the uncritical mention of the Razer Hydra controller. We are using those successfully ourselves (that’s a topic for another post), but it needs to be pointed out that we are using them differently than other tracked controllers, due to their lack of global precision: while the controllers are good at picking up relative motions (relative to their previous position, that is), they are not good at global positioning. What I mean is that the tracking coordinate system of the Hydra is non-linearly distorted, a very common effect with magnetic 3D trackers (see the Polhemus Fastrak or Ascension Flock of Birds for old-school examples). It is possible to correct for this non-linear distortion, but the problem we observed with the Hydra is that the distortion changes over relatively short time frames.

What this means is that the Hydra is best not used as a 1:1 input device, where the position of the device in virtual space exactly corresponds to the position of the device in real space (see the video below for how that works and looks), but as an indirect device. Motions are still tracked more or less 1:1, but the device’s representation is offset from the physical device, and by a significant amount, to prevent confusion.

This has a direct impact on usability: instead of being able to use the physical device itself as an interaction cursor, embodying the “embodiment” principle (pun intended), the user has to work with an explicit virtual representation of the device instead. It still works (very well, in fact), but it is a step down in immersion and effectiveness from globally-tracked input devices, such as the optically tracked Wiimote used in our low-cost VR system design.
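To illustrate what “indirect” means, here is a minimal sketch (not our actual code; the 30 cm offset is an arbitrary illustrative value): the virtual cursor copies the controller’s motions 1:1, but is displaced by a fixed offset in the controller’s local frame, so the user never expects it to coincide with the physical device:

```cpp
struct Vec3 { double x,y,z; };
struct Quat { double w,x,y,z; }; // unit quaternion: device orientation

/* Rotate vector v by unit quaternion q, i.e., compute q*v*q^-1: */
Vec3 rotate(const Quat& q,const Vec3& v)
	{
	/* t=2*(q.xyz cross v); result=v+q.w*t+(q.xyz cross t): */
	Vec3 t{2.0*(q.y*v.z-q.z*v.y),2.0*(q.z*v.x-q.x*v.z),2.0*(q.x*v.y-q.y*v.x)};
	return Vec3{v.x+q.w*t.x+(q.y*t.z-q.z*t.y),
	            v.y+q.w*t.y+(q.z*t.x-q.x*t.z),
	            v.z+q.w*t.z+(q.x*t.y-q.y*t.x)};
	}

/* Fixed displacement in the controller's local frame: 30 cm "in front"
   of the device (-z forward), purely an illustrative choice: */
const Vec3 cursorOffset{0.0,0.0,-0.3};

/* The cursor follows the device's relative motions 1:1 (which the Hydra
   measures well), but makes no claim about the device's absolute
   position (which the Hydra measures poorly): */
Vec3 cursorPosition(const Vec3& devicePos,const Quat& deviceOri)
	{
	Vec3 o=rotate(deviceOri,cursorOffset);
	return Vec3{devicePos.x+o.x,devicePos.y+o.y,devicePos.z+o.z};
	}
```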

And just because it’s topical and I’m a really big fan of Descent (after all, it is the highest form of patriotism!), here’s that old chestnut again:

Note how the CAVE wand is used as a “virtual gun,” and how the virtual gunsights are attached directly to the physical controller itself, not to a virtual representation of it. As far as the user is concerned, the CAVE wand is the gun. (The slight offset between controller and target reticle is primarily due to problems that arise when setting up a CAVE for filming.) This globally-precise tracking comes courtesy of the high-end InterSense IS-900 tracking system used in our CAVE, but we achieve the same thing with a (comparatively) low-cost NaturalPoint OptiTrack optical tracking system. The Hydra is a really good input device if treated properly, but it’s not the same thing.