The reality of head-mounted displays

So it appears the Oculus Rift is really happening. A buddy of mine went in early on the kickstarter, and his will supposedly be in the mail some time this week. In a way the Oculus Rift, or, more precisely, the most recent foray of VR into the mainstream that it embodies, was the reason why I started this blog in the first place. I’m very much looking forward to it (more on that below), but I’m also somewhat worried that the huge level of pre-release excitement in the gaming world might turn into a backlash against VR in general. So I made a video laying out my opinions (see Figure 1, or the embedded video below).

Figure 1: Still from a video describing how head-mounted displays should be used to create convincing virtual worlds.

Before I start worrying, here is why I’m excited: I already have two HMDs, an eMagin Z800 Visor and a Sony HMZ-T1 (in the video above, I’m using the Z800). Using the Vrui VR toolkit, these HMDs (and the Oculus Rift as well once I get my hands on one) are drop-in replacements for each other and for projection-based holographic displays such as CAVEs. This makes it easy to compare the same applications in different displays. And some applications that feel very immersive in a CAVE, such as the Quake III Arena map viewer, feel much less so with my HMDs. Concretely, I am slightly afraid of heights, and when I step up to a ledge in a Quake map, my hands get clammy and my feet start to tingle; when I jump down, I feel my stomach lifting. There’s nothing I can do about it; I’ve done this for eight years, and it happens every time. I recently tried the same thing with an HMD, and nothing happened. Why? Not sure. My HMDs have narrow fields of view (around 45°), and I can see the real room I’m in from the corners of my eyes. With the Oculus Rift and its wide FOV, I’ll be able to investigate in more detail, and I’m optimistic that immersion will be improved, hopefully to the level of the CAVE.

So on to the worrying. In detail, I’m concerned about two things: one, that audiences who have never been exposed to VR or HMDs have unrealistic expectations of what the Oculus Rift can do, and will be disappointed when they see the real thing; two, that software ported from the desktop world will carry over baggage from that world that will lead to a sub-optimal experience. There are also some really basic things HMD-based 3D software could get wrong, but I’m hoping that a good high-level SDK will take care of those.

These are the three core issues (addressed in the video):

  1. The lack of positional head tracking in the Oculus Rift, at least out-of-the-box. Update: I have been corrected; the Oculus team plans to include some form of positional head tracking in the final consumer version that is planned to come out next year. That is very good news indeed; I’m waiting to hear more details.
  2. Controller-based navigation, as carried over from the desktop and its WASD paradigm.
  3. View-linked aiming, also carried over from the desktop.

The first is an expectation problem. When people hear about HMDs, they automatically assume (at least the ones I’ve talked to) that they will support things like peeking around corners, looking at things from different angles, etc. Problem: without positional head tracking, that’s not possible. Now gamers might argue that that’s irrelevant (there are buttons on the controller for leaning etc.), but they’re missing an important fact: we move our heads subconsciously all the time. And if the virtual world presented in the HMD doesn’t react to that like the real world would, the illusion breaks down. As an aside, I actually know a good number of gamers who do “desktop gymnastics” while playing, so to speak. They lift out of their chairs to peek over ledges, lean to look around corners, and all that without the game reacting in any way (I once had a roommate who broke a vase trying to dodge an incoming fireball in Doom). The world not reacting may be OK on a desktop (broken tchotchkes aside), but not in an environment as convincing as what’s presented by an HMD; there it will lead to motion sickness. My advice: start thinking about aftermarket positional head trackers. For example, put LED sockets at well-known positions on the device, so that users can add their own LEDs and use a high-framerate camera like the Playstation Eye for DIY optical head tracking. Update: It appears my advice is too late; they’re already thinking about positional head trackers. More details TBD.
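To make the difference between tracking levels concrete, here is a minimal sketch (not Vrui’s actual camera code; the default head height and half-IPD values are illustrative assumptions) of how the eye position used for rendering changes with and without positional tracking. With orientation-only tracking, leaning sideways changes nothing; the parallax the brain expects never happens.

```python
import math

def rot_yaw(angle):
    """3x3 rotation matrix about the vertical (y) axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def eye_position(head_pos, head_rot, eye_offset, positional_tracking):
    """World-space eye position used to set up the virtual camera.

    With orientation-only tracking the head position is pinned to an
    assumed default, so leaning or stepping sideways has no effect on
    the rendered view -- exactly the missing-parallax problem.
    """
    base = head_pos if positional_tracking else [0.0, 1.7, 0.0]  # assumed default head height
    off = apply(head_rot, eye_offset)
    return [base[i] + off[i] for i in range(3)]

# User leans 20 cm to the left while looking straight ahead:
leaned = [-0.2, 1.7, 0.0]
r = rot_yaw(0.0)
ipd_half = [0.032, 0.0, 0.0]  # half the interpupillary distance, metres

with_pos = eye_position(leaned, r, ipd_half, True)
without_pos = eye_position(leaned, r, ipd_half, False)
print(with_pos)     # the eye follows the lean
print(without_pos)  # the eye stays put: no parallax
```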

The second and third issues are user interface issues, and those need to be addressed by developers porting their games to HMDs. I’m not up-to-date on my demos, but what I’ve seen so far was not great, mostly with 1:1 ported desktop interfaces. The best thing (really the only good thing) I’ve seen so far is Project Holodeck, out of USC. They have position tracking, position- and orientation-tracked 3D input devices, i.e., all the good stuff that’s required to build a head-mounted VR environment that deserves the name. The fact that they’re directly talking to the people behind the Oculus Rift is very reassuring. I don’t know anything about Project Holodeck’s software stack, but I’m hoping that the VR nuts-and-bolts (virtual camera set-up, tracking, navigation, interaction, …) are not individually coded into each application, but instead provided by a shared high-level toolkit (I know I keep repeating myself).

So what’s to do? First, everybody involved needs to be a bit more grounded when describing the capabilities of HMDs in general, and the Oculus Rift in particular, to the world at large. The hype getting any more hype-y is not good for anyone. How to do that properly, for example, how to describe the admittedly subtle differences between levels of head tracking (none, orientation only, orientation+position) to a “lay” audience, I honestly don’t know. We’ll have to just keep trying.

Second, the VR community should reach out more to the game developer community. A lot of research has been done, and there’s no need to re-invent the wheel ten more times, or repeat old mistakes.

To wrap it up, and because this post ran long, I want to repeat my core message: I’m not concerned about the Oculus Rift display hardware at all. I’m very much looking forward to supporting it in the Vrui VR toolkit, and using it for my own research (and entertainment) in the future. I’m only worried about how it’s going to be used by games or other 3D software. What’s been holding VR back is not bad hardware, but bad software, and if the Oculus Rift ends up mostly exposing the world to bad VR software, we’re all in trouble.

35 thoughts on “The reality of head-mounted displays”

  1. I tried suggesting to Sixense some time ago that they offer their sensor tech to the Oculus Rift people; I’m not sure if my suggestion was taken seriously though…

    Btw, do you know if they fixed the gyro-drift issue I remember reading about in the kickstarter comments some months ago?

    • Sixense is *very* aware of the Oculus. So, if cooperation is at all possible, I’m sure they are on to it.

    • I wasn’t aware of any specific problems, but drift is always an issue with inertial tracking. Gyro drift can be corrected by constantly aligning with the direction of gravity (supplied by the accelerometer triplet) and a north direction from a built-in compass. It all depends on the quality of components, so I’ll hold further speculation until I get my hands on a unit.

      Positional drift can’t be corrected without an external absolute tracking component, like the Playstation Eye camera with the Move’s glowing orb. I’m working on something like that.
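For illustration, here is a minimal one-axis sketch of the gravity-based correction described above, a simple complementary filter. The axis convention and the filter coefficient are assumptions for the sketch, not the Rift’s actual sensor fusion:

```python
import math

def complementary_tilt(gyro_rates, accels, dt, alpha=0.98):
    """Estimate pitch by fusing an integrated (drifting) gyro with the
    gravity direction from an accelerometer.

    gyro_rates: pitch rate samples in rad/s (may contain a constant bias)
    accels: (ax, az) samples; the gravity direction gives an absolute pitch
    alpha: trust in the gyro; (1 - alpha) slowly pulls toward gravity
    """
    pitch = 0.0
    for rate, (ax, az) in zip(gyro_rates, accels):
        accel_pitch = math.atan2(ax, az)  # absolute but noisy reference
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * accel_pitch
    return pitch

# Head held level for 20 s, but the gyro reports a constant 0.05 rad/s bias.
n, dt = 2000, 0.01
biased_gyro = [0.05] * n
level_accel = [(0.0, 1.0)] * n  # gravity straight down -> true pitch is 0

est = complementary_tilt(biased_gyro, level_accel, dt)
# Pure integration would have drifted to 0.05 * 20 s = 1 rad;
# the accelerometer term holds the estimate near a small bias floor.
print(abs(est))
```

The same trick does not work for position, which is why an external absolute reference like a camera is needed there.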

    • They do have a magnetometer in the tracker, but as of the latest SDK release it is not used as part of their sensor fusion (i.e. yaw drift still occurs).

  2. Seems like they need to make a new “Power Glove” to go along with the HMD.
    The gloves should have full position and orientation tracking, as well as finger tracking. The controllers you are holding seem to work well.

    • I’m convinced it’s the basic approach of controllers not as “remote controls” but as proxies for the user’s hands in virtual worlds that makes VR software work. The exact hardware used for those proxies is somewhat secondary. The wand I have works very well, and I’m using a Nintendo Wiimote in my low-cost setups that is great too. We used to have data gloves with contact pads on the fingers for an even better embedding. Those were fantastic, but they were tethered and not robust enough.

      If someone were to make good wireless pinch data gloves, we’d be all over that.

  3. I still think the best option would be a Kinect + Oculus Rift combo. Not sure about the latency requirements though… would the Kinect be fast enough to do head/body tracking for VR use?

    • I don’t know. I have never used Kinects for skeletal tracking (I only use them for 3D video, as shown in the video). I hear bad things about their tracking latency, but that’s only second hand.

      I think that LED and camera-based tracking would be ideal for head tracking, and that devices like the Playstation Move or the Razer Hydra would be good for hand tracking.

  4. Your video is technically amazing, but I don’t understand the reasons for your message to Oculus. Basically everything you have mentioned has already been talked to death by Oculus. They literally say “do not buy the dev kit” to hyped gamers, because they know it’s far from being a consumer-quality product, and they encourage waiting for a finished device.
    They plan to release the Rift in Q3 2014, and a new upgraded dev kit version before that, so just because the first early dev kit lacks some features doesn’t mean it will be the same in the actual device, and we have actually known that for almost a year.
    Positional tracking will be in the Rift for sure. It was confirmed by Oculus many times, months ago. They described it as a must-have. They are testing many different hardware solutions, not only for positional tracking but also for new input devices (gloves etc.).
    It won’t be easy, because it has to be cheap, and they can’t just resell Sony’s PS Eye cameras. They want mass-market appeal and to sell hundreds of thousands of units, so walking around a room is probably not their main target.

    People from Project Holodeck aren’t just talking to Oculus. Palmer Luckey, the founder of Oculus who made the Rift prototype, is the lead hardware engineer on this project.
    The mentor of this project, Laird Malamed (former Activision SVP), is currently working at Oculus as chief operating officer.
    Check out these photos from Oculus HQ:
    They have OptiTrack cameras there.

    • Fair enough. My message is not aimed at the Oculus people; I believe they know what they’re doing. But I’ve spoken to a good number of people from the general populace, so to speak, and also to game developers who know a lot about 3D in the desktop realm, but are new to VR. The former generally have expectations that are either way too high (it’s a Star Trek holodeck!) or way too low (it’s just a big monitor). I’m trying to explain, in as much detail as I can, what it can actually do right now, and what it will be able to do once some obvious modifications are made (primarily positional head tracking).

      My message to game developers is different. They need to appreciate that VR software by necessity works quite differently from desktop-based 3D software, and that it’s not that easy to just port a game from the latter realm to the former. Proper stereo rendering is only the most obvious, but not necessarily the most important difference. The danger I’m seeing is that developers underestimate the importance of avatar embodiment and multiple control schemes, by essentially applying desktop interaction methods. I think that’s what turned the Kinect into a disappointment for gaming.

      The additional freedom offered by positional head tracking means that avatars need to be handled completely differently. Unlike on the desktop, the software no longer completely controls the avatar’s position and orientation. What happens when the player “walks up” directly to a wall using some navigation method, and then simply leans forward? Those are the kinds of issues that need to be addressed. Multiple control schemes are a second danger, like I mention in this older post. There is no one size fits all, and games and other software need to provide many different interaction methods, and make them highly configurable.

      You’re right about Project Holodeck, though. I should have stated more clearly what their exact link to Oculus Rift is. Thanks for pointing that out.
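To sketch one possible answer to the lean-into-a-wall question above (an assumption on my part, not a prescription): since the software cannot override the physically tracked head, it can adjust the navigation transform instead, moving the virtual world rather than the avatar. A minimal one-dimensional version:

```python
def clamp_navigation(nav_x, head_x, wall_x, radius=0.2):
    """Push the navigation transform back when the tracked head would
    penetrate a wall, instead of ignoring or overriding the head pose.

    nav_x: x translation of the navigation transform (world units)
    head_x: physically tracked head x position (cannot be overridden)
    wall_x: wall plane; the head must stay at least `radius` before it
    """
    world_head = nav_x + head_x
    limit = wall_x - radius
    if world_head > limit:
        nav_x -= world_head - limit  # move the world, not the head
    return nav_x

# Player navigates right up to a wall at x = 5, then leans 0.3 forward:
nav = clamp_navigation(nav_x=4.8, head_x=0.3, wall_x=5.0)
print(nav)  # navigation is pushed back so the head stops at the wall
```

The design point is that the correction is applied to the navigation transform, which the software does own, so head tracking always remains 1:1.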

  5. Just wanted to give you a heads-up on the video here – the audio goes on past the end of the video at the 9:36 mark. You might want to check it and re-upload it, since we can hear you cursing in the background about an apparent bug. 😀

    Great video though… I hadn’t really considered how important POSITIONAL head tracking would be, even though in retrospect it’s pretty obvious.

  6. Just thought you should know this statement you made is incorrect:

    “These are the three core issues (addressed in the video):

    The lack of positional head tracking in the Oculus Rift, at least out-of-the-box.”

    Out-of-the-box, the Oculus team has stated repeatedly, there will be positional head tracking. It’s always possible they will fail at this, but I would at least give them the benefit of the doubt this early into production.

    It is true that the current development kit does not, but that is not meant for end-users, so it’s a fairly misleading statement.

  7. I totally agree.

    Well, I am not a pro VR user/developer, but I think that apart from positional head tracking, many issues here can be solved with game mechanics. For example, rather than a game character running around with full body movement, you could pilot a mecha from a cockpit where you sit (with controllers), play a paralysed character like Charles Xavier (as an example), or a ghost which can fly through walls. It is limiting, but rather than letting the limits set us back, finding ways around them may be a nice challenge.

    Sorry for the English. I am not a native speaker.

  8. Great article. Your concerns are all very valid; I assure you that we are working as hard as we can to fix all of them. We won’t make a consumer launch until there is VR hardware and VR software working perfectly together.

    • I know you are on top of things. 🙂 Now we’ll have to make sure to get all the game developers, big and small, on board. By the way, my friend’s dev kit is already in Sacramento, so it will be unboxed tomorrow. Very much looking forward to it.

  9. Pingback: The Awesomeness Of Head-Mounted Displays | Rock, Paper, Shotgun

  10. This article is well thought out, and the video is thoroughly enjoyable to watch.
    I wanted to describe what I am doing (albeit with temporary fixes) to address exactly the issues you state.

    1. Positional tracking – At least in Unity, there is a package from BitGym which provides a rough but fun-to-prototype-with way to track the position of your head in 3D space, using the relative size and position of your head/body in video camera input.
    I believe this is enough to provide a stopgap until better sensor-based solutions are available.

    Previously I tinkered with controlling the character view using the PC camera. This should give you functionality the Oculus Rift can’t provide yet, and I will add this later. The PC camera can tell the relative position of your head in 3D space, which lets you track your head’s position in addition to its rotation.

    For an old example of this and if you have a working connected camera, see my old PC demo at

    2./3. Controller-based navigation/view-linked aiming – This really comes down to allowing choices, as they have done in Team Fortress 2’s Oculus Rift support, with future support for the Razer Hydra and devices like the Leap Motion. Reference this article for their experience
    On a side note, I don’t think the Leap Motion is useful for hand tracking unless you use the Oculus Rift (otherwise it is difficult to get a sense of where your hands actually are).
    Also, there are limitations to the area in which it tracks your hands, and actual skeletal tracking of them is not there yet.
    Previously I tested a Leap Motion control scheme for controlling in-game character hand placement (inverse kinematics), playing drums with your fingers, and the ability to touch (virtually) a 3D menu system.
    Vid here

    I have not received my Oculus Rift, but at position 447 in the queue it should be arriving soon! That hasn’t stopped me from adding support without testing. I already have a demo here
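For what it’s worth, here is a minimal pinhole-camera sketch of the depth-from-head-size idea behind such camera trackers. This is not BitGym’s actual code; the head width and focal length are assumed calibration values:

```python
def head_depth(pixel_width, real_width=0.16, focal_px=700.0):
    """Estimate camera-to-head distance from apparent head size.

    Pinhole model: pixel_width = focal_px * real_width / depth,
    so depth = focal_px * real_width / pixel_width.
    real_width (metres) and focal_px are assumed calibration values.
    """
    return focal_px * real_width / pixel_width

def head_position(cx_px, cy_px, pixel_width, img_w=640, img_h=480,
                  real_width=0.16, focal_px=700.0):
    """Back-project the detected head centre into 3D camera space."""
    z = head_depth(pixel_width, real_width, focal_px)
    x = (cx_px - img_w / 2) * z / focal_px
    y = (cy_px - img_h / 2) * z / focal_px
    return (x, y, z)

# A 160-pixel-wide head at the image centre sits 0.7 m from the camera:
print(head_position(320, 240, 160))
```

The weak point, of course, is that depth accuracy degrades with the square of the distance, which is why it is only a stopgap until proper sensors arrive.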

  11. Pingback: Pew Pew | News: VR & Head-mounted Displays

  12. Pingback: First impressions from the Oculus Rift dev kit |

  13. An alternative for user-world interaction I have been interested in is Microsoft Research’s Digits project:

    Of course, this still lacks haptic feedback. I think the better approach, at least for the near term, is to add a level of abstraction to keep the interface out of the uncanny valley, i.e. joysticks and physical controllers. However, I think immersive displays definitely open up the field to a lot of new types of inputs and experiences that aren’t held back by the issues you mentioned (like you said, it’s a problem of bad software).

    One approach I’ve been experimenting with is EEG controllers like the Neurosky – currently the feedback is too coarse for direct game actions, so I’ve been experimenting more with flow and global reactivity rather than specific reactions. I plan on trying to integrate the Oculus Rift with surround sound headphones, the Razer Hydra, an EEG device, a Buttkicker, possibly an FPS Vest and the Thalmic Labs Myo (if it ever comes out), and maybe even a rotating cockpit seat down the road.

    A fixed viewpoint and a massive array of screens are nice for fixed-perspective games like car and flight simulators, but I still think the Rift could be the start of something special.

  14. Pingback: One Developers Take on the Oculus Rift Dev Kit | HUD Space

  15. Pingback: Oculus Rift needs positional tracking says VR expert | 3D printing, 3D TV, virtual reality, AR and Ultra HD news

  16. Pingback: The reality of head-mounted displays | Doc-Ok.o...

  17. Pingback: Augmented Reality | Pearltrees

  18. If you had a billion-dollar research and development budget, how real could you make VR today and in the near future (say, less than 10 years from now)? Could you describe how likely it would be to fool a technologically naive person into thinking the experience was ‘real’? By naive I mean having no experience with modern tech.

    Also, how much would you charge per hour or session for consulting? I am a recent college graduate trying to write a book in a genre that I invented. At minimum, no one has read a book like mine, because it’s kinda crazy (but fun!). Do you know anyone who is willing to offer expert advice at around 100 dollars per hour or session?

    • Hi Matthew, it might come as a surprise, but the “realness” of virtual reality is not much of a concern for me. For me, the goal of VR is not to create the Matrix, but to create highly effective human-computer interfaces for applications dealing with three-dimensional data. For that, a totally convincing virtual environment is not only not necessary, but might even be detrimental. This puts me somewhat at odds with the majority of the VR research community. As a concrete example, in a totally convincing virtual environment, you would not be able to scale the environment; it would always be at 1:1. As a result, much software coming out of VR research ignores scaling, whereas we have found that scaling is a very efficient means of navigating through large and/or complex 3D data. Instead of working at 1:1, we primarily work such that the current “area of interest” is always human-sized, i.e., directly manipulable with the user’s hands, no matter the actual size of that area. Another component is that to be effective for data visualization and analysis, users must always be aware that they are working with data.

      I personally do not believe that Matrix-like reality can be achieved any time soon, and I don’t really care. 🙂 VR has been convincing enough for our purposes for at least a decade, and any further improvements are at most icing on the cake (again, for our purpose). “Convincing enough” means that the VR environment supports tasks that other human-computer interfaces don’t; specifically, the application of real-world skills to virtual environments. If the visual display is “real” enough to allow a trained geologist to make observations just like in the real field, then it doesn’t matter that the environment doesn’t reproduce the feel of the wind or the burning sun.

      It turns out that the human brain fills in a lot of context information that’s not actually provided by the environment, which helps. Concretely, there is a very strong feeling of falling when virtually jumping down a ledge in a 3D world, and highly interactive applications like the Nanotech Construction Kit provide a feeling of haptic feedback that’s not actually there. Meaning, when you grab and drag atoms, and other atoms pull on them, you can sort of feel that force, even though it’s definitely not there.

      I don’t do consulting for reasons I cannot get into, and unfortunately I can’t really point you to anybody else who would be able to help in what I think you want to achieve.

  19. Pingback: The reality of head-mounted displays |

  20. Here is a video response in which I demonstrate using the PC camera for positional head tracking, the Oculus Rift for 3D immersion, and the Leap Motion for hand tracking.
    “Oculus Rift Leap Motion PC Camera Head Tracking Mirrored Reality Prototyping Game Demo”

  21. Pingback: Virtual Worlds Using Head-mounted Displays | 3D/VR/AR

  22. There are quite a few issues with this. Motion tracking, even just head tracking, is difficult because people are in a virtual world. If the world responds to their motion, they will inevitably start to dodge, duck, run or jump. If you allow them to move at all in the physical world, you place them at risk; people fall over in the real world. It is far safer to keep them in a chair.

    The next problem is that the virtual world has obstacles that don’t exist in the physical world; the simulation of the physical world in most games is pretty basic. The use of a controller allows for some disassociation between physical motion and the effects of the perceived environment. It provides an understandable approximation, like a mouse cursor approximates a person’s hand movements. The cursor reaches the edge of the screen and stops even though your hand does not. We understand and accept the analogy because it is consistent.

    The third problem is that it wouldn’t necessarily be fun. People move around game worlds in unrealistic ways: they move too fast, too smoothly, jump and fall too far, and they often steer by sliding along walls. All this allows them to perform like super athletes while moving them quickly to the next interesting event.

    Something occurred to me while watching this. The hand appears as a very low-resolution, distorted pixel mass. It does not correspond visually to the world around it. However, the gun does fit very well. If you know how the user’s neck is turned, it should be possible to estimate the position of their shoulder. With the controller location you know where the hand is, and with IK you can draw a reasonable arm into the view. The virtual arm would fit with the rest of the world much better.

    If the Oculus Rift does catch on, I suspect the next step will be tracked controllers rather than head motion. Pointing a gun is what people do in a game.
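The suggested IK arm can be sketched with the law of cosines for a two-bone chain: given the estimated shoulder position and the tracked hand, the shoulder-to-hand distance fully determines the elbow bend. The bone lengths here are illustrative:

```python
import math

def _clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def elbow_angle(upper, fore, dist):
    """Interior elbow angle for a two-bone arm whose shoulder-to-hand
    distance is dist, via the law of cosines."""
    dist = min(dist, upper + fore - 1e-6)  # clamp: the arm can't overextend
    cos_e = (upper**2 + fore**2 - dist**2) / (2 * upper * fore)
    return math.acos(_clamp(cos_e))

def shoulder_elevation(upper, fore, dist):
    """Angle between the shoulder->hand line and the upper arm,
    used to aim the upper arm before bending the elbow."""
    dist = min(dist, upper + fore - 1e-6)
    cos_s = (upper**2 + dist**2 - fore**2) / (2 * upper * dist)
    return math.acos(_clamp(cos_s))

# Shoulder estimated from the tracked head/neck pose, hand from the
# tracked controller; 30 cm upper arm, 25 cm forearm, hand 45 cm away:
e = elbow_angle(0.30, 0.25, 0.45)
s = shoulder_elevation(0.30, 0.25, 0.45)
print(round(math.degrees(e), 1), round(math.degrees(s), 1))
```

The remaining degree of freedom (where the elbow swings around the shoulder-hand axis) is ambiguous and would have to be guessed from a plausible rest pose.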

  23. Pingback: Game Engines and Positional Head Tracking |

  24. Pingback: VR developer explores ‘the reality of head-mounted displays’ with Doom 3 tech demo – GamesSentry
