A Blast From The Past

Back in the olden days, in the summer of 1996 to be precise, I was a computer science Master’s student at the University of Karlsruhe, Germany, about to take the oral exam in my specialization areas: 3D computer graphics, 3D user interfaces, and geometric modeling. For reasons that are no longer entirely clear to me, I decided then that it would be a good idea to prepare for that exam by developing a 3D rendering engine, a 3D game engine, and a game, all from scratch. What resulted from that effort — which didn’t help my performance in that exam at all, by the way — was “Starglider Pro:”

In the mid to late 80s, one of my favorite games on my beloved Atari ST was the original Starglider, developed by Jez San for Rainbird Software. I finally replaced that ST with a series of PCs in 1993, first running DOS, and later OS/2 Warp, and therefore needed something to scratch that Starglider itch. Continue reading

The Display Resolution of Head-mounted Displays

What is the real, physical, display resolution of my VR headset?

I have written a long article about the optical properties of (then-)current head-mounted displays, one about projection and distortion in wide-FoV HMDs, and another one about measuring the effective resolution of head-mounted displays, but in neither one of those have I looked into the actual display resolution, in terms of hard pixels, of those headsets. So it’s about time.

The short answer is, of course, that it depends on your model of headset. But if you happen to have an HTC Vive, then have a look at the graphs in Figures 1 and 2 (the other headsets behave in the same way, but the actual numbers differ). Those figures show display resolution, in pixels/°, along two lines (horizontal and vertical, respectively) going through the center of the right lens of my own Vive. The red, green, and blue curves show resolution for the red, green, and blue primary colors, respectively, determined this time not by my own measurements, but by analyzing the display calibration data that is measured for each individual headset at the factory and then stored in its firmware.

Figure 1: Resolution in pixels/° along a horizontal line through my Vive’s right lens center, for each of its 1080 horizontal pixels, for the three primary colors (red, green, and blue).

Figure 2: Resolution in pixels/° along a vertical line through my Vive’s right lens center, for each of its 1200 vertical pixels, for the three primary colors (red, green, and blue).

At this point you might be wondering why those graphs look so strange, but for that you’ll have to read the long answer. Before going into that, I want to throw out a single number: at the exact center of my Vive’s right lens (at pixel 492, 602), the resolution for the green color channel is 11.42 pixels/°, in both the horizontal and vertical directions. If you wanted to quote a single resolution number for a headset, that’s the one I would go with, because it’s what you get when you look at something directly ahead and far away. However, as Figures 1 and 2 clearly show, no single number can tell the whole story.
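As an aside, the basic calculation behind these graphs is straightforward. Here is a minimal sketch in Python, with a made-up pixel-to-angle mapping standing in for the real calibration data: if the calibration yields, for each pixel along a line through the lens center, the view angle at which that pixel appears to the eye, then local resolution in pixels/° is simply the reciprocal of the angular step from one pixel to the next.

```python
import numpy as np

def pixels_per_degree(view_angles_deg):
    """Estimate local resolution in pixels/degree along one display line.

    view_angles_deg: 1D array; entry i is the view angle (in degrees) at
    which pixel i appears to the eye, as recovered from the headset's
    per-channel distortion calibration (hypothetical input here).
    """
    deg_per_pixel = np.gradient(view_angles_deg)  # angular width of each pixel
    return 1.0 / deg_per_pixel

# Toy example: a lens that maps pixels to angles via an arctan-shaped curve,
# just to show the shape of the computation -- not real Vive calibration data.
pixels = np.arange(1080)
angles = np.degrees(np.arctan((pixels - 492) * 0.0016))
print(pixels_per_degree(angles)[492])  # resolution at the (assumed) lens center
```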

And now for the long answer. Buckle in, Trigonometry and Calculus ahead. Continue reading

How Does VR Create the Illusion of Reality?

I’ve recently written a loose series of articles trying to explain certain technical aspects of virtual reality, such as what the lenses in VR headsets do, or why there is some blurriness, but I haven’t — or at least haven’t in a few years — tackled the big question:

How do all the technical components of VR headsets, e.g., screens, lenses, tracking, etc., actually come together to create realistic-looking virtual environments? Specifically, why do virtual environments in VR look more “real” than when viewed via other media, for example panoramic video?

The reason I’m bringing this up again is that the question keeps getting asked, and that it’s really kinda hard to answer. Most attempts to answer it fall back on technical aspects, such as stereoscopy, head tracking, etc., but I find that this approach somewhat misses the point by focusing on individual components, or at least gets mired in technical details that don’t make much sense to those who have to ask the question in the first place.

I prefer to approach the question from the opposite end: not through what VR hardware produces, but instead through how the viewer perceives 3D objects and/or environments, and how either the real world on the one hand, or virtual reality displays on the other, create the appropriate visual input to support that perception.

The downside of that approach is that it doesn’t lend itself to short answers. In fact, last summer I gave a 25-minute talk about this exact topic at the 2016 VRLA Summer Expo. It may not be news, but I haven’t linked this video from here before, and it’s probably still timely:

Continue reading

Projection and Distortion in Wide-FoV HMDs

There is an on-going, but already highly successful, Kickstarter campaign for a new VR head-mounted display with a wide (200°) field of view (FoV): Pimax 8k. As I have not personally tried this headset — only its little brother, Pimax 4k, at the 2017 SVVR Expo — I cannot discuss and evaluate all the campaign’s promises. Instead, I want to focus on one particular issue that’s causing a bit of confusion and controversy at the moment.

Early reviewers of Pimax 8k prototypes noticed geometric distortion, such as virtual objects not appearing in the correct places and shifting under head movement, and the campaign responded by claiming that these distortions “could be fixed by improved software or algorithms” (paraphrased). The ensuing speculation about the causes of, and potential fixes for, this distortion has mostly been based on wrong assumptions and misunderstandings of how geometric projection for wide-FoV VR headsets is supposed to work. Adding fuel to the fire, the campaign released a frame showing “what is actually rendered to the screen” (see Figure 1), causing further confusion. The problem is that the frame looks obviously distorted, but that this obvious distortion is not what the reviewers were complaining about. On the contrary, this is what a frame rendered to a high-FoV VR headset should look like. At least, if one ignores lenses and lens distortion, which is what I will continue to do for now.

Figure 1: Frame as rendered to one of the Pimax 8k’s screens, according to the Kickstarter campaign. (Probably not 100% true, as this appears to be a frame submitted to SteamVR’s compositor, which subsequently applies lens distortion correction.)
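To see why a correctly rendered wide-FoV frame has to look stretched towards its edges, consider this small back-of-the-envelope sketch (in Python, and not based on Pimax’s actual screen or lens geometry): under a standard planar perspective projection, a feature of fixed angular size seen at angle θ off the view direction is drawn on screen with a radial size proportional to d(tan θ)/dθ = 1/cos²θ, so it gets larger the further it sits from the image center.

```python
import math

def stretch_factor(theta_deg):
    """On-screen (radial) magnification of a small feature seen at angle
    theta from the view axis, relative to the same feature at the image
    center, for a flat (planar) perspective projection: 1 / cos^2(theta)."""
    t = math.radians(theta_deg)
    return 1.0 / math.cos(t) ** 2

for theta in (0, 30, 45, 60, 80):
    print(f"{theta:2d} deg off-center: drawn {stretch_factor(theta):5.1f}x larger")
# 0 deg: 1.0x, 45 deg: 2.0x, 60 deg: 4.0x, 80 deg: ~33x. A single planar
# projection cannot even reach 180 deg, which is why frames rendered for
# very wide-FoV headsets look so unusual when viewed flat on a monitor.
```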

Continue reading

Measuring the Effective Resolution of Head-mounted Displays

Why does everything in my VR headset look so pixelated? It’s supposed to be using a 2160×1200 screen, but my 1080p desktop monitor looks so much sharper!

This is yet another fundamental question about VR that pops up over and over again, and like the others I have addressed previously, it leads to interesting deeper observations. So, why do current-generation head-mounted displays appear so low-resolution?

Here’s the short answer: In VR headsets, the screen is blown up to cover a much larger area of the user’s field of vision than in desktop settings. What counts is not the total number of pixels, and especially not the display’s resolution in pixels per inch, but the resolution of the projected virtual image in pixels per degree, as measured from the viewer’s eyes. A 20″ desktop screen, when viewed from a typical distance of 30″, covers 37° of the viewer’s field of vision, diagonally. The screen (or screens) inside a modern VR headset cover a much larger area. For example, I measured the per-eye field of view of the HTC Vive as around 110°x113° under ideal conditions, or around 130° diagonally (it’s complicated), or three and a half times as much as that of the 20″ desktop monitor. Because a smaller number of pixels (1080×1200 per eye) is spread out over a much larger area, each pixel appears much bigger to the viewer.

Now for the long answer.

Continue reading

3D Camera Calibration for Mixed-Reality Recording

Mixed-reality recording, i.e., capturing a user inside of and interacting with a virtual 3D environment by embedding their real body into that virtual environment, has finally become the accepted method of demonstrating virtual reality applications through standard 2D video footage (see Figure 1 for a mixed-reality recording made in VR’s stone age). The fundamental method behind this recording technique is to create a virtual camera whose intrinsic parameters (focal length, lens distortion, …) and extrinsic parameters (position and orientation in space) exactly match those of the real camera used to film the user; to capture a virtual video stream from that virtual camera; and then to composite the virtual and real streams into a final video.

Figure 1: Ancient mixed-reality recording from inside a CAVE, captured directly on a standard video camera without any post-processing.
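As a rough illustration of what “matching the real camera” means in practice, here is a minimal sketch in Python (with hypothetical parameter values; a real pipeline would use the rendering engine’s own camera classes and also model lens distortion): the virtual camera’s intrinsics are copied from a calibration of the real camera, and its view matrix is rebuilt from the real camera’s measured or tracked pose.

```python
import numpy as np

def intrinsic_matrix(focal_px, cx, cy):
    """Pinhole intrinsics: focal length and principal point, in pixels,
    as obtained from a calibration of the real camera (distortion omitted)."""
    return np.array([[focal_px, 0.0, cx],
                     [0.0, focal_px, cy],
                     [0.0, 0.0, 1.0]])

def view_matrix(rotation, position):
    """World-to-camera transform from the camera's orientation (3x3 rotation)
    and position (3-vector), expressed in tracking space."""
    view = np.eye(4)
    view[:3, :3] = rotation.T
    view[:3, 3] = -rotation.T @ position
    return view

# The virtual camera renders the virtual scene with exactly these matrices,
# so a virtual object lands on the same image pixels as the real object it
# is supposed to line up with. Values below are purely hypothetical:
K = intrinsic_matrix(focal_px=1100.0, cx=960.0, cy=540.0)
V = view_matrix(np.eye(3), np.array([0.0, 1.5, 2.0]))
```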

Continue reading

AltspaceVR Shutting Down

AltspaceVR, the popular virtual reality social platform, and the eponymous company behind it, will be closing their respective doors on August 3rd. This is surprising, as AltspaceVR has been around since 2013, was well-funded, had a good number of users given VR’s still-niche status, and apparently had more funding lined up to continue operation and development of their platform (that funding falling through was, according to the announcement linked above, the primary reason for the impending shutdown).

But besides the direct impact on commercial VR as a whole, and the bad omen of a major player closing down, this is also personal to me. Not as a user of AltspaceVR’s service — I have to admit I’ve only tried it for minutes at a time at trade shows or conferences — but as someone who was, albeit tangentially, involved with the company and the people working there.

After having given a presentation at an early SVVR meet-up, I invited SVVR’s founder, Karl Krantz, to visit me at my VR lab at UC Davis. He made the trip a short while later, and brought a few friends, including “Cymatic” Bruce Wooden, Eric Romo, and Gavan Wilhite. I showed them our array of VR hardware, the general VR work we were doing, and specifically our work in VR tele-presence and remote collaboration. According to the people involved, AltspaceVR was founded during the drive back to the Bay Area.

In addition, I co-advised one of AltspaceVR’s developers when he was a PhD student at UC Davis, and I visited them in the summer of 2015 to give a talk about input device and interaction abstraction in multi-platform VR development. During that visit, Eric Romo also gave me my first taste of the newly-released HTC Vive Development Kit (Vive DK1).

For all that, I am sad to see them go under, and I wish everybody who is currently working there all the best for their future endeavors.

Possibly related to this, another piece of news surfaced today: AltspaceVR was named defendant in a patent infringement lawsuit filed by Virtual Immersion Technologies, LLC, regarding this 2002 patent. I do not know whether this filing was a cause in AltspaceVR’s closing, but it is possible that the prospect of a costly court case, or stiff licensing fees, led to some investors getting cold feet.

Either way, this patent deserves closer scrutiny as it is quite broad, and has recently changed ownership from the original inventors to the plaintiff, who has so far been using it exclusively to sue VR companies for infringement. The fact that it specifically claims the use of video to represent performers or users in a shared virtual space might mean that it covers platforms such as our tele-collaboration framework, which would be unfortunate. I have a hunch that this patent, due to its arguably broad applicability, will be the subject of a major legal battle in the near future, and while there is a lot of prior art in multiplayer/multi-user VR, that video component means I cannot dismiss the patent out of hand.

Accommodation and Vergence in Head-mounted Displays

Why do virtual objects close to my face appear blurry when wearing a VR headset? My vision is fine!

And why does the real world look strange immediately after a long VR session?

These are another two (related ones) of those frequently asked questions about VR and head-mounted displays (HMDs) that I promised to address a while back.

Here’s the short answer: In all currently-available HMDs, the screens creating the virtual imagery are at a fixed optical distance from the user. But our eyes have evolved to automatically adjust their optical focus based on the perceived distance to objects, virtual or real, that they are looking at. So when a virtual object appears to be mere inches in front of the user’s face, but the screens showing images of that object are — optically — several meters away, the user’s eyes will focus on the wrong distance, and as a result, the virtual object will appear blurry (the same happens, albeit less pronounced, when a virtual object appears to be very far away). This effect is called accommodation-vergence conflict, and besides being a nuisance, it can also cause eye strain or headaches during prolonged VR sessions, and can cause vision problems for a short while after such sessions.
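To put rough numbers on that mismatch, here is a small sketch in Python, assuming (for illustration only) an inter-pupillary distance of 63 mm and a fixed optical screen distance of 2 m, which is in the ballpark of current headsets but not an exact specification:

```python
import math

IPD_M = 0.063        # assumed inter-pupillary distance in meters
SCREEN_DIST_M = 2.0  # assumed fixed optical distance of the virtual screens

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' view directions when both fixate a
    point straight ahead at the given distance."""
    return math.degrees(2.0 * math.atan((IPD_M / 2.0) / distance_m))

def accommodation_diopters(distance_m):
    """Optical focus demand, in diopters, for an object at that distance."""
    return 1.0 / distance_m

for d in (0.25, 0.5, 2.0, 10.0):
    conflict = accommodation_diopters(d) - accommodation_diopters(SCREEN_DIST_M)
    print(f"object at {d:5.2f} m: eyes converge by {vergence_angle_deg(d):4.1f} deg, "
          f"but focus stays at {SCREEN_DIST_M} m -> {conflict:+.2f} D of conflict")
# An object appearing 0.25 m away demands ~3.5 D more focus than the screens
# provide; a very distant object demands ~0.4 D less, a much milder conflict.
```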

Now for the long answer.

Continue reading

VR medical visualization with 3D Visualizer

Now that Vrui is working on the HTC Vive (at least until the next SteamVR update breaks ABI again), I can finally go back and give Vrui-based applications some tender loving care. First up is 3D Visualizer, an application to visualize and, more importantly, visually analyze three-dimensional volumetric data sets (see Figure 1).

Figure 1: Analyzing a CAT scan with 3D Visualizer on the HTC Vive. Cat included.

Continue reading

A HoloArticle

Here is an update on my quest to stay on top of all things “holo:” HoloLamp and RealView “Live Holography.” While the two have really nothing to do with each other, both claim the “holo” label with varying degrees of legitimacy, and happened to pop up recently.


At its core, HoloLamp is a projection mapping system somewhat similar to the AR Sandbox, i.e., a combination of a set of cameras scanning a projection surface and a viewer’s face, and a projector drawing a perspective-correct image, from the viewer’s point of view, onto said projection surface. The point of HoloLamp is to project images of virtual 3D objects onto arbitrary surfaces, to achieve effects like the Millennium Falcon’s holographic chess board in Star Wars: A New Hope. Let’s see how it works, and how it falls short of this goal.

Creating convincing virtual three-dimensional objects via projection is a core technology of virtual reality, specifically the technology that is driving CAVEs and other screen-based VR displays. To create this illusion, a display system needs to know two things: the exact position of the projection surface in 3D space, and the position of the viewer’s eyes in the same 3D space. Together, these two provide just the information needed to set up the correct perspective projection. In CAVEs et al., the position of the screen(s) is fixed and precisely measured during installation, and the viewer’s eye positions are provided via real-time head tracking.
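For the curious, here is a compact sketch of how those two pieces of information turn into a projection, following the widely used generalized (off-axis) perspective projection construction; the variable names, the OpenGL-style frustum output, and the example numbers below are illustrative, not HoloLamp’s or any particular CAVE’s actual code:

```python
import numpy as np

def off_axis_frustum(eye, ll, lr, ul, near=0.1):
    """Frustum extents for a planar screen, given three of its corners
    (lower-left, lower-right, upper-left) and the viewer's eye position,
    all expressed in the same 3D coordinate system."""
    right = lr - ll
    right /= np.linalg.norm(right)          # screen's x axis
    up = ul - ll
    up /= np.linalg.norm(up)                # screen's y axis
    normal = np.cross(right, up)            # points toward the viewer here

    to_ll, to_lr, to_ul = ll - eye, lr - eye, ul - eye
    dist = -np.dot(to_ll, normal)           # eye-to-screen-plane distance
    scale = near / dist
    left = np.dot(right, to_ll) * scale
    right_ = np.dot(right, to_lr) * scale
    bottom = np.dot(up, to_ll) * scale
    top = np.dot(up, to_ul) * scale
    # Feed these into glFrustum or an equivalent, together with a view
    # transform that moves the eye to the origin and aligns it with the screen.
    return left, right_, bottom, top, near

# Hypothetical 4 m x 3 m screen in the z=0 plane, with the viewer standing
# 2 m in front of its left edge -- the resulting frustum is strongly asymmetric:
eye = np.array([0.0, 1.5, 2.0])
print(off_axis_frustum(eye, ll=np.array([0.0, 0.0, 0.0]),
                            lr=np.array([4.0, 0.0, 0.0]),
                            ul=np.array([0.0, 3.0, 0.0])))
```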

As one goal of HoloLamp is portability, it cannot rely on pre-installation and manual calibration. Instead, HoloLamp scans and creates a 3D model of the projection surface when turned on (or asked to do so, I guess). It does this by projecting a sequence of patterns, and observing the perspective distortion of those patterns with a camera looking in the projection direction. This is a solid and well-known technology called structured-light 3D scanning, and can be seen in action at the beginning of this HoloLamp video clip. To extract eye positions, HoloLamp uses an additional set of cameras looking upwards to identify and track the viewer’s face, probably using off-the-shelf face tracking algorithms such as the Viola-Jones filter. Based on that, the software can project 3D objects using one or more projection matrices, depending on whether the projection surface is planar or not. The result looks very convincing when shot through a regular video camera:
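I do not know exactly which patterns HoloLamp projects, but the classic variant of structured-light scanning uses binary Gray-code stripes. As an illustration, here is a sketch in Python of the decoding step that turns a stack of captured pattern images into per-camera-pixel projector columns, which can then be triangulated into a 3D model of the projection surface:

```python
import numpy as np

def decode_gray_code(captures, threshold):
    """Decode Gray-code structured-light captures into projector columns.

    captures:  array of shape (num_patterns, height, width), one camera
               image per projected stripe pattern, coarsest stripes first.
    threshold: value (or per-pixel image) separating 'lit' from 'unlit'
               pixels, e.g. the mean of an all-white and an all-black capture.
    Returns, for each camera pixel, the projector column illuminating it.
    """
    bits = (captures > threshold).astype(np.uint32)  # binarize each capture
    # Gray code -> binary: b[0] = g[0], b[i] = b[i-1] XOR g[i]
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = binary[i - 1] ^ bits[i]
    # Assemble the binary digits into projector column indices:
    columns = np.zeros(bits.shape[1:], dtype=np.uint32)
    for i in range(binary.shape[0]):
        columns = (columns << 1) | binary[i]
    return columns
```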

Continue reading