Visual Simulation of the Resolution of VR Headsets

While I was writing up yesterday’s article on the PlayStation VR headset’s optical properties, specifically while making the example images for the sub-section comparing RGB Stripe and PenTile RGBG sub-pixel layouts, it occurred to me that I could use those images to create a rough and simple simulator to visually evaluate the differences between VR headsets with different resolutions and sub-pixel layouts.

The basic idea is straightforward: take a test image with some pixel count, for example W×H = 640×360, as the initial low-resolution full-RGB picture seen in Figure 1. If you then blow up that image to fill a monitor that has the same aspect ratio (16:9) and some diagonal size D, the resolution of that image in terms of pixels per degree depends on both D and the viewer’s distance Y from the monitor: the larger the ratio Y/D, the higher the image’s resolution as seen by the viewer. In detail, the formula for the distance Y that achieves a desired resolution R in pixels/° is:

Y = (D / sqrt(W*W + H*H))/(2*tan(1/(2*R)))

where W and H are in pixel units, R is in pixels/°, and D is in some arbitrary length unit (inch, meter, parsec, …); Y will end up in the same unit as D. Note that the argument of the tangent, 1/(2*R), is an angle in degrees.
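
For concreteness, here is a quick Python sketch of the formula (the 27″ monitor diagonal is just an example value, and 11.42 pixels/° is the HTC Vive’s center resolution quoted further down this page):

```python
import math

def viewing_distance(diag, width, height, res):
    """Distance at which an image of width x height pixels, blown up to
    fill a monitor with diagonal size diag, appears at a resolution of
    res pixels/degree. The result is in the same unit as diag."""
    pixel_size = diag / math.sqrt(width**2 + height**2)
    # One pixel must subtend an angle of 1/res degrees:
    return pixel_size / (2.0 * math.tan(math.radians(1.0 / (2.0 * res))))

# Example: the 640x360 test image on a 27" monitor, viewed such that it
# appears at 11.42 pixels/degree:
print(viewing_distance(27.0, 640, 360, 11.42))  # ~24 inches
```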

If a viewer then positions one of their eyes at a distance of Y from the center of the monitor and closes the other one, the resolution of the image on the monitor will be R. In other words, if R is the known resolution of some VR headset, the image on the monitor will appear at the same resolution as that headset’s display.

There is one caveat: when looking at a flat monitor, the resolution of the displayed image increases away from the monitor’s center, while in a VR headset, the resolution generally decreases away from the center direction (see this article for reference). This means that, for a correct evaluation, the viewer has to focus on the area in the center of the monitor; a numerical check of this effect is sketched below. Unfortunately, there is no easy way to simulate resolution drop-off using a flat monitor, at least not while also simulating sub-pixel layout.
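
To quantify the first half of that caveat: under a simple pinhole model, a pixel at viewing angle θ away from a flat monitor’s center subtends less angle than a central pixel by a factor of cos²θ, so the apparent resolution rises off-center. A small Python sketch, reusing the example numbers from above:

```python
import math

def flat_resolution(eye_dist, pixel_size, angle_deg):
    """Local resolution in pixels/degree at angle_deg away from the
    center of a flat monitor viewed head-on from distance eye_dist.
    A pixel at position x = eye_dist * tan(angle) subtends roughly
    pixel_size * cos(angle)**2 / eye_dist radians."""
    a = math.radians(angle_deg)
    return eye_dist * math.radians(1.0) / (pixel_size * math.cos(a)**2)

# Viewing distance and pixel size from the example above (in inches):
for angle in (0, 10, 20, 30):
    print(angle, round(flat_resolution(24.06, 0.03677, angle), 2))
# 0 -> ~11.4, 10 -> ~11.8, 20 -> ~12.9, 30 -> ~15.2 pixels/degree
```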

Simulation Procedure

Here’s what you need to do: Continue reading

Field of View and Resolution of the PlayStation VR Headset

It has been a very long time since I did the original optical measurement of then-current VR headsets. I have owned a PlayStation VR headset (PSVR from now on) for almost a year now, and I finally got around to measuring its optical properties in the same way. I also developed a new camera calibration algorithm (that’s a topic for another post), meaning I am even more confident in my measurements now than I was then.

One approach to measuring the optical properties of a VR headset, which includes measuring its field of view, its resolution in pixels/°, and its lens distortion correction profile, is to take a series of pictures of the headset’s screen(s) through its lenses using a calibrated wide-angle camera. In this context, a calibrated camera is one where each image pixel’s horizontal and vertical angles away from the optical axis are precisely known.
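
In other words, calibration provides a map from image pixels to angles. For an ideal rectified pinhole camera that map would be as simple as the following Python sketch (the intrinsic parameters fx, fy, cx, cy are hypothetical; a real wide-angle camera needs a fisheye model, but the principle is the same):

```python
import math

def pixel_angles(px, py, fx, fy, cx, cy):
    """Horizontal and vertical angles (in degrees) away from the
    optical axis for image pixel (px, py), for an ideal pinhole camera
    with focal lengths (fx, fy) and principal point (cx, cy), all in
    pixel units."""
    return (math.degrees(math.atan((px - cx) / fx)),
            math.degrees(math.atan((py - cy) / fy)))

# Hypothetical intrinsics for a 1280x960 camera image:
print(pixel_angles(1000.0, 480.0, 600.0, 600.0, 640.0, 480.0))
# -> (~31.0, 0.0): this pixel sees a point 31 degrees right of the axis
```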

If one then displays a test pattern that lets one identify a particular pixel on the screen, one can measure the viewer-relative angular position of that pixel in the camera image, which is all the information needed to generate the projection matrices and lens distortion correction formulas that are essential to high-quality VR rendering.
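
As a sketch of that last step, here is one way such (pixel position, measured angle) pairs could be turned into a distortion model. This is not the actual pipeline used here; it assumes a radially symmetric polynomial model, and the sample numbers are made up:

```python
import numpy as np

# Hypothetical measurements: radial distances r of screen pixels from
# the lens center (in pixels), and the angles theta (in degrees) at
# which the calibrated camera sees those pixels:
r = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
theta = np.array([0.0, 8.8, 17.9, 27.6, 38.2, 50.0])

# Fit an odd polynomial theta(r) = c1*r + c3*r^3 + c5*r^5, a common
# radially symmetric lens distortion model:
A = np.column_stack([r, r**3, r**5])
(c1, c3, c5), *_ = np.linalg.lstsq(A, theta, rcond=None)

# Local display resolution in pixels/degree is the inverse of dtheta/dr:
def resolution(rp):
    return 1.0 / (c1 + 3.0*c3*rp**2 + 5.0*c5*rp**4)

print(resolution(0.0), resolution(400.0))
# Roughly 11.4 pixels/degree at the lens center vs. ~9 at 400 pixels
# out: resolution drops away from the center, as typical for HMD lenses.
```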

Without further ado, here is a series of 7 images taken with the camera lens at increasing distances from the headset’s right lens (Figures 1-7, and yes, I forgot to clean my PSVR’s lens). The camera was carefully positioned and aligned such that it was sliding back along the lens’s optical axis, and looking straight ahead. The first image was captured with an eye relief value of 0mm, meaning that the camera lens was touching the headset’s lens. The rest of the images were captured with increasing eye relief values, or lens-lens distances, of 5mm, 10mm, 15mm, 20mm, 25mm, and 30mm: Continue reading

The Display Resolution of Head-mounted Displays, Revisited

I wrote an article earlier this year in which I looked closely at the physical display resolution of VR headsets, measured in pixels/degree, and how that resolution changes across the field of view of a headset due to non-linear effects from tangent-space rendering and lens distortion. Unfortunately, back then I only did the analysis for the HTC Vive. In the meantime I got access to an Oculus Rift, and was able to extract the necessary raw data from it — after spending some significant effort, more on that later.

With these new results, it is time for a follow-up post where I re-create the HTC Vive graphs from the previous article with a new method that makes them easier to read, and where I compare display properties between the two main PC-based headsets. Without further ado, here are the goods.

HTC Vive

The first two figures, 1 and 2, show display resolution in pixels/° along one horizontal and one vertical cross-section through the lens center of my Vive’s right display.

Figure 1: Display resolution in pixels/° along a horizontal line through the right display’s lens center of an HTC Vive.

Figure 2: Display resolution in pixels/° along a vertical line through the right display’s lens center of an HTC Vive.

Continue reading

The Display Resolution of Head-mounted Displays

What is the real, physical, display resolution of my VR headset?

I have written a long article about the optical properties of (then-)current head-mounted displays, one about projection and distortion in wide-FoV HMDs, and another one about measuring the effective resolution of head-mounted displays, but in none of those have I looked into the actual display resolution, in terms of hard pixels, of those headsets. So it’s about time.

The short answer is, of course, that it depends on your model of headset. But if you happen to have an HTC Vive, then have a look at the graphs in Figures 1 and 2 (the other headsets behave in the same way, but the actual numbers differ). Those figures show display resolution, in pixels/°, along two lines (horizontal and vertical, respectively) going through the center of the right lens of my own Vive. The red, green, and blue curves show resolution for the red, green, and blue primary colors, respectively, determined this time not by my own measurements, but by analyzing the display calibration data that is measured for each individual headset at the factory and then stored in its firmware.

Figure 1: Resolution in pixels/° along a horizontal line through my Vive’s right lens center, for each of its 1080 horizontal pixels, for the three primary colors (red, green, and blue).

Figure 2: Resolution in pixels/° along a vertical line through my Vive’s right lens center, for each of its 1200 vertical pixels, for the three primary colors (red, green, and blue).

At this point you might be wondering why those graphs look so strange, but for that you’ll have to read the long answer. Before going into that, I want to throw out a single number: at the exact center of my Vive’s right lens (at pixel 492, 602), the resolution for the green color channel is 11.42 pixels/°, in both the horizontal and vertical directions. If you wanted to quote a single resolution number for a headset, that’s the one I would go with, because it’s what you get when you look at something directly ahead and far away. However, as Figures 1 and 2 clearly show, no single number can tell the whole story.
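
As a first hint at why the graphs look the way they do, here is a back-of-the-envelope Python sketch: if the Vive’s 1080 horizontal pixels were spread across its roughly 110° horizontal field of view by an ideal, distortion-free planar projection, the center resolution would be far lower than the measured value; the lens concentrates pixels toward the center:

```python
import math

def center_resolution_planar(num_pixels, fov_deg):
    """Resolution in pixels/degree at the center of an idealized planar
    (pinhole) display spanning num_pixels across fov_deg degrees,
    ignoring lens distortion entirely."""
    half_tan = math.tan(math.radians(fov_deg / 2.0))
    # Pixel position follows x = f * tan(theta); at the center,
    # dx/dtheta = f pixels per radian:
    f = (num_pixels / 2.0) / half_tan
    return f * math.radians(1.0)

# 1080 horizontal pixels spread over a ~110 degree horizontal FoV:
print(center_resolution_planar(1080, 110.0))  # ~6.6 pixels/degree,
# versus the measured 11.42 pixels/degree at the lens center.
```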

And now for the long answer. Buckle in, Trigonometry and Calculus ahead. Continue reading

How Does VR Create the Illusion of Reality?

I’ve recently written a loose series of articles trying to explain certain technical aspects of virtual reality, such as what the lenses in VR headsets do, or why there is some blurriness, but I haven’t — or at least haven’t in a few years — tackled the big question:

How do all the technical components of VR headsets, e.g., screens, lenses, tracking, etc., actually come together to create realistic-looking virtual environments? Specifically, why do virtual environments in VR look more “real” than when viewed via other media, for example panoramic video?

The reason I’m bringing this up again is that the question keeps getting asked, and that it’s really kinda hard to answer. Most attempts to answer it fall back on technical aspects, such as stereoscopy, head tracking, etc., but I find that this approach somewhat misses the point by focusing on individual components, or at least gets mired in technical details that don’t make much sense to those who have to ask the question in the first place.

I prefer to approach the question from the opposite end: not through what VR hardware produces, but instead through how the viewer perceives 3D objects and/or environments, and how either the real world on the one hand, or virtual reality displays on the other, create the appropriate visual input to support that perception.

The downside of that approach is that it doesn’t lend itself to short answers. In fact, last summer I gave a 25-minute talk about this exact topic at the 2016 VRLA Summer Expo. It may not be news, but I haven’t linked this video from here before, and it’s probably still timely:

Continue reading

Projection and Distortion in Wide-FoV HMDs

There is an ongoing, but already highly successful, Kickstarter campaign for a new VR head-mounted display with a wide (200°) field of view (FoV): Pimax 8k. As I have not personally tried this headset (only its little brother, Pimax 4k, at the 2017 SVVR Expo), I cannot discuss and evaluate all the campaign’s promises. Instead, I want to focus on one particular issue that’s causing a bit of confusion and controversy at the moment.

Early reviewers of Pimax 8k prototypes noticed geometric distortion, such as virtual objects not appearing in the correct places and shifting under head movement, and the campaign responded by claiming that these distortions “could be fixed by improved software or algorithms” (paraphrased). The ensuing speculation about the causes of, and potential fixes for, this distortion has mostly been based on wrong assumptions and misunderstandings of how geometric projection for wide-FoV VR headsets is supposed to work. Adding fuel to the fire, the campaign released a frame showing “what is actually rendered to the screen” (see Figure 1), causing further confusion. The problem is that the frame looks obviously distorted, but that this obvious distortion is not what the reviewers were complaining about. On the contrary, this is what a frame rendered to a high-FoV VR headset should look like. At least, if one ignores lenses and lens distortion, which is what I will continue to do for now.

Figure 1: Frame as rendered to one of the Pimax 8k’s screens, according to the Kickstarter campaign. (Probably not 100% true, as this appears to be a frame submitted to SteamVR’s compositor, which subsequently applies lens distortion correction.)
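
To see why a frame rendered with a standard planar (pinhole) projection must look stretched toward its edges, consider how such a projection allocates image width: the width needed per degree grows with the tangent of the off-axis angle. A quick Python sketch:

```python
import math

def tangent_span(theta0_deg, theta1_deg):
    """Width, in tangent-plane units, of the angular band between
    theta0 and theta1 degrees off-axis in a planar projection."""
    return (math.tan(math.radians(theta1_deg))
            - math.tan(math.radians(theta0_deg)))

# Image width consumed by the central 10 degrees, versus the band
# between 70 and 80 degrees off-axis:
central = tangent_span(0.0, 10.0)
edge = tangent_span(70.0, 80.0)
print(edge / central)  # ~16.6: the outer band needs ~17x the pixels
```

This also shows why no single planar projection can cover a 200° field of view at all: tan(θ) diverges at 90° off-axis.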

Continue reading

Measuring the Effective Resolution of Head-mounted Displays

Why does everything in my VR headset look so pixelated? It’s supposed to be using a 2160×1200 screen, but my 1080p desktop monitor looks so much sharper!

This is yet another fundamental question about VR that pops up over and over again, and like the others I have addressed previously, it leads to interesting deeper observations. So, why do current-generation head-mounted displays appear so low-resolution?

Here’s the short answer: In VR headsets, the screen is blown up to cover a much larger area of the user’s field of vision than in desktop settings. What counts is not the total number of pixels, and especially not the display’s resolution in pixels per inch, but the resolution of the projected virtual image in pixels per degree, as measured from the viewer’s eyes. A 20″ desktop screen, when viewed from a typical distance of 30″, covers 37° of the viewer’s field of vision, diagonally. The screen (or screens) inside a modern VR headset covers a much larger area. For example, I measured the per-eye field of view of the HTC Vive as around 110°×113° under ideal conditions, or around 130° diagonally (it’s complicated), or three and a half times as much as that of the 20″ desktop monitor. Because a smaller number of pixels (1080×1200 per eye) is spread out over a much larger area, each pixel appears much bigger to the viewer.
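
A small Python sketch of that comparison, assuming the 1080p desktop monitor from the question above (the numbers are rough averages, since they ignore the flat screen’s off-center resolution increase):

```python
import math

def diagonal_fov(diag, dist):
    """Diagonal field of view, in degrees, of a flat screen with
    diagonal size diag, viewed head-on from distance dist."""
    return math.degrees(2.0 * math.atan(diag / (2.0 * dist)))

print(diagonal_fov(20.0, 30.0))  # the 20" monitor at 30": ~36.9 degrees

# Average diagonal resolution in pixels/degree, monitor vs. HTC Vive:
monitor = math.hypot(1920, 1080) / diagonal_fov(20.0, 30.0)
vive = math.hypot(1080, 1200) / 130.0  # per eye, ~130 degrees diagonal
print(round(monitor), round(vive))     # ~60 vs. ~12 pixels/degree
```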

Now for the long answer.

Continue reading

Accommodation and Vergence in Head-mounted Displays

Why do virtual objects close to my face appear blurry when wearing a VR headset? My vision is fine!

And why does the real world look strange immediately after a long VR session?

These are another two (related ones) of those frequently asked questions about VR and head-mounted displays (HMDs) that I promised to address a while back.

Here’s the short answer: In all currently-available HMDs, the screens creating the virtual imagery are at a fixed optical distance from the user. But our eyes have evolved to automatically adjust their optical focus based on the perceived distance to objects, virtual or real, that they are looking at. So when a virtual object appears to be mere inches in front of the user’s face, but the screens showing images of that object are — optically — several meters away, the user’s eyes will focus on the wrong distance, and as a result, the virtual object will appear blurry (the same happens, albeit less pronounced, when a virtual object appears to be very far away). This effect is called accommodation-vergence conflict, and besides being a nuisance, it can also cause eye strain or headaches during prolonged VR sessions, and can cause vision problems for a short while after such sessions.
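
The mismatch is commonly quantified in diopters (1/meters), as the difference between the distance at which the eyes converge and the distance at which they must focus. A small Python sketch, assuming a hypothetical fixed optical screen distance of 1.5 m (actual values vary by headset):

```python
def va_conflict(screen_dist_m, object_dist_m):
    """Accommodation-vergence conflict in diopters: the difference
    between the accommodation demand of the fixed-focus screens and
    the vergence demand of the virtual object (both in meters)."""
    return abs(1.0 / screen_dist_m - 1.0 / object_dist_m)

# Screens at an assumed fixed optical distance of 1.5 m:
print(va_conflict(1.5, 0.2))    # object 20 cm from the face: ~4.3 diopters
print(va_conflict(1.5, 100.0))  # object far away: ~0.66 diopters
```

As the numbers show, the conflict is much larger for objects close to the face than for distant ones, matching the observation above.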

Now for the long answer.

Continue reading

A HoloArticle

Here is an update on my quest to stay on top of all things “holo”: HoloLamp and RealView “Live Holography.” While the two really have nothing to do with each other, both claim the “holo” label with varying degrees of legitimacy, and both happened to pop up recently.

HoloLamp

At its core, HoloLamp is a projection mapping system somewhat similar to the AR Sandbox, i.e., a combination of a set of cameras scanning a projection surface and a viewer’s face, and a projector drawing a perspective-correct image, from the viewer’s point of view, onto said projection surface. The point of HoloLamp is to project images of virtual 3D objects onto arbitrary surfaces, to achieve effects like the Millennium Falcon’s holographic chess board in Star Wars: A New Hope. Let’s see how it works, and how it falls short of this goal.

Creating convincing virtual three-dimensional objects via projection is a core technology of virtual reality, specifically the technology that is driving CAVEs and other screen-based VR displays. To create this illusion, a display system needs to know two things: the exact position of the projection surface in 3D space, and the position of the viewer’s eyes in the same 3D space. Together, these two provide just the information needed to set up the correct perspective projection. In CAVEs et al., the position of the screen(s) is fixed and precisely measured during installation, and the viewer’s eye positions are provided via real-time head tracking.
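
For reference, here is a minimal numpy sketch of that kind of projection setup, following Robert Kooima’s well-known “generalized perspective projection” formulation. It is a sketch under stated assumptions, not HoloLamp’s or any particular CAVE’s actual code:

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Perspective projection matrix for a planar screen with corners
    pa (lower left), pb (lower right), pc (upper left), viewed from eye
    position pe; all points are 3D arrays in tracking coordinates."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # toward viewer
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)            # distance from eye to screen plane
    # Frustum extents on the near plane:
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    # Standard OpenGL-style asymmetric frustum matrix:
    P = np.array([
        [2*near/(r-l), 0.0, (r+l)/(r-l), 0.0],
        [0.0, 2*near/(t-b), (t+b)/(t-b), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0]])
    # Rotate world into the screen-aligned frame, and move the tracked
    # eye position to the origin:
    M = np.array([[*vr, 0.0], [*vu, 0.0], [*vn, 0.0], [0, 0, 0, 1.0]])
    T = np.eye(4); T[:3, 3] = -pe
    return P @ M @ T

# Example: a screen 2 m wide and 1.5 m tall, 1.5 m in front of the
# tracking origin, viewed from an eye 0.2 m right of center:
pa = np.array([-1.0, -0.75, -1.5])
pb = np.array([ 1.0, -0.75, -1.5])
pc = np.array([-1.0,  0.75, -1.5])
print(off_axis_projection(pa, pb, pc, np.array([0.2, 0.0, 0.0]), 0.1, 10.0))
```

Replacing the fixed screen corners with a scanned surface model, and the tracked eye position with a face-tracking estimate, gives essentially the setup HoloLamp needs.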

As one goal of HoloLamp is portability, it cannot rely on pre-installation and manual calibration. Instead, HoloLamp scans and creates a 3D model of the projection surface when turned on (or asked to do so, I guess). It does this by projecting a sequence of patterns, and observing the perspective distortion of those patterns with a camera looking in the projection direction. This is a solid and well-known technology called structured-light 3D scanning, and can be seen in action at the beginning of this HoloLamp video clip. To extract eye positions, HoloLamp uses an additional set of cameras looking upwards to identify and track the viewer’s face, probably using off-the-shelf face tracking algorithms such as the Viola-Jones filter. Based on that, the software can project 3D objects using one or more projection matrices, depending on whether the projection surface is planar or not. The result looks very convincing when shot through a regular video camera:

Continue reading

Optical Properties of Current VR HMDs

With the first commercial version of the Oculus Rift (Rift CV1) now trickling out of warehouses, and Rift DK2, HTC Vive DK1, and Vive Pre already being in developers’ hands, it’s time for a more detailed comparison between these head-mounted displays (HMDs). In this article, I will look at these HMDs’ lenses and optics in the most objective way I can, using a calibrated fish-eye camera (see Figures 1, 2, and 3).

Figure 1: Picture from a fisheye camera, showing a checkerboard calibration target displayed on a 30″ LCD monitor.

Figure 2: Same picture as Figure 1, after rectification. The purple lines were drawn into the picture by hand to show the picture’s linearity after rectification.

Figure 3: Rectified picture from Figure 2, re-projected into stereographic projection to simplify measuring angles. Concentric purple circles indicate 5-degree increments away from the projection center point.
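
In a stereographic image, the relationship between an image point’s distance from the projection center and its off-axis angle is a simple closed form, which is what makes angle measurements easy. A small Python sketch (the focal length is a hypothetical value, not that of the actual camera):

```python
import math

def stereographic_angle(r_pixels, f_pixels):
    """Off-axis angle, in degrees, of an image point at radial distance
    r_pixels from the projection center of a stereographic image with
    focal length f_pixels, using the model r = 2 * f * tan(theta / 2)."""
    return math.degrees(2.0 * math.atan(r_pixels / (2.0 * f_pixels)))

# With a hypothetical focal length of 400 pixels, points at 100, 200,
# and 300 pixels from the center lie at these off-axis angles:
for r in (100.0, 200.0, 300.0):
    print(round(stereographic_angle(r, 400.0), 1))  # 14.3, 28.1, 41.1
```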

Continue reading