It has been a very long time since I did the original optical measurement of then-current VR headsets. I have owned a PlayStation VR headset (PSVR from now on) for almost a year now, and I finally got around to measuring its optical properties in the same way. I also developed a new camera calibration algorithm (that’s a topic for another post), meaning I am even more confident in my measurements now than I was then.
One approach to measuring the optical properties of a VR headset, which includes measuring its field of view, its resolution in pixels/°, and its lens distortion correction profile, is to take a series of pictures of the headset’s screen(s) through its lenses using a calibrated wide-angle camera. In this context, a calibrated camera is one where each image pixel’s horizontal and vertical angles away from the optical axis are precisely known.
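To make that notion concrete, here is a minimal sketch of such a pixel-to-angle mapping, assuming an idealized equidistant model where angle is proportional to distance from the principal point. The function name and all numbers are hypothetical; a real calibration replaces the linear model with a measured, non-linear one:

```python
def pixel_to_angles(px, py, cx, cy, pixels_per_degree):
    """Map an image pixel to (horizontal, vertical) angles in degrees away
    from the camera's optical axis. Idealized equidistant model: angle is
    proportional to pixel distance from the principal point (cx, cy)."""
    return ((px - cx) / pixels_per_degree,
            (py - cy) / pixels_per_degree)

# Hypothetical camera: principal point at (1000, 1000), 10 pixels per degree
print(pixel_to_angles(1500.0, 1000.0, 1000.0, 1000.0, 10.0))  # → (50.0, 0.0)
```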
If one then displays a test pattern that lets one identify a particular pixel on the screen, one can measure the viewer-relative angular position of that pixel in the camera image, which is all the information needed to generate the projection matrices and lens distortion correction formulas that are essential to high-quality VR rendering.
Without further ado, here is a series of 7 images taken with the camera lens at increasing distances from the headset’s right lens (Figures 1-7, and yes, I forgot to clean my PSVR’s lens). The camera was carefully positioned and aligned such that it was sliding back along the lens’s optical axis, and looking straight ahead. The first image was captured with an eye relief value of 0mm, meaning that the camera lens was touching the headset’s lens. The rest of the images were captured with increasing eye relief values, or lens-lens distances, of 5mm, 10mm, 15mm, 20mm, 25mm, and 30mm:
Interestingly, the virtual image of the screen (see Head-mounted Displays and Lenses) becomes larger as eye relief increases from 0mm to 10mm, and then becomes smaller again as more parts of the screen become occluded by the rim of the lens. As a result, the eye relief value that maximizes visible field of view is 10mm.
Field Of View
The most common way to report field of view sizes is via horizontal and vertical extents. While the actual field of view is not a rectangle, but a bowtie shape at close eye reliefs and a circle at farther eye reliefs, I nonetheless followed that approach by measuring the extents of the screen’s virtual image exactly to the left, right, bottom, and top of the optical axis (see Figure 8). These are also exactly the values needed to set up the headset’s projection matrix. The following table lists my measurements:
|Eye Relief||0mm||5mm||10mm||15mm||20mm||25mm|
|Left Horizontal FoV||43.2°||45.1°||45.9°||42.3°||35.6°||30.7°|
|Right Horizontal FoV||44.7°||47.0°||48.1°||48.1°||43.7°||38.4°|
|Monocular Horizontal FoV||88.0°||92.1°||94.0°||90.5°||79.3°||69.1°|
|Total Horizontal FoV||89.4°||94.0°||96.2°||96.3°||87.4°||76.9°|
|Bottom Vertical FoV||50.6°||54.4°||56.8°||49.4°||43.5°||38.2°|
|Top Vertical FoV||48.4°||52.1°||54.5°||49.0°||43.0°||38.1°|
|Total Vertical FoV||99.0°||106.6°||111.4°||98.3°||86.5°||76.3°|
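As a sketch of how those per-direction angles translate into a projection matrix, the following builds a standard OpenGL-style off-axis frustum from the four measured half-angles. The near/far plane values are arbitrary choices of mine, and a real headset runtime would additionally fold in lens distortion correction:

```python
import math
import numpy as np

def fov_projection(left_deg, right_deg, bottom_deg, top_deg, near, far):
    """Build an OpenGL-style off-axis projection matrix from the four
    per-direction field-of-view half-angles in degrees."""
    l = -math.tan(math.radians(left_deg)) * near
    r = math.tan(math.radians(right_deg)) * near
    b = -math.tan(math.radians(bottom_deg)) * near
    t = math.tan(math.radians(top_deg)) * near
    return np.array([
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,          -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,          -1.0,                   0.0]])

# PSVR right-lens angles at 10mm eye relief, from the table above
P = fov_projection(45.9, 48.1, 56.8, 54.5, 0.1, 100.0)
```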
As can be seen in this table, at optimal eye relief, PSVR has one of the larger total fields of view among the common VR headsets at 96.2°x111.4°.
Resolution

Given that PSVR has one of the larger fields of view among common headsets, and one of the lowest pixel counts at 960×1080 per eye, one would expect that it has one of the lower resolutions, where resolution, in this context, is angular resolution as seen from a user’s eyes, measured in pixels per degree.
One can calculate a headset’s resolution from a calibrated through-the-lens camera image by measuring the angular positions of two close display pixels, and then dividing the difference in angles by the pixels’ distance on the screen’s pixel grid. Given that the angular resolution of VR headsets changes with distance from the lens’s optical axis, this calculation should be done at or close to the optical axis.
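In code, the calculation amounts to a single division; the numbers below are hypothetical examples, not my actual measurements:

```python
def angular_resolution(angle1_deg, angle2_deg, grid_distance):
    """Angular resolution in pixels per degree: the distance between two
    display pixels on the screen's pixel grid, divided by the angular
    distance between them as seen through the lens."""
    return grid_distance / abs(angle2_deg - angle1_deg)

# Two pixels 10 grid steps apart, observed 0.95 degrees apart (made-up values):
print(angular_resolution(-0.47, 0.48, 10))  # ≈ 10.53 pixels/°
```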
I measured four pixel pairs around the lens center: two horizontal pairs (left and right of center), and two vertical pairs (top and bottom of center), and then averaged the results, yielding a resolution of 10.5 pixels/°. Given that the test pattern I used is entirely green, to avoid problems from chromatic aberration, this is the green-channel resolution at the center of the lens. For comparison, I measured the HTC Vive’s green-channel center-lens resolution as 11.43 pixels/° (110°x113° FoV, 1080×1200 pixels per eye), that of the Oculus Rift CV1 as 13.85 pixels/° (94°x93° FoV, 1080×1200 pixels per eye), and that of the HTC Vive Pro as 15.7 pixels/° (105°x108° FoV, 1440×1600 pixels per eye).
Due to chromatic aberration, or, more fundamentally, because lenses bend light differently depending on the light’s wavelength, the resolution of PSVR’s red channel is slightly higher than 10.5 pixels/°, and that of its blue channel slightly lower, but not by an appreciable amount.
I mentioned above that PSVR’s red and blue channel resolutions are practically the same as its green channel resolution, and that is where comparisons to other headsets get interesting. Unlike the HTC Vive, Vive Pro, and Oculus Rift CV1 referenced above, PSVR has three sub-pixels (red, green, and blue) for each display pixel; the other three have only two sub-pixels per display pixel. Vive et al. use a so-called “PenTile RGBG” sub-pixel layout, where every pixel has a green sub-pixel, every odd-numbered pixel has a red sub-pixel, and every even-numbered pixel has a blue sub-pixel (see Figure 9).
To demonstrate the visual difference between an RGB Stripe layout and a PenTile RGBG layout, consider the images in Figures 10-12. Figure 10 is a low-resolution (640×360 pixels) photograph of Half Dome, and Figures 11 and 12 are simulations of how that image would be represented on an RGB Stripe display and on a PenTile RGBG display, respectively. To that end, each pixel of the image in Figure 10 is blown up to a 3×3 pixel square, broken up into either three (Figure 11) or two (Figure 12) sub-pixels. For the best comparison, save the three images and display them full-screen on your monitor and cycle between them.
The reason why the image in Figure 10 appears brighter than the ones in Figures 11 and 12 is that the former has a full RGB color value for each pixel and uses all sub-pixels of your monitor, while the latter two simulate a lower-resolution display where each of your monitor’s pixels represents one sub-pixel, meaning that only one color channel of each of your monitor’s pixels is lit up. Additionally, the slight tone difference between the RGB and RGBG images, when full-screened, is due to your monitor’s gamma response curve. There would be no tone difference between real RGB and RGBG displays.
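The blow-up procedure described above can be sketched as follows. This is my reconstruction, not the original script, and the exact sub-pixel positions within each 3×3 block are assumptions:

```python
import numpy as np

def simulate_rgb_stripe(img):
    """Blow up each pixel of an H x W x 3 float image into a 3x3 block of
    monitor pixels with three vertical sub-pixel columns (red, green, blue)."""
    h, w, _ = img.shape
    up = np.repeat(img, 3, axis=0)        # triple each row
    out = np.zeros((h * 3, w * 3, 3))
    out[:, 0::3, 0] = up[:, :, 0]         # left column: red
    out[:, 1::3, 1] = up[:, :, 1]         # middle column: green
    out[:, 2::3, 2] = up[:, :, 2]         # right column: blue
    return out

def simulate_pentile(img):
    """Same blow-up for a PenTile RGBG layout: every pixel gets a green
    sub-pixel, odd-numbered pixels a red one, even-numbered ones a blue
    one. Assumes an even image width."""
    h, w, _ = img.shape
    up = np.repeat(img, 3, axis=0)
    out = np.zeros((h * 3, w * 3, 3))
    out[:, 1::3, 1] = up[:, :, 1]         # green in every pixel
    out[:, 3::6, 0] = up[:, 1::2, 0]      # red in odd-numbered pixels
    out[:, 0::6, 2] = up[:, 0::2, 2]      # blue in even-numbered pixels
    return out
```

Note how the PenTile version lights only two sub-pixels per blown-up pixel, which is why its simulated image looks dimmer and coarser in the red and blue channels.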
The quantitative difference between a display with RGB Stripe sub-pixel layout and a display with PenTile RGBG sub-pixel layout is that in the latter, the resolution of the red and blue channels is reduced relative to the green channel (the green channel itself has the same resolution as on an RGB Stripe display with the same pixel count). In more detail, the red and blue sub-pixels form a checkerboard pattern, and therefore lie on sub-pixel grids that are rotated by 45° relative to the green channel’s grid, and coarser by a factor of √2, because their grid lines are formed by the diagonals of the green grid (see Figures 13-15).
In other words, while, for example, the Vive’s green channel resolution is 11.43 pixels/°, the resolutions of its red and blue channels are 11.43/√2 pixels/° = 8.08 pixels/° (or 9.79 pixels/° for Rift CV1 or 11.10 pixels/° for Vive Pro).
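Applying that √2 reduction to each headset’s measured green-channel resolution reproduces the numbers above:

```python
import math

# Red/blue channel resolution of a PenTile RGBG display is the green-channel
# resolution divided by sqrt(2), per the grid geometry described above
for name, green_ppd in [("HTC Vive", 11.43), ("Rift CV1", 13.85), ("Vive Pro", 15.7)]:
    print(f"{name}: {green_ppd / math.sqrt(2):.2f} pixels/deg")
# → HTC Vive: 8.08 pixels/deg
# → Rift CV1: 9.79 pixels/deg
# → Vive Pro: 11.10 pixels/deg
```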
What, then, is the “overall” resolution of a PenTile RGBG display? That, unfortunately, depends: if the displayed image is monochromatic green, the overall resolution is that of the green channel; if the image is monochromatic red or blue, the overall resolution is reduced by a factor of √2. If both the red and blue channels are used, the resolution increases because the two channels’ grids are offset with respect to each other, and it increases again if the green channel is added as well. The best way to compare headsets with different sub-pixel layouts is to measure their resolutions as perceived by the user, but that is a somewhat more involved process. In the meantime, I recommend looking at Figures 10-12 again (saved and full-screened), and comparing the difference in detail between the primarily green image regions and the primarily white or blue ones. When full-screened, the images in Figures 10-12 all have identical green-channel resolutions.
From a purely subjective point of view, I would say that the PSVR’s image has about the same, or a slightly higher, overall resolution than the HTC Vive’s in typical circumstances (i.e., not looking at a monochromatic green image). In addition, PSVR exhibits noticeably less screen-door effect than HTC Vive, due to some extra magic Sony engineers added to the display system that is beyond the scope of this article.
Effects of Inter-Pupillary Distance
Unlike Vive, Vive Pro, and Rift CV1, PSVR has fixed lenses and a single screen, meaning there is no way to adjust for differences in users’ inter-pupillary distances (IPDs) in hardware. To evaluate the impact of different IPDs, I captured a sequence of calibrated through-the-lens images from positions offset from the lens’s optical axis, at an eye relief value of 15mm, where the field of view is limited partially by the screen and partially by the lens:
|Lens Center Offset||-4mm||-2mm||0mm||+2mm||+4mm|
|Left Horizontal FoV||32.9°||37.7°||42.3°||45.2°||45.8°|
|Right Horizontal FoV||48.4°||48.6°||48.1°||46.3°||43.0°|
|Monocular Horizontal FoV||81.2°||86.2°||90.5°||91.6°||88.8°|
|Total Horizontal FoV||96.7°||97.1°||96.3°||92.7°||86.0°|
As can be seen, the single-eye horizontal field of view is largest when the eye is about 2mm outside the optical axis, or, in other words, when the user’s IPD is 4mm larger than PSVR’s lens distance. This is due to the asymmetric shape of the lens. For the same reason, the total horizontal field of view stays about the same for smaller IPDs, and drops for larger IPDs. As would be expected, the binocular overlap between the eyes increases consistently with increasing IPD. Note that for an IPD 8mm above the lens distance, the binocular overlap is actually larger than the total horizontal field of view. That means the user’s left eye can see farther to the right than the user’s right eye, which, for me personally, leads to a strange claustrophobic effect, as if wearing blinders. The same was true for the Oculus Rift Development Kit 2, whose screen was too narrow for its lens separation. For many people, this made the Rift DK2’s field of view feel smaller than it actually was.
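Assuming the left eye mirrors the right, binocular overlap is twice the nasal (here: left) field of view of the right lens, and total horizontal field of view is twice the temporal (right) one. The offset table above then yields:

```python
# Single-lens measurements at 15mm eye relief, from the table above.
# Offset in mm is the eye's position outward from the lens's optical axis,
# i.e. the user's IPD exceeds the lens distance by twice the offset.
measurements = {  # offset: (nasal/left FoV, temporal/right FoV) in degrees
    -4: (32.9, 48.4), -2: (37.7, 48.6), 0: (42.3, 48.1),
     2: (45.2, 46.3),  4: (45.8, 43.0),
}
for offset, (nasal, temporal) in measurements.items():
    # Mirror-symmetry assumption: left eye sees the mirror image of the right
    overlap, total = 2 * nasal, 2 * temporal
    print(f"{offset:+d}mm: overlap {overlap:.1f} deg, total {total:.1f} deg")
```

At +4mm offset (IPD 8mm above lens distance), this gives an overlap of 91.6° against a total of 86.0°, matching the blinders-like situation described above.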