A New AR Sandbox Support Forum

Apparently, the AR Sandbox is still a thing and going strong after ten years, with over 850 registered installations worldwide according to the AR Sandbox World Map. There was a lull in new installations and community activity during the initial COVID-19 lockdowns, but things are picking up again, and with that I am seeing an increasing number of requests for help arriving in my personal email.

An AR Sandbox.

The old AR Sandbox support forum was quite active and significantly reduced my support load, not only by allowing me to answer common questions once instead of dozens of times, but also by letting community members help each other directly. Unfortunately, it went down due to hardware problems a good while ago, and there is currently no way to get it back up.

So I decided to create a new AR Sandbox support forum on this here web site, as a hopefully temporary replacement. I was not able to move over any of the old forum content due to not having access to the original database files, which is a major pity because there was a ton of helpful stuff on there. I am hoping that the new forum will accumulate its own set of helpful stuff quickly, and if/when I migrate the forum to a permanent location, I will be able to move all content because I have full access to this web site’s code and database. So here’s hoping.

This is the first forum on this web site, so I hope that things will work right from the start; if not, we’ll figure out how to fix it. Please be patient.

And as a quick reminder: These are the only official AR Sandbox installation instructions. Accept no substitutes.

A Question About VR Headset Resolution

I received a question via reddit a few moments ago, and I think the answer might be of general interest, so I decided to answer it here:

“Would you happen to know the effective or perceived resolution of the [Valve Index headset] when viewing a 50″ virtual screen from say.. 5 feet away? Do you think its equivalent to a 50″ 1080p tv from 5 ft away yet? I was also wondering why when I look at close up objects on the index that I can see basically no screen door effect, but when looking into the distance at the sky then suddenly the sde becomes very noticeable.”

Okay, so that’s actually two questions. Let’s start with the first one, and do the math.

The first thing we have to figure out is the resolution of a 50″ 1080p TV from 5 feet away. That’s pretty straightforward: a 1080p TV has 1920 pixels horizontally and 1080 pixels vertically. Meaning, it has √(1920² + 1080²) = 2202.9 pixels along the diagonal, and – assuming the pixels are square – a pixel size of 50″/2202.9 = 0.0227″. Next we have to figure out the angle subtended by one of those pixels, when seen from 5 feet away. That’s α = tan⁻¹(0.0227″/(5⋅12″)) = 0.0217°. Inverting that number yields the TV’s resolution as 46.14 pixels/°.

Figuring out a VR headset’s resolution is more complex, and I still haven’t measured a Valve Index, but I estimate its resolution in the forward direction somewhere around 15 pixels/°. That means the resolution of the hypothetical 50″ TV, viewed from 5 feet away, is approximately three times as high as the resolution of a Valve Index. The interested reader can simulate the perceived resolution of a VR headset of known resolution by following the steps in this article.
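For the record, here is the arithmetic above as a small Python sketch (the ~15 pixels/° figure for the Valve Index is my rough estimate from the text, not a measurement):

    import math

    # Angular resolution of a 50″ 16:9 1080p TV viewed from 5 feet (60″) away:
    diag_pixels = math.hypot(1920, 1080)                # ≈ 2202.9 pixels along the diagonal
    pixel_size = 50.0 / diag_pixels                     # ≈ 0.0227″ per pixel
    alpha = math.degrees(math.atan(pixel_size / 60.0))  # ≈ 0.0217° subtended per pixel
    tv_resolution = 1.0 / alpha                         # ≈ 46.1 pixels/°

    # Compared to the rough ~15 pixels/° estimate for the Valve Index:
    print(tv_resolution / 15.0)                         # ≈ 3.1, i.e., about three times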

The second question is about screen-door effect (SDE). As shown in Figure 1, SDE is a high-frequency grid superimposed onto a low-frequency (low-resolution) pixel grid, and it is that high-frequency component which makes it so noticeable and annoying. But why does it become less noticeable, or even disappear, when viewing virtual objects that are close to the viewer? That’s vergence-accommodation conflict rearing its typically ugly, but in this case beneficial, head. When viewing a close-by virtual object, the viewer’s eyes accommodate to focus at a close distance, but the virtual image shown by the VR headset is still at its fixed distance, somewhere around 1.5‒2m depending on headset model. Meaning, the image will be somewhat blurred, and SDE, being a high-frequency signal, is affected much more by that blur than the lower-frequency actual image signal.

Figure 1: Low resolution vs. screen-door effect (SDE).

A Clarification About “Black Smear”

Here’s another frequently-asked question about VR headsets, or at least those that use LED-based displays:

Why does my headset show dark grey when it’s supposed to show black? Shouldn’t LED displays be able to show perfect blacks?

I addressed this in detail a long time ago, but the question keeps popping up, and it is often answered like the following: “LED display pixels have a memory effect when they are turned off completely, which causes ‘black smear.’ This can be avoided by never turning them off completely.”

Unfortunately, that answer is mostly wrong. LED display pixels do have a memory effect (for reasons too deep to get into right now), but it is not due to being turned off completely. The obvious counter argument is that, in the low-persistence displays used in all LED-based headsets, all display pixels are completely turned off for around 90% of the time anyway, no matter how brightly they are turned on during their short duty cycle. That’s what “low persistence” means. So having them completely turned off during their 1ms or so duty cycles as well won’t suddenly cause a memory effect.

The real answer is mathematics. In a slightly simplified model, the memory effect of LED displays has the following structure: if some pixel is set to brightness b₁ in one frame, and set to brightness b₂ in the next frame, it will only “move” by a certain fraction of the difference, i.e., its resulting effective brightness in the next frame will not be b₂ = b₁ + (b₂ – b₁), but b₂′ = b₁ + (b₂ – b₁)⋅s, where s, the “smear factor,” is a number between zero and one (it’s usually around 0.9 or so).

For example, if b₁ was 0.1 (let’s measure brightness from 0 = completely off to 1 = fully lit), b₂ is 0.7, and s = 0.8, then the pixel’s effective brightness in frame 2 is b₂′ = 0.1 + (0.7 – 0.1)⋅0.8 = 0.58, so too dark by 17%. This manifests as a darkening of bright objects that move into previously dark areas (“black smear”). The opposite holds, too: if the pixel’s original brightness was b₁ = 0.7, and its new intended brightness is b₂ = 0.1, its effective new brightness is b₂′ = 0.7 + (0.1 – 0.7)⋅0.8 = 0.22, so too bright by 120%(!). This manifests as bright trails following bright objects moving over dark backgrounds (“white smear”).

The solution to black and white smear is to “overdrive” pixels from one frame to the next. If a pixel’s old brightness is b₁, and its intended new brightness is b₂, then instead of setting the pixel to b₂, it is set to an “overdrive brightness” bₒ, calculated by solving the smear formula for the drive value, with the intended brightness b₂ taking the place of the effective brightness b₂′: bₒ = (b₂ – b₁)/s + b₁.

Let’s work through the two examples I used above. First, from dark to bright: b₁ = 0.1, b₂ = 0.7, and s = 0.8. That yields bₒ = (0.7 – 0.1)/0.8 + 0.1 = 0.85. Plugging bₒ = 0.85 into the smear formula as b₂ yields b₂′ = 0.1 + (0.85 – 0.1)⋅0.8 = 0.7, as intended. Second, going from bright to dark: b₁ = 0.7, b₂ = 0.1, and s = 0.8 yields bₒ = (0.1 – 0.7)/0.8 + 0.7 = -0.05. Oops. In order to force a pixel that had brightness 0.7 in one frame to brightness 0.1 in the next frame, we would need to set the pixel’s brightness to a negative value. But that can’t be done, because pixel brightness values are limited to the interval [0, 1]. Ay, there’s the rub.
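To make the model concrete, here is a minimal Python sketch of the simplified smear model and the overdrive formula as given above (the function names are mine, not from any actual display driver):

    # Simplified memory model: a pixel going from brightness b1 to b2 only
    # covers a fraction s of the difference within one frame.
    def smear(b1, b2, s):
        return b1 + (b2 - b1) * s

    # Overdrive: the drive value bo such that smear(b1, bo, s) == b2.
    def overdrive(b1, b2, s):
        return (b2 - b1) / s + b1

    s = 0.8
    print(smear(0.1, 0.7, s))      # ≈ 0.58: black smear (too dark)
    print(smear(0.7, 0.1, s))      # ≈ 0.22: white smear (too bright)
    print(overdrive(0.1, 0.7, s))  # ≈ 0.85: a valid drive value
    print(overdrive(0.7, 0.1, s))  # ≈ -0.05: outside [0, 1], the rub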

This is a fundamental issue, but there’s a workaround. If the range of intended pixel brightness values is limited from the full range [0, 1] to the range [bₘᵢₙ, bₘₐₓ], such that going from bₘᵢₙ to bₘₐₓ yields an overdrive brightness bₒ = 1, and going from bₘₐₓ to bₘᵢₙ yields an overdrive brightness bₒ = 0, then black and white smear can be fully corrected. The price for this workaround is paid on both ends of the range: the high brightness values (bₘₐₓ, 1] can’t be used, meaning the display is a tad darker than physically possible (a negligible issue with bright LEDs), and the low brightness values [0, bₘᵢₙ) can’t be used, which is a bigger problem: it significantly reduces the contrast ratio, which is a big selling point of LED displays in the first place, and means that surfaces intended to be completely black, such as night skies, will show up as dark grey.

Let’s close by working out bₘᵢₙ and bₘₐₓ, which only depend on the smear factor s and can be derived from the two directions of the overdrive formula: 1 = (bₘₐₓ – bₘᵢₙ)/s + bₘᵢₙ and 0 = (bₘᵢₙ – bₘₐₓ)/s + bₘₐₓ. Solving yields bₘᵢₙ = (1 – s)/(2 – s) and bₘₐₓ = 1/(2 – s). Checking these results by calculating the overdrive values to go from bₘᵢₙ to bₘₐₓ, which should be 1, and from bₘₐₓ to bₘᵢₙ, which should be 0, is left as an exercise to the reader.

In a realistic example, using a smear factor of s = 0.9, the usable brightness range works out to [0.09, 0.91], meaning the darkest the display can show is 9% grey.
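And here is the reader’s exercise from above, done in the same sketch: checking that with bₘᵢₙ = (1 – s)/(2 – s) and bₘₐₓ = 1/(2 – s), the overdrive values at the ends of the range come out to exactly 0 and 1:

    s = 0.9
    b_min = (1.0 - s) / (2.0 - s)        # ≈ 0.0909
    b_max = 1.0 / (2.0 - s)              # ≈ 0.9091

    # Full-range overdrive in both directions hits exactly the limits of [0, 1]:
    print((b_max - b_min) / s + b_min)   # ≈ 1.0
    print((b_min - b_max) / s + b_max)   # ≈ 0.0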

Quantitative Comparison of VR Headset Fields of View

Although I’ve taken many through-the-lens pictures of several common VR headsets with a calibrated wide-angle camera, until recently I was still struggling with how to compare the resulting fields of view (FoV) quantitatively, how to put them in context, and how to visualize them appropriately. When trying to answer questions like “which VR headset has a bigger FoV?” or “by how much is headset A’s FoV bigger than headset B’s?” or “how does headset C’s FoV compare to the field of vision of the naked human eye?”, the basic question is: how does one even measure field of view, in a way that is fair and allows comparison across a wide range of possible sizes? Does one report a single angle, and how does one measure it? Across the “diagonal” of the field of view? What if the field of view is not a rectangle? Does one report a pair of angles, say horizontal⨉vertical? Again, what if the field of view is not a rectangle?

Then, if FoV is measured either as a single angle or a pair of angles, how does one compare different FoVs fairly? If one headset has 100° FoV, and another has 110°, does the latter show 10% more of a virtual 3D environment? What if one has 100°⨉100° and another has 110°⨉110°, does the latter show 21% more?

To find a reasonable answer, let’s go back to the basics: what does FoV actually measure? The general idea is that FoV measures how much of a virtual 3D environment a user can see at any given instant, meaning, without moving their head. A larger FoV value should mean that a user can see more, and, ideally, an FoV value that is twice as large should mean that a user can see twice as much.

Now, what does it mean that “something can be seen?” We can see something if light from that something reaches our eye, enters the eye through the cornea, pupil, and lens, and finally hits the retina. In principle, light travels towards our eyes from all possible directions, but only some of those directions end up on the retina due to various obstructions (we can’t see behind our heads, for example). So a reasonable measure of field of view (for one eye) would be the total number of different 3D directions from which light reaches that eye’s retina. The problem is that there is an infinite number of different directions from which light can arrive, so simple counting does not work.
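One standard way around that problem, offered here only as a sketch and not as this article’s full answer, is to measure a set of directions by its solid angle in steradians instead of counting them. For the idealized case of a symmetric rectangular field of view there is a closed-form expression (the function name and example numbers below are mine):

    import math

    # Solid angle (in steradians) of a symmetric rectangular viewing pyramid
    # with horizontal half-angle ah and vertical half-angle av, using the
    # standard closed-form expression for a rectangular pyramid:
    def rect_fov_solid_angle(ah_deg, av_deg):
        ah = math.radians(ah_deg)
        av = math.radians(av_deg)
        return 4.0 * math.asin(math.sin(ah) * math.sin(av))

    # Revisiting the earlier question: 100°⨉100° vs. 110°⨉110° total FoV.
    print(rect_fov_solid_angle(50.0, 50.0))  # ≈ 2.51 sr
    print(rect_fov_solid_angle(55.0, 55.0))  # ≈ 2.94 sr

On this measure, the 110°⨉110° field of view shows about 17% more of a virtual 3D environment, not the 21% a naive angle⨉angle comparison would suggest.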

Continue reading

Visual Simulation of the Resolution of VR Headsets

While I was writing up my article on the PlayStation VR headset’s optical properties yesterday, and specifically when I made the example images for the sub-section about sub-pixel layout comparing RGB Stripe and PenTile RGBG displays, it occurred to me that I could use those images to create a rough and simple simulator to visually evaluate the differences between VR headsets that have different resolutions and sub-pixel layouts.

The basic idea is straightforward: take a test image that has some pixel count, for example W⨉H = 640×360 as the initial low-resolution full-RGB picture seen in Figure 1. If you then blow up that image to fill a monitor that has the same aspect ratio (16:9) and some diagonal size D, the resolution of that image in terms of pixels per degree depends both on D and on the viewer’s distance Y from the monitor: the larger the ratio Y/D, the higher the image’s resolution as seen by the viewer. In detail, the formula for the distance Y that achieves a desired resolution R in pixels/° is:

Y = (D / sqrt(W*W + H*H))/(2*tan(1/(2*R)))

where W and H are in pixel units, R is in pixels/°, and D is in some arbitrary length unit (inch, meter, parsec,…). Y will end up being in the same unit as D.
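Translated into code, the formula might look like this; the one pitfall is that tan() in most math libraries expects radians, while R is given in pixels per degree, so the per-pixel half-angle 1/(2⋅R) has to be converted first (the 27″ monitor in the example is my assumption, not from the article):

    import math

    # Viewing distance Y (in the same unit as D) at which a W⨉H image, blown
    # up to a monitor of diagonal D, appears at R pixels/°. The per-pixel
    # half-angle 1/(2*R) is in degrees and is converted to radians for tan().
    def viewing_distance(D, W, H, R):
        pixel_size = D / math.sqrt(W * W + H * H)
        return pixel_size / (2.0 * math.tan(math.radians(1.0 / (2.0 * R))))

    # Example: simulating a ~15 pixels/° headset with the 640×360 test image
    # on a 27″ 16:9 monitor means sitting about 32″ away:
    print(viewing_distance(27.0, 640, 360, 15.0))  # ≈ 31.6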

If a viewer then positions one of their eyes at a distance Y from the center of the monitor and closes the other one, the resolution of the image on the monitor will be R. In other words, if R is the known resolution of some VR headset, the image on the monitor will appear at the same resolution as that VR headset.

There is one caveat: when looking at a flat monitor, the resolution of the displayed image increases away from the monitor’s center, while in a VR headset, the resolution generally decreases away from the center direction (see this article for reference). Meaning, for a correct evaluation, the viewer has to focus on the area in the center of the monitor. Unfortunately there is no easy way to simulate resolution drop-off using a flat monitor, at least not while also simulating sub-pixel layout.

Simulation Procedure

Here’s what you need to do: Continue reading

Field of View and Resolution of the PlayStation VR Headset

It has been a very long time since I did the original optical measurement of then-current VR headsets. I have owned a PlayStation VR headset (PSVR from now on) for almost a year now, and I finally got around to measuring its optical properties in the same way. I also developed a new camera calibration algorithm (that’s a topic for another post), meaning I am even more confident in my measurements now than I was then.

One approach to measuring the optical properties of a VR headset, which includes measuring its field of view, its resolution in pixels/°, and its lens distortion correction profile, is to take a series of pictures of the headset’s screen(s) through its lenses using a calibrated wide-angle camera. In this context, a calibrated camera is one where each image pixel’s horizontal and vertical angles away from the optical axis are precisely known.

If one then displays a test pattern that lets one identify a particular pixel on the screen, one can measure the viewer-relative angular position of that pixel in the camera image, which is all the information needed to generate the projection matrices and lens distortion correction formulas that are essential to high-quality VR rendering.
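As a rough illustration of the kind of data this procedure yields (everything below is a hypothetical sketch, with made-up names and numbers, not my actual measurement pipeline):

    import math

    # A calibrated camera gives, for each identifiable screen pixel, its
    # viewer-relative horizontal and vertical angles away from the optical
    # axis. Converting those angles to tangent-space coordinates yields the
    # correspondences that projection matrices and distortion-correction
    # fits are built from.
    def tangent_space(ax_deg, ay_deg):
        return (math.tan(math.radians(ax_deg)), math.tan(math.radians(ay_deg)))

    # Made-up correspondences of the form (screen pixel) -> (tangent space),
    # as they might be fed into a least-squares fit of a distortion model:
    correspondences = [
        ((480, 600), tangent_space(0.0, 0.0)),   # pixel on the optical axis
        ((700, 600), tangent_space(12.3, 0.0)),  # pixel off to the right
    ]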

Without further ado, here is a series of 7 images taken with the camera lens at increasing distances from the headset’s right lens (Figures 1-7, and yes, I forgot to clean my PSVR’s lens). The camera was carefully positioned and aligned such that it was sliding back along the lens’s optical axis, and looking straight ahead. The first image was captured with an eye relief value of 0mm, meaning that the camera lens was touching the headset’s lens. The rest of the images were captured with increasing eye relief values, or lens-lens distances, of 5mm, 10mm, 15mm, 20mm, 25mm, and 30mm: Continue reading

KeckCAVES On Mars, pt. Oh-I-lost-count

Last weekend, we had yet another professional film crew visiting us to shoot video about our involvement in NASA’s still on-going Mars Science Laboratory (MSL, aka Curiosity rover) mission. This time, they were here to film parts of an upcoming 90-minute special about Mars exploration for the National Geographic TV channel. Like last time, the “star” of the show was Dawn Sumner, faculty in the UC Davis Department of Earth and Planetary Sciences, one of the founding members of KeckCAVES, and member of the MSL science team.

Unlike last time, we did not film in the KeckCAVES facility itself (due to the demise of our CAVE), but in the UC Davis ModLab. ModLab is part of an entirely different unit, UC Davis’s Digital Humanities initiative, but we are working closely with them on VR development. They have a nice VR environment consisting of two HTC Vive headsets and a large 4.2m x 2.4m screen with a ceiling-mounted ultra-short throw projector (see Figure 1), their VR hardware runs our VR software, and they were kind enough to let us use their space.

Figure 1: Preparation for filming in UC Davis’s ModLab, showing its 4.2m x 2.4m front-projected screen and ceiling-mounted ultra-short throw projector, and two Lighthouse base stations.

The fundamental idea here was to use several 3D models, created or reconstructed from real data sent back either by satellites orbiting Mars or by the Curiosity rover itself, as backdrops to let Dawn talk about the goals and results of the MSL mission, and her personal involvement in it. Figure 1 shows a backdrop in the real sense of the word, i.e., a 2D picture (a photo taken by Curiosity’s mast camera) with someone standing in front of it, but that was not the idea here (we didn’t end up using that photo). Instead, Dawn was talking while wearing a VR headset and interacting with the 3D models she was talking about, with a secondary view of the virtual world, from the point of view of the film camera, shown on the big screen behind her. More on that later. Continue reading

Welcome New VR Users!

Apparently, there were good sales numbers for VR equipment prior to the holiday season, and therefore a host of new VR users are coming in just about now. This meta-post collects a bunch of stuff I’ve written (or presented) in the past that might be of interest to some of those new users. These questions/answers are not hardware-specific, meaning they apply to any current-generation VR system (Oculus Rift, HTC Vive, all the Windows Mixed Reality headsets, PlayStation VR, …), and go beyond basic tech questions such as “how do I plug this in, install drivers, …”.

There is one other issue for which I do not have a full article, but it’s quite important for new users: VR sickness (aka motion sickness, simulator sickness, …). Today’s VR headsets, at least the ones doing full head tracking (that means Rift/Vive et al., and not Gear VR, Oculus Go, Google Cardboard, …) should not cause VR sickness per se. These days, it is primarily caused by artificial locomotion in games or applications, as I explain in the second presentation I linked above.

The important message is: do not attempt to fight through VR sickness! If you try to stomach it out, it will only get worse. Stop using VR the moment you feel the first symptoms, take a long break, and then try again if you want to continue with the application/game that made you sick. If you try to power through repeatedly, your body might learn to associate sickness with VR, and that might cause you to get sick even when merely thinking about VR, or smelling the headset, or similar triggers. Just don’t do it.

That’s about it; now go ahead and enjoy your shiny new VR systems!

Want to Know More?

Here are a couple of other, more hardware-specific, topics:

The Display Resolution of Head-mounted Displays, Revisited

I wrote an article earlier this year in which I looked closely at the physical display resolution of VR headsets, measured in pixels/degree, and how that resolution changes across the field of view of a headset due to non-linear effects from tangent-space rendering and lens distortion. Unfortunately, back then I only did the analysis for the HTC Vive. In the meantime I got access to an Oculus Rift, and was able to extract the necessary raw data from it — after spending some significant effort, more on that later.

With these new results, it is time for a follow-up post where I re-create the HTC Vive graphs from the previous article with a new method that makes them easier to read, and where I compare display properties between the two main PC-based headsets. Without further ado, here are the goods.

HTC Vive

The first two figures, 1 and 2, show display resolution in pixels/° along one horizontal and one vertical cross-section through the lens center of my Vive’s right display.

Figure 1: Display resolution in pixels/° along a horizontal line through the right display’s lens center of an HTC Vive.

Figure 2: Display resolution in pixels/° along a vertical line through the right display’s lens center of an HTC Vive.

Continue reading

The Display Resolution of Head-mounted Displays

What is the real, physical, display resolution of my VR headset?

I have written a long article about the optical properties of (then-)current head-mounted displays, one about projection and distortion in wide-FoV HMDs, and another one about measuring the effective resolution of head-mounted displays, but in neither one of those have I looked into the actual display resolution, in terms of hard pixels, of those headsets. So it’s about time.

The short answer is, of course, that it depends on your model of headset. But if you happen to have an HTC Vive, then have a look at the graphs in Figures 1 and 2 (the other headsets behave in the same way, but the actual numbers differ). Those figures show display resolution, in pixels/°, along two lines (horizontal and vertical, respectively) going through the center of the right lens of my own Vive. The red, green, and blue curves show resolution for the red, green, and blue primary colors, respectively, determined this time not by my own measurements, but by analyzing the display calibration data that is measured for each individual headset at the factory and then stored in its firmware.

Figure 1: Resolution in pixels/° along a horizontal line through my Vive’s right lens center, for each of its 1080 horizontal pixels, for the three primary colors (red, green, and blue).

Figure 2: Resolution in pixels/° along a vertical line through my Vive’s right lens center, for each of its 1200 vertical pixels, for the three primary colors (red, green, and blue).

At this point you might be wondering why those graphs look so strange, but for that you’ll have to read the long answer. Before going into that, I want to throw out a single number: at the exact center of my Vive’s right lens (at pixel 492, 602), the resolution for the green color channel is 11.42 pixels/°, in both the horizontal and vertical directions. If you wanted to quote a single resolution number for a headset, that’s the one I would go with, because it’s what you get when you look at something directly ahead and far away. However, as Figures 1 and 2 clearly show, no single number can tell the whole story.

And now for the long answer. Buckle in, Trigonometry and Calculus ahead. Continue reading